* [PATCH RFC 0/7] Mem_event and mem_access for ARM
@ 2014-08-22  9:30 Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
                   ` (6 more replies)
  0 siblings, 7 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

The ARM virtualization extensions provide 2-stage paging, a mechanism
similar to Intel's EPT, which can be used to trace the memory accesses
performed by guest systems. This series moves the mem_access and mem_event
codebase into common Xen code, then sets up the necessary infrastructure in
the ARM code to deliver events on R/W/X traps. Lastly, we turn on the
compilation of the xen-access test tool.
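
To illustrate how a consumer drives this interface, here is a minimal
listener sketch along the lines of xen-access. It is a sketch only,
assuming the current (Xen 4.4-era) libxc API; wait_for_event(),
get_request() and put_response() are hypothetical helpers wrapping the
public/io/ring.h back-ring macros, and all error handling is omitted:

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>
    #include <xenctrl.h>
    #include <xen/mem_event.h>

    /* Listen for write violations on every page of a domain. */
    static void monitor_writes(xc_interface *xch, domid_t domid)
    {
        uint32_t port;
        void *ring;

        /* Map the shared ring page and get the notification channel. */
        ring = xc_mem_access_enable(xch, domid, &port);

        /* ~0ull selects the domain's default access type; xen-access
         * issues a second call over [0, max_gpfn) to convert pages that
         * are already mapped. */
        xc_set_mem_access(xch, domid, XENMEM_access_r, ~0ull, 0);

        for ( ;; )
        {
            mem_event_request_t req;
            mem_event_response_t rsp;

            wait_for_event(port);      /* hypothetical: block on evtchn */
            get_request(ring, &req);   /* hypothetical ring wrapper */

            printf("violation at gfn 0x%"PRIx64" by vcpu %u\n",
                   req.gfn, req.vcpu_id);

            memset(&rsp, 0, sizeof(rsp));
            rsp.vcpu_id = req.vcpu_id; /* lets Xen unpause that vCPU */
            rsp.flags = req.flags;
            put_response(ring, &rsp);  /* hypothetical ring wrapper */
            xc_mem_access_resume(xch, domid);
        }
    }

The same loop should work unchanged on ARM once this series is in place,
which is the point of moving the ring machinery into common code.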

This is an early RFC to solicit feedback on the series. Thus far the
series has only been boot-tested on an Arndale board.

The series is also available at:
https://github.com/tklengyel/xen/tree/arm_memaccess_rfc

Tamas K Lengyel (7):
  xen: Relocate mem_access and mem_event into common.
  xen/mem_event: Clean out superfluous whitespace
  xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  tools/libxc: Allocate magic page for mem access on ARM
  xen/arm: Data abort exception (R/W) mem_events.
  xen/arm: Instruction prefetch abort (X) mem_event handling
  tools/tests: Enable xen-access on ARM

 tools/libxc/xc_dom_arm.c            |   6 +-
 tools/tests/xen-access/Makefile     |   4 +-
 tools/tests/xen-access/xen-access.c |  53 ++-
 xen/arch/arm/domctl.c               |  36 +-
 xen/arch/arm/mm.c                   |  18 +-
 xen/arch/arm/p2m.c                  | 412 ++++++++++++++++++---
 xen/arch/arm/traps.c                |  51 ++-
 xen/arch/x86/domctl.c               |   2 +-
 xen/arch/x86/hvm/hvm.c              |  61 +--
 xen/arch/x86/mm/Makefile            |   2 -
 xen/arch/x86/mm/hap/nested_ept.c    |   2 +-
 xen/arch/x86/mm/hap/nested_hap.c    |   2 +-
 xen/arch/x86/mm/mem_access.c        | 133 -------
 xen/arch/x86/mm/mem_event.c         | 705 -----------------------------------
 xen/arch/x86/mm/mem_paging.c        |   2 +-
 xen/arch/x86/mm/mem_sharing.c       |   2 +-
 xen/arch/x86/mm/p2m-pod.c           |   2 +-
 xen/arch/x86/mm/p2m-pt.c            |   2 +-
 xen/arch/x86/mm/p2m.c               |   2 +-
 xen/arch/x86/x86_64/compat/mm.c     |   4 +-
 xen/arch/x86/x86_64/mm.c            |   4 +-
 xen/common/Makefile                 |   2 +
 xen/common/domain.c                 |   1 +
 xen/common/mem_access.c             | 135 +++++++
 xen/common/mem_event.c              | 716 ++++++++++++++++++++++++++++++++++++
 xen/common/memory.c                 |  62 ++++
 xen/include/asm-arm/mm.h            |   1 -
 xen/include/asm-arm/p2m.h           | 109 ++++--
 xen/include/asm-arm/processor.h     |  10 +-
 xen/include/asm-x86/hvm/hvm.h       |   6 -
 xen/include/asm-x86/mem_access.h    |  39 --
 xen/include/asm-x86/mem_event.h     |  82 -----
 xen/include/asm-x86/mm.h            |   2 -
 xen/include/xen/mem_access.h        |  39 ++
 xen/include/xen/mem_event.h         |  98 +++++
 xen/include/xen/mm.h                |   6 +
 xen/include/xen/sched.h             |   1 -
 xen/include/xsm/dummy.h             |  24 +-
 xen/include/xsm/xsm.h               |  25 +-
 xen/xsm/dummy.c                     |   4 +-
 40 files changed, 1685 insertions(+), 1182 deletions(-)
 delete mode 100644 xen/arch/x86/mm/mem_access.c
 delete mode 100644 xen/arch/x86/mm/mem_event.c
 create mode 100644 xen/common/mem_access.c
 create mode 100644 xen/common/mem_event.c
 delete mode 100644 xen/include/asm-x86/mem_access.h
 delete mode 100644 xen/include/asm-x86/mem_event.h
 create mode 100644 xen/include/xen/mem_access.h
 create mode 100644 xen/include/xen/mem_event.h

-- 
2.0.1


* [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-22  9:30 [PATCH RFC 0/7] Mem_event and mem_access for ARM Tamas K Lengyel
@ 2014-08-22  9:30 ` Tamas K Lengyel
  2014-08-25 17:19   ` Andres Lagar Cavilla
  2014-08-26 13:34   ` Jan Beulich
  2014-08-22  9:30 ` [PATCH RFC 2/7] xen/mem_event: Clean out superfluous whitespace Tamas K Lengyel
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

In preparation for adding ARM LPAE mem_event support, relocate mem_access
and mem_event into common Xen code. This patch makes no functional changes
on the x86 side; on ARM, the mem_event and mem_access functions are just
placeholder stubs.
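
For illustration, the ARM placeholders have roughly the following shape
(a hedged sketch only: the signatures mirror the x86 prototypes invoked
from the common mem_access_memop() below, and the actual stubs in the
patch may be placed and typed differently):

    /* Sketch of an asm-arm placeholder: mem_access is not wired up on
     * ARM yet, so the common code paths fail gracefully until later
     * patches in the series implement R/W/X trapping in the ARM p2m. */
    long p2m_set_mem_access(struct domain *d, unsigned long pfn,
                            uint32_t nr, uint32_t start, uint32_t mask,
                            xenmem_access_t access)
    {
        return -ENOSYS;
    }

    int p2m_get_mem_access(struct domain *d, unsigned long pfn,
                           xenmem_access_t *access)
    {
        return -ENOSYS;
    }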

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/x86/domctl.c            |   2 +-
 xen/arch/x86/hvm/hvm.c           |  61 +---
 xen/arch/x86/mm/Makefile         |   2 -
 xen/arch/x86/mm/hap/nested_ept.c |   2 +-
 xen/arch/x86/mm/hap/nested_hap.c |   2 +-
 xen/arch/x86/mm/mem_access.c     | 133 --------
 xen/arch/x86/mm/mem_event.c      | 705 --------------------------------------
 xen/arch/x86/mm/mem_paging.c     |   2 +-
 xen/arch/x86/mm/mem_sharing.c    |   2 +-
 xen/arch/x86/mm/p2m-pod.c        |   2 +-
 xen/arch/x86/mm/p2m-pt.c         |   2 +-
 xen/arch/x86/mm/p2m.c            |   2 +-
 xen/arch/x86/x86_64/compat/mm.c  |   4 +-
 xen/arch/x86/x86_64/mm.c         |   4 +-
 xen/common/Makefile              |   2 +
 xen/common/domain.c              |   1 +
 xen/common/mem_access.c          | 137 ++++++++
 xen/common/mem_event.c           | 707 +++++++++++++++++++++++++++++++++++++++
 xen/common/memory.c              |  62 ++++
 xen/include/asm-arm/mm.h         |   1 -
 xen/include/asm-x86/hvm/hvm.h    |   6 -
 xen/include/asm-x86/mem_access.h |  39 ---
 xen/include/asm-x86/mem_event.h  |  82 -----
 xen/include/asm-x86/mm.h         |   2 -
 xen/include/xen/mem_access.h     |  58 ++++
 xen/include/xen/mem_event.h      | 141 ++++++++
 xen/include/xen/mm.h             |   6 +
 27 files changed, 1128 insertions(+), 1041 deletions(-)
 delete mode 100644 xen/arch/x86/mm/mem_access.c
 delete mode 100644 xen/arch/x86/mm/mem_event.c
 create mode 100644 xen/common/mem_access.c
 create mode 100644 xen/common/mem_event.c
 delete mode 100644 xen/include/asm-x86/mem_access.h
 delete mode 100644 xen/include/asm-x86/mem_event.h
 create mode 100644 xen/include/xen/mem_access.h
 create mode 100644 xen/include/xen/mem_event.h

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index d1517c4..3aeb79d 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -30,7 +30,7 @@
 #include <xen/hypercall.h> /* for arch_do_domctl */
 #include <xsm/xsm.h>
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d40c48e..14ce761 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -63,8 +63,8 @@
 #include <public/hvm/ioreq.h>
 #include <public/version.h>
 #include <public/memory.h>
-#include <asm/mem_event.h>
-#include <asm/mem_access.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
 #include <public/mem_event.h>
 #include <xen/rangeset.h>
 #include <public/arch-x86/cpuid.h>
@@ -486,19 +486,6 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
     clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
 }
 
-void destroy_ring_for_helper(
-    void **_va, struct page_info *page)
-{
-    void *va = *_va;
-
-    if ( va != NULL )
-    {
-        unmap_domain_page_global(va);
-        put_page_and_type(page);
-        *_va = NULL;
-    }
-}
-
 static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -506,50 +493,6 @@ static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
     destroy_ring_for_helper(&iorp->va, iorp->page);
 }
 
-int prepare_ring_for_helper(
-    struct domain *d, unsigned long gmfn, struct page_info **_page,
-    void **_va)
-{
-    struct page_info *page;
-    p2m_type_t p2mt;
-    void *va;
-
-    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
-    if ( p2m_is_paging(p2mt) )
-    {
-        if ( page )
-            put_page(page);
-        p2m_mem_paging_populate(d, gmfn);
-        return -ENOENT;
-    }
-    if ( p2m_is_shared(p2mt) )
-    {
-        if ( page )
-            put_page(page);
-        return -ENOENT;
-    }
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    va = __map_domain_page_global(page);
-    if ( va == NULL )
-    {
-        put_page_and_type(page);
-        return -ENOMEM;
-    }
-
-    *_va = va;
-    *_page = page;
-
-    return 0;
-}
-
 static int hvm_map_ioreq_page(
     struct hvm_ioreq_server *s, bool_t buf, unsigned long gmfn)
 {
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 73dcdf4..ed4b1f8 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -6,10 +6,8 @@ obj-y += p2m.o p2m-pt.o p2m-ept.o p2m-pod.o
 obj-y += guest_walk_2.o
 obj-y += guest_walk_3.o
 obj-$(x86_64) += guest_walk_4.o
-obj-$(x86_64) += mem_event.o
 obj-$(x86_64) += mem_paging.o
 obj-$(x86_64) += mem_sharing.o
-obj-$(x86_64) += mem_access.o
 
 guest_walk_%.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 0d044bc..704bb66 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 137a87c..f6becd4 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -23,7 +23,7 @@
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
deleted file mode 100644
index e8465a5..0000000
--- a/xen/arch/x86/mm/mem_access.c
+++ /dev/null
@@ -1,133 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_access.c
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#include <xen/sched.h>
-#include <xen/guest_access.h>
-#include <xen/hypercall.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <xsm/xsm.h>
-
-
-int mem_access_memop(unsigned long cmd,
-                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
-{
-    long rc;
-    xen_mem_access_op_t mao;
-    struct domain *d;
-
-    if ( copy_from_guest(&mao, arg, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
-    if ( rc )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
-    if ( rc )
-        goto out;
-
-    rc = -ENODEV;
-    if ( unlikely(!d->mem_event->access.ring_page) )
-        goto out;
-
-    switch ( mao.op )
-    {
-    case XENMEM_access_op_resume:
-        p2m_mem_access_resume(d);
-        rc = 0;
-        break;
-
-    case XENMEM_access_op_set_access:
-    {
-        unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
-
-        rc = -EINVAL;
-        if ( (mao.pfn != ~0ull) &&
-             (mao.nr < start_iter ||
-              ((mao.pfn + mao.nr - 1) < mao.pfn) ||
-              ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
-            break;
-
-        rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
-                                MEMOP_CMD_MASK, mao.access);
-        if ( rc > 0 )
-        {
-            ASSERT(!(rc & MEMOP_CMD_MASK));
-            rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
-                                               XENMEM_access_op | rc, arg);
-        }
-        break;
-    }
-
-    case XENMEM_access_op_get_access:
-    {
-        xenmem_access_t access;
-
-        rc = -EINVAL;
-        if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
-            break;
-
-        rc = p2m_get_mem_access(d, mao.pfn, &access);
-        if ( rc != 0 )
-            break;
-
-        mao.access = access;
-        rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
-
-        break;
-    }
-
-    default:
-        rc = -ENOSYS;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
-{
-    int rc = mem_event_claim_slot(d, &d->mem_event->access);
-    if ( rc < 0 )
-        return rc;
-
-    mem_event_put_request(d, &d->mem_event->access, req);
-
-    return 0;
-} 
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
deleted file mode 100644
index ba7e71e..0000000
--- a/xen/arch/x86/mm/mem_event.c
+++ /dev/null
@@ -1,705 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_event.c
- *
- * Memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#include <asm/domain.h>
-#include <xen/event.h>
-#include <xen/wait.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <asm/mem_paging.h>
-#include <asm/mem_access.h>
-#include <asm/mem_sharing.h>
-#include <xsm/xsm.h>
-
-/* for public/io/ring.h macros */
-#define xen_mb()   mb()
-#define xen_rmb()  rmb()
-#define xen_wmb()  wmb()
-
-#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
-#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
-#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
-
-static int mem_event_enable(
-    struct domain *d,
-    xen_domctl_mem_event_op_t *mec,
-    struct mem_event_domain *med,
-    int pause_flag,
-    int param,
-    xen_event_channel_notification_t notification_fn)
-{
-    int rc;
-    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
-
-    /* Only one helper at a time. If the helper crashed,
-     * the ring is in an undefined state and so is the guest.
-     */
-    if ( med->ring_page )
-        return -EBUSY;
-
-    /* The parameter defaults to zero, and it should be 
-     * set to something */
-    if ( ring_gfn == 0 )
-        return -ENOSYS;
-
-    mem_event_ring_lock_init(med);
-    mem_event_ring_lock(med);
-
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
-                                    &med->ring_page);
-    if ( rc < 0 )
-        goto err;
-
-    /* Set the number of currently blocked vCPUs to 0. */
-    med->blocked = 0;
-
-    /* Allocate event channel */
-    rc = alloc_unbound_xen_event_channel(d->vcpu[0],
-                                         current->domain->domain_id,
-                                         notification_fn);
-    if ( rc < 0 )
-        goto err;
-
-    med->xen_port = mec->port = rc;
-
-    /* Prepare ring buffer */
-    FRONT_RING_INIT(&med->front_ring,
-                    (mem_event_sring_t *)med->ring_page,
-                    PAGE_SIZE);
-
-    /* Save the pause flag for this particular ring. */
-    med->pause_flag = pause_flag;
-
-    /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&med->wq);
-
-    mem_event_ring_unlock(med);
-    return 0;
-
- err:
-    destroy_ring_for_helper(&med->ring_page, 
-                            med->ring_pg_struct);
-    mem_event_ring_unlock(med);
-
-    return rc;
-}
-
-static unsigned int mem_event_ring_available(struct mem_event_domain *med)
-{
-    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
-    avail_req -= med->target_producers;
-    avail_req -= med->foreign_producers;
-
-    BUG_ON(avail_req < 0);
-
-    return avail_req;
-}
-
-/*
- * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
- * ring. These vCPUs were paused on their way out after placing an event,
- * but need to be resumed where the ring is capable of processing at least
- * one event from them.
- */
-static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
-{
-    struct vcpu *v;
-    int online = d->max_vcpus;
-    unsigned int avail_req = mem_event_ring_available(med);
-
-    if ( avail_req == 0 || med->blocked == 0 )
-        return;
-
-    /*
-     * We ensure that we only have vCPUs online if there are enough free slots
-     * for their memory events to be processed.  This will ensure that no
-     * memory events are lost (due to the fact that certain types of events
-     * cannot be replayed, we need to ensure that there is space in the ring
-     * for when they are hit).
-     * See comment below in mem_event_put_request().
-     */
-    for_each_vcpu ( d, v )
-        if ( test_bit(med->pause_flag, &v->pause_flags) )
-            online--;
-
-    ASSERT(online == (d->max_vcpus - med->blocked));
-
-    /* We remember which vcpu last woke up to avoid scanning always linearly
-     * from zero and starving higher-numbered vcpus under high load */
-    if ( d->vcpu )
-    {
-        int i, j, k;
-
-        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
-        {
-            k = i % d->max_vcpus;
-            v = d->vcpu[k];
-            if ( !v )
-                continue;
-
-            if ( !(med->blocked) || online >= avail_req )
-               break;
-
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                online++;
-                med->blocked--;
-                med->last_vcpu_wake_up = k;
-            }
-        }
-    }
-}
-
-/*
- * In the event that a vCPU attempted to place an event in the ring and
- * was unable to do so, it is queued on a wait queue.  These are woken as
- * needed, and take precedence over the blocked vCPUs.
- */
-static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
-{
-    unsigned int avail_req = mem_event_ring_available(med);
-
-    if ( avail_req > 0 )
-        wake_up_nr(&med->wq, avail_req);
-}
-
-/*
- * mem_event_wake() will wakeup all vcpus waiting for the ring to
- * become available.  If we have queued vCPUs, they get top priority. We
- * are guaranteed that they will go through code paths that will eventually
- * call mem_event_wake() again, ensuring that any blocked vCPUs will get
- * unpaused once all the queued vCPUs have made it through.
- */
-void mem_event_wake(struct domain *d, struct mem_event_domain *med)
-{
-    if (!list_empty(&med->wq.list))
-        mem_event_wake_queued(d, med);
-    else
-        mem_event_wake_blocked(d, med);
-}
-
-static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
-{
-    if ( med->ring_page )
-    {
-        struct vcpu *v;
-
-        mem_event_ring_lock(med);
-
-        if ( !list_empty(&med->wq.list) )
-        {
-            mem_event_ring_unlock(med);
-            return -EBUSY;
-        }
-
-        /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d->vcpu[0], med->xen_port);
-
-        /* Unblock all vCPUs */
-        for_each_vcpu ( d, v )
-        {
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                med->blocked--;
-            }
-        }
-
-        destroy_ring_for_helper(&med->ring_page, 
-                                med->ring_pg_struct);
-        mem_event_ring_unlock(med);
-    }
-
-    return 0;
-}
-
-static inline void mem_event_release_slot(struct domain *d,
-                                          struct mem_event_domain *med)
-{
-    /* Update the accounting */
-    if ( current->domain == d )
-        med->target_producers--;
-    else
-        med->foreign_producers--;
-
-    /* Kick any waiters */
-    mem_event_wake(d, med);
-}
-
-/*
- * mem_event_mark_and_pause() tags vcpu and put it to sleep.
- * The vcpu will resume execution in mem_event_wake_waiters().
- */
-void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
-{
-    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
-    {
-        vcpu_pause_nosync(v);
-        med->blocked++;
-    }
-}
-
-/*
- * This must be preceded by a call to claim_slot(), and is guaranteed to
- * succeed.  As a side-effect however, the vCPU may be paused if the ring is
- * overly full and its continued execution would cause stalling and excessive
- * waiting.  The vCPU will be automatically unpaused when the ring clears.
- */
-void mem_event_put_request(struct domain *d,
-                           struct mem_event_domain *med,
-                           mem_event_request_t *req)
-{
-    mem_event_front_ring_t *front_ring;
-    int free_req;
-    unsigned int avail_req;
-    RING_IDX req_prod;
-
-    if ( current->domain != d )
-    {
-        req->flags |= MEM_EVENT_FLAG_FOREIGN;
-        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
-    }
-
-    mem_event_ring_lock(med);
-
-    /* Due to the reservations, this step must succeed. */
-    front_ring = &med->front_ring;
-    free_req = RING_FREE_REQUESTS(front_ring);
-    ASSERT(free_req > 0);
-
-    /* Copy request */
-    req_prod = front_ring->req_prod_pvt;
-    memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
-    req_prod++;
-
-    /* Update ring */
-    front_ring->req_prod_pvt = req_prod;
-    RING_PUSH_REQUESTS(front_ring);
-
-    /* We've actually *used* our reservation, so release the slot. */
-    mem_event_release_slot(d, med);
-
-    /* Give this vCPU a black eye if necessary, on the way out.
-     * See the comments above wake_blocked() for more information
-     * on how this mechanism works to avoid waiting. */
-    avail_req = mem_event_ring_available(med);
-    if( current->domain == d && avail_req < d->max_vcpus )
-        mem_event_mark_and_pause(current, med);
-
-    mem_event_ring_unlock(med);
-
-    notify_via_xen_event_channel(d, med->xen_port);
-}
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
-{
-    mem_event_front_ring_t *front_ring;
-    RING_IDX rsp_cons;
-
-    mem_event_ring_lock(med);
-
-    front_ring = &med->front_ring;
-    rsp_cons = front_ring->rsp_cons;
-
-    if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
-    {
-        mem_event_ring_unlock(med);
-        return 0;
-    }
-
-    /* Copy response */
-    memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
-    rsp_cons++;
-
-    /* Update ring */
-    front_ring->rsp_cons = rsp_cons;
-    front_ring->sring->rsp_event = rsp_cons + 1;
-
-    /* Kick any waiters -- since we've just consumed an event,
-     * there may be additional space available in the ring. */
-    mem_event_wake(d, med);
-
-    mem_event_ring_unlock(med);
-
-    return 1;
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
-{
-    mem_event_ring_lock(med);
-    mem_event_release_slot(d, med);
-    mem_event_ring_unlock(med);
-}
-
-static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
-{
-    unsigned int avail_req;
-
-    if ( !med->ring_page )
-        return -ENOSYS;
-
-    mem_event_ring_lock(med);
-
-    avail_req = mem_event_ring_available(med);
-    if ( avail_req == 0 )
-    {
-        mem_event_ring_unlock(med);
-        return -EBUSY;
-    }
-
-    if ( !foreign )
-        med->target_producers++;
-    else
-        med->foreign_producers++;
-
-    mem_event_ring_unlock(med);
-
-    return 0;
-}
-
-/* Simple try_grab wrapper for use in the wait_event() macro. */
-static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
-{
-    *rc = mem_event_grab_slot(med, 0);
-    return *rc;
-}
-
-/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
-static int mem_event_wait_slot(struct mem_event_domain *med)
-{
-    int rc = -EBUSY;
-    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
-    return rc;
-}
-
-bool_t mem_event_check_ring(struct mem_event_domain *med)
-{
-    return (med->ring_page != NULL);
-}
-
-/*
- * Determines whether or not the current vCPU belongs to the target domain,
- * and calls the appropriate wait function.  If it is a guest vCPU, then we
- * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
- * this function will always return 0 for a guest.  For a non-guest, we check
- * for space and return -EBUSY if the ring is not available.
- *
- * Return codes: -ENOSYS: the ring is not yet configured
- *               -EBUSY: the ring is busy
- *               0: a spot has been reserved
- *
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep)
-{
-    if ( (current->domain == d) && allow_sleep )
-        return mem_event_wait_slot(med);
-    else
-        return mem_event_grab_slot(med, (current->domain != d));
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_paging_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
-        p2m_mem_paging_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_access_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
-        p2m_mem_access_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_sharing_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
-        mem_sharing_sharing_resume(v->domain);
-}
-
-int do_mem_event_op(int op, uint32_t domain, void *arg)
-{
-    int ret;
-    struct domain *d;
-
-    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
-    if ( ret )
-        return ret;
-
-    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
-    if ( ret )
-        goto out;
-
-    switch (op)
-    {
-        case XENMEM_paging_op:
-            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
-            break;
-        case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
-            break;
-        default:
-            ret = -ENOSYS;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return ret;
-}
-
-/* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d)
-{
-    if ( d->mem_event->paging.ring_page ) {
-        /* Destroying the wait queue head means waking up all
-         * queued vcpus. This will drain the list, allowing
-         * the disable routine to complete. It will also drop
-         * all domain refs the wait-queued vcpus are holding.
-         * Finally, because this code path involves previously
-         * pausing the domain (domain_kill), unpausing the 
-         * vcpus causes no harm. */
-        destroy_waitqueue_head(&d->mem_event->paging.wq);
-        (void)mem_event_disable(d, &d->mem_event->paging);
-    }
-    if ( d->mem_event->access.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->access.wq);
-        (void)mem_event_disable(d, &d->mem_event->access);
-    }
-    if ( d->mem_event->share.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->share.wq);
-        (void)mem_event_disable(d, &d->mem_event->share);
-    }
-}
-
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
-    int rc;
-
-    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
-    if ( rc )
-        return rc;
-
-    if ( unlikely(d == current->domain) )
-    {
-        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
-        return -EINVAL;
-    }
-
-    if ( unlikely(d->is_dying) )
-    {
-        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
-                 d->domain_id);
-        return 0;
-    }
-
-    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
-    {
-        gdprintk(XENLOG_INFO,
-                 "Memory event op on a domain (%u) with no vcpus\n",
-                 d->domain_id);
-        return -EINVAL;
-    }
-
-    rc = -ENOSYS;
-
-    switch ( mec->mode )
-    {
-    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
-    {
-        struct mem_event_domain *med = &d->mem_event->paging;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
-        {
-            struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-            rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
-                break;
-
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* No paging if iommu is used */
-            rc = -EMLINK;
-            if ( unlikely(need_iommu(d)) )
-                break;
-
-            rc = -EXDEV;
-            /* Disallow paging in a PoD guest */
-            if ( p2m->pod.entry_count )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
-                                    HVM_PARAM_PAGING_RING_PFN,
-                                    mem_paging_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
-    {
-        struct mem_event_domain *med = &d->mem_event->access;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
-        {
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* Currently only EPT is supported */
-            if ( !cpu_has_vmx )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_access, 
-                                    HVM_PARAM_ACCESS_RING_PFN,
-                                    mem_access_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
-    {
-        struct mem_event_domain *med = &d->mem_event->share;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
-        {
-            rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
-                break;
-
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
-                                    HVM_PARAM_SHARING_RING_PFN,
-                                    mem_sharing_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    default:
-        rc = -ENOSYS;
-    }
-
-    return rc;
-}
-
-void mem_event_vcpu_pause(struct vcpu *v)
-{
-    ASSERT(v == current);
-
-    atomic_inc(&v->mem_event_pause_count);
-    vcpu_pause_nosync(v);
-}
-
-void mem_event_vcpu_unpause(struct vcpu *v)
-{
-    int old, new, prev = v->mem_event_pause_count.counter;
-
-    /* All unpause requests as a result of toolstack responses.  Prevent
-     * underflow of the vcpu pause count. */
-    do
-    {
-        old = prev;
-        new = old - 1;
-
-        if ( new < 0 )
-        {
-            printk(XENLOG_G_WARNING
-                   "%pv mem_event: Too many unpause attempts\n", v);
-            return;
-        }
-
-        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
-    } while ( prev != old );
-
-    vcpu_unpause(v);
-}
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 235776d..65f6a3d 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -22,7 +22,7 @@
 
 
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 
 
 int mem_paging_memop(struct domain *d, xen_mem_event_op_t *mec)
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 79188b9..fa845fd 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -30,7 +30,7 @@
 #include <asm/page.h>
 #include <asm/string.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <asm/atomic.h>
 #include <xen/rcupdate.h>
 #include <asm/event.h>
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index bd4c7c8..881259a 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -26,7 +26,7 @@
 #include <asm/p2m.h>
 #include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 085ab6f..46231cf 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -30,7 +30,7 @@
 #include <asm/paging.h>
 #include <asm/p2m.h>
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index bca9f0f..5190cde 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -30,7 +30,7 @@
 #include <asm/p2m.h>
 #include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 69c6195..203c6b4 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -2,9 +2,9 @@
 #include <xen/multicall.h>
 #include <compat/memory.h>
 #include <compat/xen.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
+#include <xen/mem_access.h>
 
 int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 4937f9a..1f9702d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -35,9 +35,9 @@
 #include <asm/msr.h>
 #include <asm/setup.h>
 #include <asm/numa.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
+#include <xen/mem_access.h>
 #include <public/memory.h>
 
 /* Parameters for PFN/MADDR compression. */
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 3683ae3..a1b6128 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -51,6 +51,8 @@ obj-y += tmem_xen.o
 obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += lzo.o
+obj-y += mem_access.o
+obj-y += mem_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1952070..6f51311 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -15,6 +15,7 @@
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
+#include <xen/mem_event.h>
 #include <xen/time.h>
 #include <xen/console.h>
 #include <xen/softirq.h>
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
new file mode 100644
index 0000000..84acdf9
--- /dev/null
+++ b/xen/common/mem_access.c
@@ -0,0 +1,137 @@
+/******************************************************************************
+ * mem_access.c
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#include <xen/sched.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <asm/p2m.h>
+#include <public/memory.h>
+#include <xen/mem_event.h>
+#include <xsm/xsm.h>
+
+#ifdef CONFIG_X86
+
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+    long rc;
+    xen_mem_access_op_t mao;
+    struct domain *d;
+
+    if ( copy_from_guest(&mao, arg, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
+    if ( rc )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    if ( rc )
+        goto out;
+
+    rc = -ENODEV;
+    if ( unlikely(!d->mem_event->access.ring_page) )
+        goto out;
+
+    switch ( mao.op )
+    {
+    case XENMEM_access_op_resume:
+        p2m_mem_access_resume(d);
+        rc = 0;
+        break;
+
+    case XENMEM_access_op_set_access:
+    {
+        unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
+
+        rc = -EINVAL;
+        if ( (mao.pfn != ~0ull) &&
+             (mao.nr < start_iter ||
+              ((mao.pfn + mao.nr - 1) < mao.pfn) ||
+              ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
+            break;
+
+        rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
+                                MEMOP_CMD_MASK, mao.access);
+        if ( rc > 0 )
+        {
+            ASSERT(!(rc & MEMOP_CMD_MASK));
+            rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
+                                               XENMEM_access_op | rc, arg);
+        }
+        break;
+    }
+
+    case XENMEM_access_op_get_access:
+    {
+        xenmem_access_t access;
+
+        rc = -EINVAL;
+        if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
+            break;
+
+        rc = p2m_get_mem_access(d, mao.pfn, &access);
+        if ( rc != 0 )
+            break;
+
+        mao.access = access;
+        rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
+
+        break;
+    }
+
+    default:
+        rc = -ENOSYS;
+        break;
+    }
+
+ out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+    int rc = mem_event_claim_slot(d, &d->mem_event->access);
+    if ( rc < 0 )
+        return rc;
+
+    mem_event_put_request(d, &d->mem_event->access, req);
+
+    return 0;
+}
+
+#endif /* CONFIG_X86 */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
new file mode 100644
index 0000000..38c6697
--- /dev/null
+++ b/xen/common/mem_event.c
@@ -0,0 +1,707 @@
+/******************************************************************************
+ * mem_event.c
+ *
+ * Memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifdef CONFIG_X86
+
+#include <asm/domain.h>
+#include <xen/event.h>
+#include <xen/wait.h>
+#include <asm/p2m.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <asm/mem_paging.h>
+#include <asm/mem_sharing.h>
+#include <xsm/xsm.h>
+
+/* for public/io/ring.h macros */
+#define xen_mb()   mb()
+#define xen_rmb()  rmb()
+#define xen_wmb()  wmb()
+
+#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
+#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
+#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
+
+static int mem_event_enable(
+    struct domain *d,
+    xen_domctl_mem_event_op_t *mec,
+    struct mem_event_domain *med,
+    int pause_flag,
+    int param,
+    xen_event_channel_notification_t notification_fn)
+{
+    int rc;
+    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
+
+    /* Only one helper at a time. If the helper crashed,
+     * the ring is in an undefined state and so is the guest.
+     */
+    if ( med->ring_page )
+        return -EBUSY;
+
+    /* The parameter defaults to zero, and it should be 
+     * set to something */
+    if ( ring_gfn == 0 )
+        return -ENOSYS;
+
+    mem_event_ring_lock_init(med);
+    mem_event_ring_lock(med);
+
+    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
+                                    &med->ring_page);
+    if ( rc < 0 )
+        goto err;
+
+    /* Set the number of currently blocked vCPUs to 0. */
+    med->blocked = 0;
+
+    /* Allocate event channel */
+    rc = alloc_unbound_xen_event_channel(d->vcpu[0],
+                                         current->domain->domain_id,
+                                         notification_fn);
+    if ( rc < 0 )
+        goto err;
+
+    med->xen_port = mec->port = rc;
+
+    /* Prepare ring buffer */
+    FRONT_RING_INIT(&med->front_ring,
+                    (mem_event_sring_t *)med->ring_page,
+                    PAGE_SIZE);
+
+    /* Save the pause flag for this particular ring. */
+    med->pause_flag = pause_flag;
+
+    /* Initialize the last-chance wait queue. */
+    init_waitqueue_head(&med->wq);
+
+    mem_event_ring_unlock(med);
+    return 0;
+
+ err:
+    destroy_ring_for_helper(&med->ring_page, 
+                            med->ring_pg_struct);
+    mem_event_ring_unlock(med);
+
+    return rc;
+}
+
+static unsigned int mem_event_ring_available(struct mem_event_domain *med)
+{
+    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
+    avail_req -= med->target_producers;
+    avail_req -= med->foreign_producers;
+
+    BUG_ON(avail_req < 0);
+
+    return avail_req;
+}
+
+/*
+ * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
+ * ring. These vCPUs were paused on their way out after placing an event,
+ * but need to be resumed where the ring is capable of processing at least
+ * one event from them.
+ */
+static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
+{
+    struct vcpu *v;
+    int online = d->max_vcpus;
+    unsigned int avail_req = mem_event_ring_available(med);
+
+    if ( avail_req == 0 || med->blocked == 0 )
+        return;
+
+    /*
+     * We ensure that we only have vCPUs online if there are enough free slots
+     * for their memory events to be processed.  This will ensure that no
+     * memory events are lost (due to the fact that certain types of events
+     * cannot be replayed, we need to ensure that there is space in the ring
+     * for when they are hit).
+     * See comment below in mem_event_put_request().
+     */
+    for_each_vcpu ( d, v )
+        if ( test_bit(med->pause_flag, &v->pause_flags) )
+            online--;
+
+    ASSERT(online == (d->max_vcpus - med->blocked));
+
+    /* We remember which vcpu last woke up to avoid scanning always linearly
+     * from zero and starving higher-numbered vcpus under high load */
+    if ( d->vcpu )
+    {
+        int i, j, k;
+
+        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
+        {
+            k = i % d->max_vcpus;
+            v = d->vcpu[k];
+            if ( !v )
+                continue;
+
+            if ( !(med->blocked) || online >= avail_req )
+               break;
+
+            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            {
+                vcpu_unpause(v);
+                online++;
+                med->blocked--;
+                med->last_vcpu_wake_up = k;
+            }
+        }
+    }
+}
+
+/*
+ * In the event that a vCPU attempted to place an event in the ring and
+ * was unable to do so, it is queued on a wait queue.  These are woken as
+ * needed, and take precedence over the blocked vCPUs.
+ */
+static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
+{
+    unsigned int avail_req = mem_event_ring_available(med);
+
+    if ( avail_req > 0 )
+        wake_up_nr(&med->wq, avail_req);
+}
+
+/*
+ * mem_event_wake() will wakeup all vcpus waiting for the ring to
+ * become available.  If we have queued vCPUs, they get top priority. We
+ * are guaranteed that they will go through code paths that will eventually
+ * call mem_event_wake() again, ensuring that any blocked vCPUs will get
+ * unpaused once all the queued vCPUs have made it through.
+ */
+void mem_event_wake(struct domain *d, struct mem_event_domain *med)
+{
+    if (!list_empty(&med->wq.list))
+        mem_event_wake_queued(d, med);
+    else
+        mem_event_wake_blocked(d, med);
+}
+
+static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
+{
+    if ( med->ring_page )
+    {
+        struct vcpu *v;
+
+        mem_event_ring_lock(med);
+
+        if ( !list_empty(&med->wq.list) )
+        {
+            mem_event_ring_unlock(med);
+            return -EBUSY;
+        }
+
+        /* Free domU's event channel and leave the other one unbound */
+        free_xen_event_channel(d->vcpu[0], med->xen_port);
+
+        /* Unblock all vCPUs */
+        for_each_vcpu ( d, v )
+        {
+            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            {
+                vcpu_unpause(v);
+                med->blocked--;
+            }
+        }
+
+        destroy_ring_for_helper(&med->ring_page, 
+                                med->ring_pg_struct);
+        mem_event_ring_unlock(med);
+    }
+
+    return 0;
+}
+
+static inline void mem_event_release_slot(struct domain *d,
+                                          struct mem_event_domain *med)
+{
+    /* Update the accounting */
+    if ( current->domain == d )
+        med->target_producers--;
+    else
+        med->foreign_producers--;
+
+    /* Kick any waiters */
+    mem_event_wake(d, med);
+}
+
+/*
+ * mem_event_mark_and_pause() tags vcpu and put it to sleep.
+ * The vcpu will resume execution in mem_event_wake_waiters().
+ */
+void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
+{
+    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
+    {
+        vcpu_pause_nosync(v);
+        med->blocked++;
+    }
+}
+
+/*
+ * This must be preceded by a call to claim_slot(), and is guaranteed to
+ * succeed.  As a side-effect however, the vCPU may be paused if the ring is
+ * overly full and its continued execution would cause stalling and excessive
+ * waiting.  The vCPU will be automatically unpaused when the ring clears.
+ */
+void mem_event_put_request(struct domain *d,
+                           struct mem_event_domain *med,
+                           mem_event_request_t *req)
+{
+    mem_event_front_ring_t *front_ring;
+    int free_req;
+    unsigned int avail_req;
+    RING_IDX req_prod;
+
+    if ( current->domain != d )
+    {
+        req->flags |= MEM_EVENT_FLAG_FOREIGN;
+        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+    }
+
+    mem_event_ring_lock(med);
+
+    /* Due to the reservations, this step must succeed. */
+    front_ring = &med->front_ring;
+    free_req = RING_FREE_REQUESTS(front_ring);
+    ASSERT(free_req > 0);
+
+    /* Copy request */
+    req_prod = front_ring->req_prod_pvt;
+    memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
+    req_prod++;
+
+    /* Update ring */
+    front_ring->req_prod_pvt = req_prod;
+    RING_PUSH_REQUESTS(front_ring);
+
+    /* We've actually *used* our reservation, so release the slot. */
+    mem_event_release_slot(d, med);
+
+    /* Give this vCPU a black eye if necessary, on the way out.
+     * See the comments above wake_blocked() for more information
+     * on how this mechanism works to avoid waiting. */
+    avail_req = mem_event_ring_available(med);
+    if( current->domain == d && avail_req < d->max_vcpus )
+        mem_event_mark_and_pause(current, med);
+
+    mem_event_ring_unlock(med);
+
+    notify_via_xen_event_channel(d, med->xen_port);
+}
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
+{
+    mem_event_front_ring_t *front_ring;
+    RING_IDX rsp_cons;
+
+    mem_event_ring_lock(med);
+
+    front_ring = &med->front_ring;
+    rsp_cons = front_ring->rsp_cons;
+
+    if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
+    {
+        mem_event_ring_unlock(med);
+        return 0;
+    }
+
+    /* Copy response */
+    memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
+    rsp_cons++;
+
+    /* Update ring */
+    front_ring->rsp_cons = rsp_cons;
+    front_ring->sring->rsp_event = rsp_cons + 1;
+
+    /* Kick any waiters -- since we've just consumed an event,
+     * there may be additional space available in the ring. */
+    mem_event_wake(d, med);
+
+    mem_event_ring_unlock(med);
+
+    return 1;
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{
+    mem_event_ring_lock(med);
+    mem_event_release_slot(d, med);
+    mem_event_ring_unlock(med);
+}
+
+static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
+{
+    unsigned int avail_req;
+
+    if ( !med->ring_page )
+        return -ENOSYS;
+
+    mem_event_ring_lock(med);
+
+    avail_req = mem_event_ring_available(med);
+    if ( avail_req == 0 )
+    {
+        mem_event_ring_unlock(med);
+        return -EBUSY;
+    }
+
+    if ( !foreign )
+        med->target_producers++;
+    else
+        med->foreign_producers++;
+
+    mem_event_ring_unlock(med);
+
+    return 0;
+}
+
+/* Simple try_grab wrapper for use in the wait_event() macro. */
+static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
+{
+    *rc = mem_event_grab_slot(med, 0);
+    return *rc;
+}
+
+/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
+static int mem_event_wait_slot(struct mem_event_domain *med)
+{
+    int rc = -EBUSY;
+    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
+    return rc;
+}
+
+bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+    return (med->ring_page != NULL);
+}
+
+/*
+ * Determines whether or not the current vCPU belongs to the target domain,
+ * and calls the appropriate wait function.  If it is a guest vCPU, then we
+ * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
+ * this function will always return 0 for a guest.  For a non-guest, we check
+ * for space and return -EBUSY if the ring is not available.
+ *
+ * Return codes: -ENOSYS: the ring is not yet configured
+ *               -EBUSY: the ring is busy
+ *               0: a spot has been reserved
+ *
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep)
+{
+    if ( (current->domain == d) && allow_sleep )
+        return mem_event_wait_slot(med);
+    else
+        return mem_event_grab_slot(med, (current->domain != d));
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_access_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
+        p2m_mem_access_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_paging_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
+        p2m_mem_paging_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_sharing_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
+        mem_sharing_sharing_resume(v->domain);
+}
+
+int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+    int ret;
+    struct domain *d;
+
+    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
+    if ( ret )
+        return ret;
+
+    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
+    if ( ret )
+        goto out;
+
+    switch (op)
+    {
+        case XENMEM_paging_op:
+            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
+            break;
+        case XENMEM_sharing_op:
+            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
+            break;
+        default:
+            ret = -ENOSYS;
+    }
+
+ out:
+    rcu_unlock_domain(d);
+    return ret;
+}
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d)
+{
+    if ( d->mem_event->paging.ring_page ) {
+        /* Destroying the wait queue head means waking up all
+         * queued vcpus. This will drain the list, allowing
+         * the disable routine to complete. It will also drop
+         * all domain refs the wait-queued vcpus are holding.
+         * Finally, because this code path involves previously
+         * pausing the domain (domain_kill), unpausing the 
+         * vcpus causes no harm. */
+        destroy_waitqueue_head(&d->mem_event->paging.wq);
+        (void)mem_event_disable(d, &d->mem_event->paging);
+    }
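+    /* The access and sharing rings are torn down the same way. */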
+    if ( d->mem_event->access.ring_page ) {
+        destroy_waitqueue_head(&d->mem_event->access.wq);
+        (void)mem_event_disable(d, &d->mem_event->access);
+    }
+    if ( d->mem_event->share.ring_page ) {
+        destroy_waitqueue_head(&d->mem_event->share.wq);
+        (void)mem_event_disable(d, &d->mem_event->share);
+    }
+}
+
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    int rc;
+
+    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
+    if ( rc )
+        return rc;
+
+    if ( unlikely(d == current->domain) )
+    {
+        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
+                 d->domain_id);
+        return 0;
+    }
+
+    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
+    {
+        gdprintk(XENLOG_INFO,
+                 "Memory event op on a domain (%u) with no vcpus\n",
+                 d->domain_id);
+        return -EINVAL;
+    }
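+
+    /* (Rationale for the check above: mem_event_enable() binds the ring's
+     * event channel to d->vcpu[0].) */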
+
+    rc = -ENOSYS;
+
+    switch ( mec->mode )
+    {
+    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
+    {
+        struct mem_event_domain *med = &d->mem_event->access;
+        rc = -EINVAL;
+
+        switch( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
+        {
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            /* Currently only EPT is supported */
+            if ( !cpu_has_vmx )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_access,
+                                    HVM_PARAM_ACCESS_RING_PFN,
+                                    mem_access_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
+    {
+        struct mem_event_domain *med = &d->mem_event->paging;
+        rc = -EINVAL;
+
+        switch( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
+        {
+            struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+            rc = -EOPNOTSUPP;
+            /* pvh fixme: p2m_is_foreign types need addressing */
+            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+                break;
+
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            /* No paging if iommu is used */
+            rc = -EMLINK;
+            if ( unlikely(need_iommu(d)) )
+                break;
+
+            rc = -EXDEV;
+            /* Disallow paging in a PoD guest */
+            if ( p2m->pod.entry_count )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
+                                    HVM_PARAM_PAGING_RING_PFN,
+                                    mem_paging_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
+    {
+        struct mem_event_domain *med = &d->mem_event->share;
+        rc = -EINVAL;
+
+        switch( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
+        {
+            rc = -EOPNOTSUPP;
+            /* pvh fixme: p2m_is_foreign types need addressing */
+            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+                break;
+
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
+                                    HVM_PARAM_SHARING_RING_PFN,
+                                    mem_sharing_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    default:
+        rc = -ENOSYS;
+    }
+
+    return rc;
+}
+
+void mem_event_vcpu_pause(struct vcpu *v)
+{
+    ASSERT(v == current);
+
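+    /* The _nosync variant is safe here: v is current, so it pauses itself
+     * as soon as it heads back towards guest context. */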
+    atomic_inc(&v->mem_event_pause_count);
+    vcpu_pause_nosync(v);
+}
+
+void mem_event_vcpu_unpause(struct vcpu *v)
+{
+    int old, new, prev = v->mem_event_pause_count.counter;
+
+    /* All unpause requests as a result of toolstack responses.  Prevent
+     * underflow of the vcpu pause count. */
+    do
+    {
+        old = prev;
+        new = old - 1;
+
+        if ( new < 0 )
+        {
+            printk(XENLOG_G_WARNING
+                   "%pv mem_event: Too many unpause attempts\n", v);
+            return;
+        }
+
+        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
+    } while ( prev != old );
+
+    vcpu_unpause(v);
+}
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index c2dd31b..711aaef 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -977,6 +977,68 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return rc;
 }
 
+void destroy_ring_for_helper(
+    void **_va, struct page_info *page)
+{
+    void *va = *_va;
+
+    if ( va != NULL )
+    {
+        unmap_domain_page_global(va);
+        put_page_and_type(page);
+        *_va = NULL;
+    }
+}
+
+int prepare_ring_for_helper(
+    struct domain *d, unsigned long gmfn, struct page_info **_page,
+    void **_va)
+{
+    struct page_info *page;
+    p2m_type_t p2mt;
+    void *va;
+
+    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
+
+#ifdef CONFIG_X86
+    if ( p2m_is_paging(p2mt) )
+    {
+        if ( page )
+            put_page(page);
+        p2m_mem_paging_populate(d, gmfn);
+        return -ENOENT;
+    }
+    if ( p2m_is_shared(p2mt) )
+    {
+        if ( page )
+            put_page(page);
+        return -ENOENT;
+    }
+#endif
+
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    va = __map_domain_page_global(page);
+    if ( va == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    *_va = va;
+    *_page = page;
+
+    return 0;
+}
+
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 9fa80a4..7fc3b97 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -301,7 +301,6 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
     })
 
 static inline void put_gfn(struct domain *d, unsigned long gfn) {}
-static inline void mem_event_cleanup(struct domain *d) {}
 static inline int relinquish_shared_pages(struct domain *d)
 {
     return 0;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 0ebd478..b07400e 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -226,12 +226,6 @@ int hvm_vcpu_cacheattr_init(struct vcpu *v);
 void hvm_vcpu_cacheattr_destroy(struct vcpu *v);
 void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip);
 
-/* Prepare/destroy a ring for a dom0 helper. Helper with talk
- * with Xen on behalf of this hvm domain. */
-int prepare_ring_for_helper(struct domain *d, unsigned long gmfn, 
-                            struct page_info **_page, void **_va);
-void destroy_ring_for_helper(void **_va, struct page_info *page);
-
 bool_t hvm_send_assist_req(ioreq_t *p);
 void hvm_broadcast_assist_req(ioreq_t *p);
 
diff --git a/xen/include/asm-x86/mem_access.h b/xen/include/asm-x86/mem_access.h
deleted file mode 100644
index 5c7c5fd..0000000
--- a/xen/include/asm-x86/mem_access.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_access.h
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-#ifndef _XEN_ASM_MEM_ACCESS_H
-#define _XEN_ASM_MEM_ACCESS_H
-
-int mem_access_memop(unsigned long cmd,
-                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
-int mem_access_send_req(struct domain *d, mem_event_request_t *req);
-
-#endif /* _XEN_ASM_MEM_ACCESS_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
deleted file mode 100644
index ed4481a..0000000
--- a/xen/include/asm-x86/mem_event.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_event.h
- *
- * Common interface for memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#ifndef __MEM_EVENT_H__
-#define __MEM_EVENT_H__
-
-/* Returns whether a ring has been set up */
-bool_t mem_event_check_ring(struct mem_event_domain *med);
-
-/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
- * available space and the caller is a foreign domain. If the guest itself
- * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
- * that the ring does not lose future events. 
- *
- * However, the allow_sleep flag can be set to false in cases in which it is ok
- * to lose future events, and thus -EBUSY can be returned to guest vcpus
- * (handle with care!). 
- *
- * In general, you must follow a claim_slot() call with either put_request() or
- * cancel_slot(), both of which are guaranteed to
- * succeed. 
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d, 
-                                        struct mem_event_domain *med)
-{
-    return __mem_event_claim_slot(d, med, 1);
-}
-
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
-{
-    return __mem_event_claim_slot(d, med, 0);
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
-
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req);
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp);
-
-int do_mem_event_op(int op, uint32_t domain, void *arg);
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
-
-void mem_event_vcpu_pause(struct vcpu *v);
-void mem_event_vcpu_unpause(struct vcpu *v);
-
-#endif /* __MEM_EVENT_H__ */
-
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index d253117..bafd28c 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -590,8 +590,6 @@ unsigned int domain_clamp_alloc_bitsize(struct domain *d, unsigned int bits);
 
 unsigned long domain_get_maximum_gpfn(struct domain *d);
 
-void mem_event_cleanup(struct domain *d);
-
 extern struct domain *dom_xen, *dom_io, *dom_cow;	/* for vmcoreinfo */
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
new file mode 100644
index 0000000..ded5441
--- /dev/null
+++ b/xen/include/xen/mem_access.h
@@ -0,0 +1,58 @@
+/******************************************************************************
+ * include/xen/mem_access.h
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef _XEN_ASM_MEM_ACCESS_H
+#define _XEN_ASM_MEM_ACCESS_H
+
+#ifdef CONFIG_X86
+
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
+int mem_access_send_req(struct domain *d, mem_event_request_t *req);
+
+#else
+
+static inline
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+    return -ENOSYS;
+}
+
+static inline
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+    return -ENOSYS;
+}
+
+#endif /* CONFIG_X86 */
+
+#endif /* _XEN_ASM_MEM_ACCESS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
new file mode 100644
index 0000000..a28d453
--- /dev/null
+++ b/xen/include/xen/mem_event.h
@@ -0,0 +1,141 @@
+/******************************************************************************
+ * include/xen/mem_event.h
+ *
+ * Common interface for memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#ifndef __MEM_EVENT_H__
+#define __MEM_EVENT_H__
+
+#ifdef CONFIG_X86
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d);
+
+/* Returns whether a ring has been set up */
+bool_t mem_event_check_ring(struct mem_event_domain *med);
+
+/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
+ * available space and the caller is a foreign domain. If the guest itself
+ * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
+ * that the ring does not lose future events.
+ *
+ * However, the allow_sleep flag can be set to false in cases in which it is
+ * ok to lose future events, and thus -EBUSY can be returned to guest vcpus
+ * (handle with care!).
+ *
+ * In general, you must follow a claim_slot() call with either put_request()
+ * or cancel_slot(), both of which are guaranteed to succeed.
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep);
+static inline int mem_event_claim_slot(struct domain *d, 
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 1);
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 0);
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
+
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+                            mem_event_request_t *req);
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+                           mem_event_response_t *rsp);
+
+int do_mem_event_op(int op, uint32_t domain, void *arg);
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+
+void mem_event_vcpu_pause(struct vcpu *v);
+void mem_event_vcpu_unpause(struct vcpu *v);
+
+#else
+
+static inline void mem_event_cleanup(struct domain *d) {}
+
+static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+    return 0;
+}
+
+static inline int mem_event_claim_slot(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return -ENOSYS;
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return -ENOSYS;
+}
+
+static inline
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{}
+
+static inline
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+                            mem_event_request_t *req)
+{}
+
+static inline
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+                           mem_event_response_t *rsp)
+{
+    return -ENOSYS;
+}
+
+static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+    return -ENOSYS;
+}
+
+static inline
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    return -ENOSYS;
+}
+
+static inline void mem_event_vcpu_pause(struct vcpu *v) {}
+static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
+
+#endif /* CONFIG_X86 */
+
+#endif /* __MEM_EVENT_H__ */
+
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index b183189..7c0efc7 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -371,4 +371,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn);
 /* TRUE if the whole page at @mfn is of the requested RAM type(s) above. */
 int page_is_ram_type(unsigned long mfn, unsigned long mem_type);
 
+/* Prepare/destroy a ring for a dom0 helper. The helper will talk
+ * with Xen on behalf of this domain. */
+int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
+                            struct page_info **_page, void **_va);
+void destroy_ring_for_helper(void **_va, struct page_info *page);
+
 #endif /* __XEN_MM_H__ */
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC 2/7] xen/mem_event: Clean out superflous white-spaces
  2014-08-22  9:30 [PATCH RFC 0/7] Mem_event and mem_access for ARM Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
@ 2014-08-22  9:30 ` Tamas K Lengyel
  2014-08-25 17:20   ` Andres Lagar Cavilla
  2014-08-26 13:35   ` Jan Beulich
  2014-08-22  9:30 ` [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/common/mem_event.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 38c6697..a94ddf6 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -58,7 +58,7 @@ static int mem_event_enable(
     if ( med->ring_page )
         return -EBUSY;
 
-    /* The parameter defaults to zero, and it should be 
+    /* The parameter defaults to zero, and it should be
      * set to something */
     if ( ring_gfn == 0 )
         return -ENOSYS;
@@ -66,7 +66,7 @@ static int mem_event_enable(
     mem_event_ring_lock_init(med);
     mem_event_ring_lock(med);
 
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
+    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
                                     &med->ring_page);
     if ( rc < 0 )
         goto err;
@@ -98,7 +98,7 @@ static int mem_event_enable(
     return 0;
 
  err:
-    destroy_ring_for_helper(&med->ring_page, 
+    destroy_ring_for_helper(&med->ring_page,
                             med->ring_pg_struct);
     mem_event_ring_unlock(med);
 
@@ -227,7 +227,7 @@ static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
             }
         }
 
-        destroy_ring_for_helper(&med->ring_page, 
+        destroy_ring_for_helper(&med->ring_page,
                                 med->ring_pg_struct);
         mem_event_ring_unlock(med);
     }
@@ -480,7 +480,7 @@ void mem_event_cleanup(struct domain *d)
          * the disable routine to complete. It will also drop
          * all domain refs the wait-queued vcpus are holding.
          * Finally, because this code path involves previously
-         * pausing the domain (domain_kill), unpausing the 
+         * pausing the domain (domain_kill), unpausing the
          * vcpus causes no harm. */
         destroy_waitqueue_head(&d->mem_event->paging.wq);
         (void)mem_event_disable(d, &d->mem_event->paging);
@@ -598,7 +598,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( p2m->pod.entry_count )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
                                     HVM_PARAM_PAGING_RING_PFN,
                                     mem_paging_notification);
         }
@@ -618,7 +618,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
+    case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
     {
         struct mem_event_domain *med = &d->mem_event->share;
         rc = -EINVAL;
@@ -637,7 +637,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( !hap_enabled(d) )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
                                     HVM_PARAM_SHARING_RING_PFN,
                                     mem_sharing_notification);
         }
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-08-22  9:30 [PATCH RFC 0/7] Mem_event and mem_access for ARM Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 2/7] xen/mem_event: Clean out superflous white-spaces Tamas K Lengyel
@ 2014-08-22  9:30 ` Tamas K Lengyel
  2014-08-25 17:25   ` Andres Lagar Cavilla
  2014-08-26 13:51   ` Jan Beulich
  2014-08-22  9:30 ` [PATCH RFC 4/7] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

This patch sets up the infrastructure to support mem_access and mem_event
on ARM and turns on their compilation. It defines the required XSM
functions, handles the domctl copyback, and adds the required p2m types
and stub functions.
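
For reference, the p2m_access_t values introduced here form a 3-bit R/W/X
bitfield (bit 0 = read, bit 1 = write, bit 2 = execute), so for instance
p2m_access_rx == (p2m_access_r | p2m_access_x). A minimal sketch of a
hypothetical helper (not part of this series) that tests one permission bit:

    static inline bool_t access_allows(p2m_access_t a, p2m_access_t bit)
    {
        /* 'bit' is one of p2m_access_r, p2m_access_w or p2m_access_x. */
        return (a & bit) == bit;
    }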

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/arm/domctl.c        |  36 ++++++++++++++--
 xen/arch/arm/mm.c            |  18 ++++++--
 xen/arch/arm/p2m.c           |   5 +++
 xen/common/mem_access.c      |   6 +--
 xen/common/mem_event.c       |  15 +++++--
 xen/include/asm-arm/p2m.h    | 100 ++++++++++++++++++++++++++++++++++---------
 xen/include/xen/mem_access.h |  19 --------
 xen/include/xen/mem_event.h  |  53 +++--------------------
 xen/include/xen/sched.h      |   1 -
 xen/include/xsm/dummy.h      |  24 +++++------
 xen/include/xsm/xsm.h        |  25 +++++------
 xen/xsm/dummy.c              |   4 +-
 12 files changed, 178 insertions(+), 128 deletions(-)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 45974e7..bb0b8d3 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,10 +11,17 @@
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <asm/guest_access.h>
+#include <xen/mem_event.h>
+#include <public/mem_event.h>
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+
+    long ret;
+    bool_t copyback = 0;
+
     switch ( domctl->cmd )
     {
     case XEN_DOMCTL_cacheflush:
@@ -23,17 +30,38 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         unsigned long e = s + domctl->u.cacheflush.nr_pfns;
 
         if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
-            return -EINVAL;
+        {
+            ret = -EINVAL;
+            break;
+        }
 
         if ( e < s )
-            return -EINVAL;
+        {
+            ret = -EINVAL;
+            break;
+        }
 
-        return p2m_cache_flush(d, s, e);
+        ret = p2m_cache_flush(d, s, e);
     }
+    break;
+
+    case XEN_DOMCTL_mem_event_op:
+    {
+        ret = mem_event_domctl(d, &domctl->u.mem_event_op,
+                              guest_handle_cast(u_domctl, void));
+        copyback = 1;
+    }
+    break;
 
     default:
-        return subarch_do_domctl(domctl, d, u_domctl);
+        ret = subarch_do_domctl(domctl, d, u_domctl);
+        break;
     }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }
 
 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..cd04dec 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -35,6 +35,9 @@
 #include <asm/current.h>
 #include <asm/flushtlb.h>
 #include <public/memory.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <xen/hypercall.h>
 #include <xen/sched.h>
 #include <xen/vmap.h>
 #include <xsm/xsm.h>
@@ -1111,18 +1114,27 @@ int xenmem_add_to_physmap_one(
 
 long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    switch ( op )
+
+    long rc;
+
+    switch ( op & MEMOP_CMD_MASK )
     {
     /* XXX: memsharing not working yet */
     case XENMEM_get_sharing_shared_pages:
     case XENMEM_get_sharing_freed_pages:
         return 0;
+    case XENMEM_access_op:
+    {
+        rc = mem_access_memop(op, guest_handle_cast(arg, xen_mem_access_op_t));
+        break;
+    }
 
     default:
-        return -ENOSYS;
+        rc = -ENOSYS;
+        break;
     }
 
-    return 0;
+    return rc;
 }
 
 struct domain *page_get_owner_and_reference(struct page_info *page)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 143199b..0ca0d2f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -10,6 +10,9 @@
 #include <asm/event.h>
 #include <asm/hardirq.h>
 #include <asm/page.h>
+#include <xen/mem_event.h>
+#include <public/mem_event.h>
+#include <xen/mem_access.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -999,6 +1002,8 @@ int p2m_init(struct domain *d)
     p2m->max_mapped_gfn = 0;
     p2m->lowest_mapped_gfn = ULONG_MAX;
 
+    p2m->default_access = p2m_access_rwx;
+
 err:
     spin_unlock(&p2m->lock);
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 84acdf9..6bb9cf4 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -29,8 +29,6 @@
 #include <xen/mem_event.h>
 #include <xsm/xsm.h>
 
-#ifdef CONFIG_X86
-
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
 {
@@ -45,9 +43,11 @@ int mem_access_memop(unsigned long cmd,
     if ( rc )
         return rc;
 
+#ifdef CONFIG_X86
     rc = -EINVAL;
     if ( !is_hvm_domain(d) )
         goto out;
+#endif
 
     rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
     if ( rc )
@@ -125,8 +125,6 @@ int mem_access_send_req(struct domain *d, mem_event_request_t *req)
     return 0;
 }
 
-#endif /* CONFIG_X86 */
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index a94ddf6..2a91928 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -20,16 +20,19 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
-#ifdef CONFIG_X86
-
+#include <xen/sched.h>
 #include <asm/domain.h>
 #include <xen/event.h>
 #include <xen/wait.h>
 #include <asm/p2m.h>
 #include <xen/mem_event.h>
 #include <xen/mem_access.h>
+
+#ifdef CONFIG_X86
 #include <asm/mem_paging.h>
 #include <asm/mem_sharing.h>
+#endif
+
 #include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
@@ -427,6 +430,7 @@ static void mem_access_notification(struct vcpu *v, unsigned int port)
         p2m_mem_access_resume(v->domain);
 }
 
+#ifdef CONFIG_X86
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
@@ -470,6 +474,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
     rcu_unlock_domain(d);
     return ret;
 }
+#endif
 
 /* Clean up on domain destruction */
 void mem_event_cleanup(struct domain *d)
@@ -538,6 +543,8 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         {
         case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
         {
+
+#ifdef CONFIG_X86
             rc = -ENODEV;
             /* Only HAP is supported */
             if ( !hap_enabled(d) )
@@ -546,6 +553,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             /* Currently only EPT is supported */
             if ( !cpu_has_vmx )
                 break;
+#endif
 
             rc = mem_event_enable(d, mec, med, _VPF_mem_access,
                                     HVM_PARAM_ACCESS_RING_PFN,
@@ -567,6 +575,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
+#ifdef CONFIG_X86
     case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
     {
         struct mem_event_domain *med = &d->mem_event->paging;
@@ -656,6 +665,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
     }
     break;
+#endif
 
     default:
         rc = -ENOSYS;
@@ -695,7 +705,6 @@ void mem_event_vcpu_unpause(struct vcpu *v)
 
     vcpu_unpause(v);
 }
-#endif
 
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 06c93a0..f3d1f33 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,9 +2,55 @@
 #define _XEN_P2M_H
 
 #include <xen/mm.h>
+#include <public/memory.h>
+#include <public/mem_event.h>
 
 struct domain;
 
+/* List of possible types for each page in the p2m entry.
+ * The number of bits available per page in the pte for this purpose is 4,
+ * so only 16 types are possible. If we run out of values in the future,
+ * it's possible to use higher values for pseudo-types and not store them
+ * in the p2m entry.
+ */
+typedef enum {
+    p2m_invalid = 0,    /* Nothing mapped here */
+    p2m_ram_rw,         /* Normal read/write guest RAM */
+    p2m_ram_ro,         /* Read-only; writes are silently dropped */
+    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
+    p2m_map_foreign,    /* Ram pages from foreign domain */
+    p2m_grant_map_rw,   /* Read/write grant mapping */
+    p2m_grant_map_ro,   /* Read-only grant mapping */
+    /* The types below are only used to decide the page attribute in the P2M */
+    p2m_iommu_map_rw,   /* Read/write iommu mapping */
+    p2m_iommu_map_ro,   /* Read-only iommu mapping */
+    p2m_max_real_type,  /* Types after this won't be stored in the p2m */
+} p2m_type_t;
+
+/*
+ * Additional access types, which are used to further restrict
+ * the permissions given by the p2m_type_t memory type.  Violations
+ * caused by p2m_access_t restrictions are sent to the mem_event
+ * interface.
+ *
+ * The access permissions are soft state: when any ambiguous change of page
+ * type or use occurs, or when pages are flushed, swapped, or at any other
+ * convenient time, the access permissions can get reset to the p2m_domain
+ * default.
+ */
+typedef enum {
+    p2m_access_n     = 0, /* No access permissions allowed */
+    p2m_access_r     = 1,
+    p2m_access_w     = 2, 
+    p2m_access_rw    = 3,
+    p2m_access_x     = 4, 
+    p2m_access_rx    = 5,
+    p2m_access_wx    = 6, 
+    p2m_access_rwx   = 7
+
+    /* NOTE: Assumed to be only 4 bits right now */
+} p2m_access_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
     /* Lock that protects updates to the p2m */
@@ -38,27 +84,17 @@ struct p2m_domain {
          * at each p2m tree level. */
         unsigned long shattered[4];
     } stats;
-};
 
-/* List of possible type for each page in the p2m entry.
- * The number of available bit per page in the pte for this purpose is 4 bits.
- * So it's possible to only have 16 fields. If we run out of value in the
- * future, it's possible to use higher value for pseudo-type and don't store
- * them in the p2m entry.
- */
-typedef enum {
-    p2m_invalid = 0,    /* Nothing mapped here */
-    p2m_ram_rw,         /* Normal read/write guest RAM */
-    p2m_ram_ro,         /* Read-only; writes are silently dropped */
-    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
-    p2m_map_foreign,    /* Ram pages from foreign domain */
-    p2m_grant_map_rw,   /* Read/write grant mapping */
-    p2m_grant_map_ro,   /* Read-only grant mapping */
-    /* The types below are only used to decide the page attribute in the P2M */
-    p2m_iommu_map_rw,   /* Read/write iommu mapping */
-    p2m_iommu_map_ro,   /* Read-only iommu mapping */
-    p2m_max_real_type,  /* Types after this won't be store in the p2m */
-} p2m_type_t;
+    /* Default P2M access type for each page in the domain: new pages,
+     * swapped in pages, cleared pages, and pages that are ambiguously
+     * retyped get this access type.  See definition of p2m_access_t. */
+    p2m_access_t default_access;
+
+    /* If true, and an access fault comes in and there is no mem_event
+     * listener, pause the domain.  Otherwise, remove access restrictions. */
+    bool_t       access_required;
+
+};
 
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
@@ -195,6 +231,30 @@ static inline int get_page_and_type(struct page_info *page,
     return rc;
 }
 
+/* get host p2m table */
+#define p2m_get_hostp2m(d)      (&((d)->arch.p2m))
+
+/* Resumes the running of the VCPU, restarting the last instruction */
+static inline void p2m_mem_access_resume(struct domain *d) {}
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+static inline
+long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access)
+{
+    return -ENOSYS;
+}
+
+/* Get access type for a pfn
+ * If pfn == -1ul, gets the default access type */
+static inline
+int p2m_get_mem_access(struct domain *d, unsigned long pfn,
+                       xenmem_access_t *access)
+{
+    return -ENOSYS;
+}
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index ded5441..5c7c5fd 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -23,29 +23,10 @@
 #ifndef _XEN_ASM_MEM_ACCESS_H
 #define _XEN_ASM_MEM_ACCESS_H
 
-#ifdef CONFIG_X86
-
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 int mem_access_send_req(struct domain *d, mem_event_request_t *req);
 
-#else
-
-static inline
-int mem_access_memop(unsigned long cmd,
-                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
-{
-    return -ENOSYS;
-}
-
-static inline
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
-{
-    return -ENOSYS;
-}
-
-#endif /* CONFIG_X86 */
-
 #endif /* _XEN_ASM_MEM_ACCESS_H */
 
 /*
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
index a28d453..e2a9d4d 100644
--- a/xen/include/xen/mem_event.h
+++ b/xen/include/xen/mem_event.h
@@ -24,8 +24,6 @@
 #ifndef __MEM_EVENT_H__
 #define __MEM_EVENT_H__
 
-#ifdef CONFIG_X86
-
 /* Clean up on domain destruction */
 void mem_event_cleanup(struct domain *d);
 
@@ -67,66 +65,25 @@ void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
 int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
                            mem_event_response_t *rsp);
 
-int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
                      XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 void mem_event_vcpu_pause(struct vcpu *v);
 void mem_event_vcpu_unpause(struct vcpu *v);
 
-#else
-
-static inline void mem_event_cleanup(struct domain *d) {}
-
-static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
-{
-    return 0;
-}
-
-static inline int mem_event_claim_slot(struct domain *d,
-                                        struct mem_event_domain *med)
-{
-    return -ENOSYS;
-}
-
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
-{
-    return -ENOSYS;
-}
-
-static inline
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
-{}
-
-static inline
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req)
-{}
+#ifdef CONFIG_X86
 
-static inline
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp)
-{
-    return -ENOSYS;
-}
+int do_mem_event_op(int op, uint32_t domain, void *arg);
 
-static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
-{
-    return -ENOSYS;
-}
+#else
 
 static inline
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+int do_mem_event_op(int op, uint32_t domain, void *arg)
 {
     return -ENOSYS;
 }
 
-static inline void mem_event_vcpu_pause(struct vcpu *v) {}
-static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
-
-#endif /* CONFIG_X86 */
+#endif
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 4575dda..2365fad 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1,4 +1,3 @@
-
 #ifndef __SCHED_H__
 #define __SCHED_H__
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index c5aa316..61677ea 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -507,6 +507,18 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
+static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
+static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
 {
@@ -550,18 +562,6 @@ static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
-{
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index a85045d..64289cd 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -140,6 +140,9 @@ struct xsm_operations {
     int (*hvm_control) (struct domain *d, unsigned long op);
     int (*hvm_param_nested) (struct domain *d);
 
+    int (*mem_event_control) (struct domain *d, int mode, int op);
+    int (*mem_event_op) (struct domain *d, int op);
+
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -148,8 +151,6 @@ struct xsm_operations {
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*hvm_ioreq_server) (struct domain *d, int op);
-    int (*mem_event_control) (struct domain *d, int mode, int op);
-    int (*mem_event_op) (struct domain *d, int op);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -534,6 +535,16 @@ static inline int xsm_hvm_param_nested (xsm_default_t def, struct domain *d)
     return xsm_ops->hvm_param_nested(d);
 }
 
+static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+{
+    return xsm_ops->mem_event_control(d, mode, op);
+}
+
+static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+{
+    return xsm_ops->mem_event_op(d, op);
+}
+
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
 {
@@ -570,16 +581,6 @@ static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int
     return xsm_ops->hvm_ioreq_server(d, op);
 }
 
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
-{
-    return xsm_ops->mem_event_control(d, mode, op);
-}
-
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
-{
-    return xsm_ops->mem_event_op(d, op);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index c95c803..9df9d81 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -116,6 +116,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
+    set_to_dummy_if_null(ops, mem_event_control);
+    set_to_dummy_if_null(ops, mem_event_op);
 
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
@@ -125,8 +127,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, hvm_ioreq_server);
-    set_to_dummy_if_null(ops, mem_event_control);
-    set_to_dummy_if_null(ops, mem_event_op);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, platform_op);
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC 4/7] tools/libxc: Allocate magic page for mem access on ARM
  2014-08-22  9:30 [PATCH RFC 0/7] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2014-08-22  9:30 ` [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-08-22  9:30 ` Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 5/7] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 tools/libxc/xc_dom_arm.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index 9b31b1f..13e881e 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -26,9 +26,10 @@
 #include "xg_private.h"
 #include "xc_dom.h"
 
-#define NR_MAGIC_PAGES 2
+#define NR_MAGIC_PAGES 3
 #define CONSOLE_PFN_OFFSET 0
 #define XENSTORE_PFN_OFFSET 1
+#define MEMACCESS_PFN_OFFSET 2
 
 #define LPAE_SHIFT 9
 
@@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
 
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
+    xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
             dom->console_pfn);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
             dom->xenstore_pfn);
+    xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
+            base + MEMACCESS_PFN_OFFSET);
     /* allocated by toolstack */
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
             dom->console_evtchn);
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC 5/7] xen/arm: Data abort exception (R/W) mem_events.
  2014-08-22  9:30 [PATCH RFC 0/7] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2014-08-22  9:30 ` [PATCH RFC 4/7] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
@ 2014-08-22  9:30 ` Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 6/7] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 7/7] tools/tests: Enable xen-access on ARM Tamas K Lengyel
  6 siblings, 0 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

This patch replaces the stub p2m functions with the actual ones required to
store, set, check and deliver LPAE R/W mem_events. As the LPAE PTE lacks
available bits to store the p2m_access_t bits, we use a separate radix
tree (mem_access_settings) to store the permissions, keyed by gfn.
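
As a rough sketch of the resulting bookkeeping (using Xen's existing
radix-tree helpers; everything apart from mem_access_settings and the
radix-tree API below is illustrative, and error handling is elided):

    /* Remember the access setting chosen for a gfn... */
    rc = radix_tree_insert(&p2m->mem_access_settings, gfn,
                           radix_tree_int_to_ptr(a));

    /* ...and look it up again on a permission fault. */
    void *entry = radix_tree_lookup(&p2m->mem_access_settings, gfn);
    p2m_access_t ac = entry ? radix_tree_ptr_to_int(entry)
                            : p2m->default_access;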

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/arm/p2m.c        | 407 +++++++++++++++++++++++++++++++++++++++-------
 xen/arch/arm/traps.c      |  25 ++-
 xen/include/asm-arm/p2m.h |  47 +++---
 3 files changed, 390 insertions(+), 89 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0ca0d2f..41cfe99 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -10,6 +10,7 @@
 #include <asm/event.h>
 #include <asm/hardirq.h>
 #include <asm/page.h>
+#include <xen/radix-tree.h>
 #include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <xen/mem_access.h>
@@ -148,13 +149,89 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr)
     return __map_domain_page(page);
 }
 
+static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
+{
+    /* First apply type permissions */
+    switch (t)
+    {
+    case p2m_ram_rw:
+        e->p2m.xn = 0;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_ram_ro:
+        e->p2m.xn = 0;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_iommu_map_rw:
+    case p2m_map_foreign:
+    case p2m_grant_map_rw:
+    case p2m_mmio_direct:
+        e->p2m.xn = 1;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_iommu_map_ro:
+    case p2m_grant_map_ro:
+    case p2m_invalid:
+        e->p2m.xn = 1;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_max_real_type:
+        BUG();
+        break;
+    }
+
+    /* Then restrict with access permissions */
+    switch(a)
+    {
+    case p2m_access_n:
+        e->p2m.read = e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_r:
+        e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_x:
+    case p2m_access_rx:
+        e->p2m.write = e->p2m.xn = 0;
+        break;
+    case p2m_access_w:
+    case p2m_access_rw:
+        e->p2m.write = 1;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_wx:
+    case p2m_access_rwx:
+        break;
+    }
+}
+
+static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
+{
+    write_pte(p, pte);
+    if ( flush_cache )
+        clean_xen_dcache(*p);
+}
+
 /*
  * Lookup the MFN corresponding to a domain's PFN.
  *
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a a software walk.
+ *
+ * [IN]:  d      Domain
+ * [IN]:  paddr  IPA
+ * [IN]:  a      (Optional) Update PTE access permission
+ * [OUT]: t      (Optional) Return PTE type
  */
-paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+paddr_t p2m_lookup(struct domain *d,
+    paddr_t paddr,
+    p2m_access_t *a,
+    p2m_type_t *t)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte, *first = NULL, *second = NULL, *third = NULL;
@@ -167,8 +244,6 @@ paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
 
     *t = p2m_invalid;
 
-    spin_lock(&p2m->lock);
-
     first = p2m_map_first(p2m, paddr);
     if ( !first )
         goto err;
@@ -200,6 +275,14 @@ done:
     {
         ASSERT(pte.p2m.type != p2m_invalid);
         maddr = (pte.bits & PADDR_MASK & mask) | (paddr & ~mask);
+        ASSERT(mfn_valid(maddr>>PAGE_SHIFT));
+
+        if ( a )
+        {
+            p2m_set_permission(&pte, pte.p2m.type, *a);
+            p2m_write_pte(&pte, pte, 1);
+        }
+
         *t = pte.p2m.type;
     }
 
@@ -208,8 +291,6 @@ done:
     if (first) unmap_domain_page(first);
 
 err:
-    spin_unlock(&p2m->lock);
-
     return maddr;
 }
 
@@ -228,7 +309,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
 }
 
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
-                               p2m_type_t t)
+                               p2m_type_t t, p2m_access_t a)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     /* sh, xn and write bit will be defined in the following switches
@@ -258,37 +339,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
         break;
     }
 
-    switch (t)
-    {
-    case p2m_ram_rw:
-        e.p2m.xn = 0;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_ram_ro:
-        e.p2m.xn = 0;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_iommu_map_rw:
-    case p2m_map_foreign:
-    case p2m_grant_map_rw:
-    case p2m_mmio_direct:
-        e.p2m.xn = 1;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_iommu_map_ro:
-    case p2m_grant_map_ro:
-    case p2m_invalid:
-        e.p2m.xn = 1;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_max_real_type:
-        BUG();
-        break;
-    }
+    p2m_set_permission(&e, t, a);
 
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
@@ -298,13 +349,6 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
     return e;
 }
 
-static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
-{
-    write_pte(p, pte);
-    if ( flush_cache )
-        clean_xen_dcache(*p);
-}
-
 /*
  * Allocate a new page table page and hook it in via the given entry.
  * apply_one_level relies on this returning 0 on success
@@ -346,7 +390,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
          for ( i=0 ; i < LPAE_ENTRIES; i++ )
          {
              pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
-                                    MATTR_MEM, t);
+                                    MATTR_MEM, t, p2m->default_access);
 
              /*
               * First and second level super pages set p2m.table = 0, but
@@ -366,7 +410,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
 
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
+    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid, p2m->default_access);
 
     p2m_write_pte(entry, pte, flush_cache);
 
@@ -461,7 +505,8 @@ static int apply_one_level(struct domain *d,
                            paddr_t *maddr,
                            bool_t *flush,
                            int mattr,
-                           p2m_type_t t)
+                           p2m_type_t t,
+                           p2m_access_t a)
 {
     /* Helpers to lookup the properties of each level */
     const paddr_t level_sizes[] =
@@ -497,7 +542,7 @@ static int apply_one_level(struct domain *d,
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
+                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
                 p2m_write_pte(entry, pte, flush_cache);
@@ -532,7 +577,7 @@ static int apply_one_level(struct domain *d,
              (level == 3 || !p2m_table(orig_pte)) )
         {
             /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
+            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
@@ -639,6 +684,7 @@ static int apply_one_level(struct domain *d,
 
         memset(&pte, 0x00, sizeof(pte));
         p2m_write_pte(entry, pte, flush_cache);
+        radix_tree_delete(&p2m->mem_access_settings, paddr_to_pfn(*addr));
 
         *addr += level_size;
 
@@ -693,7 +739,8 @@ static int apply_p2m_changes(struct domain *d,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     p2m_type_t t,
+                     p2m_access_t a)
 {
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -758,7 +805,7 @@ static int apply_p2m_changes(struct domain *d,
                               1, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
@@ -779,7 +826,7 @@ static int apply_p2m_changes(struct domain *d,
                               2, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
@@ -798,7 +845,7 @@ static int apply_p2m_changes(struct domain *d,
                               3, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         /* L3 had better have done something! We cannot descend any further */
         BUG_ON(ret == P2M_ONE_DESCEND);
@@ -840,7 +887,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, p2m_ram_rw);
+                             0, MATTR_MEM, p2m_ram_rw,
+                             d->arch.p2m.default_access);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -852,7 +900,9 @@ int map_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr_mfns),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_mmio_direct);
+                             MATTR_DEV,
+                             p2m_mmio_direct,
+                             d->arch.p2m.default_access);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -864,7 +914,8 @@ int guest_physmap_add_entry(struct domain *d,
     return apply_p2m_changes(d, INSERT,
                              pfn_to_paddr(gpfn),
                              pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, t);
+                             pfn_to_paddr(mfn), MATTR_MEM, t,
+                             d->arch.p2m.default_access);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -874,7 +925,8 @@ void guest_physmap_remove_page(struct domain *d,
     apply_p2m_changes(d, REMOVE,
                       pfn_to_paddr(gpfn),
                       pfn_to_paddr(gpfn + (1<<page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid,
+                      d->arch.p2m.default_access);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -977,6 +1029,8 @@ void p2m_teardown(struct domain *d)
 
     p2m_free_vmid(d);
 
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
     spin_unlock(&p2m->lock);
 }
 
@@ -1003,6 +1057,7 @@ int p2m_init(struct domain *d)
     p2m->lowest_mapped_gfn = ULONG_MAX;
 
     p2m->default_access = p2m_access_rwx;
+    radix_tree_init(&p2m->mem_access_settings);
 
 err:
     spin_unlock(&p2m->lock);
@@ -1018,7 +1073,7 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, p2m_invalid, p2m->default_access);
 }
 
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
@@ -1032,12 +1087,16 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
                              pfn_to_paddr(INVALID_MFN),
-                             MATTR_MEM, p2m_invalid);
+                             MATTR_MEM,
+                             p2m_invalid, p2m->default_access);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
-    paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
+    paddr_t p;
+    spin_lock(&d->arch.p2m.lock);
+    p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL, NULL);
+    spin_unlock(&d->arch.p2m.lock);
     return p >> PAGE_SHIFT;
 }
 
@@ -1069,6 +1128,234 @@ err:
     return page;
 }
 
+int p2m_mem_access_check(paddr_t gpa, vaddr_t gla,
+                          bool_t access_r, bool_t access_w, bool_t access_x,
+                          bool_t ptw)
+{
+    struct vcpu *v = current;
+    mem_event_request_t *req = NULL;
+    xenmem_access_t xma;
+    bool_t violation;
+    int rc;
+
+    /* If we have no listener, nothing to do */
+    if ( !mem_event_check_ring(&v->domain->mem_event->access) )
+    {
+        return 1;
+    }
+
+    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
+    if ( rc )
+        return rc;
+
+    switch ( xma )
+    {
+    default:
+    case XENMEM_access_n:
+        violation = access_r || access_w || access_x;
+        break;
+    case XENMEM_access_r:
+        violation = access_w || access_x;
+        break;
+    case XENMEM_access_w:
+        violation = access_r || access_x;
+        break;
+    case XENMEM_access_x:
+        violation = access_r || access_w;
+        break;
+    case XENMEM_access_rx:
+        violation = access_w;
+        break;
+    case XENMEM_access_wx:
+        violation = access_r;
+        break;
+    case XENMEM_access_rw:
+        violation = access_x;
+        break;
+    case XENMEM_access_rwx:
+        violation = 0;
+        break;
+    }
+
+    if ( !violation )
+        return 1;
+
+    req = xzalloc(mem_event_request_t);
+    if ( req )
+    {
+        req->reason = MEM_EVENT_REASON_VIOLATION;
+        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        req->gfn = gpa >> PAGE_SHIFT;
+        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
+        req->gla = gla;
+        req->gla_valid = 1;
+        req->access_r = access_r;
+        req->access_w = access_w;
+        req->access_x = access_x;
+        req->vcpu_id = v->vcpu_id;
+
+        mem_event_vcpu_pause(v);
+        mem_access_send_req(v->domain, req);
+
+        xfree(req);
+
+        return 0;
+    }
+
+    return 1;
+}
+
+void p2m_mem_access_resume(struct domain *d)
+{
+    mem_event_response_t rsp;
+
+    /* Pull all responses off the ring */
+    while( mem_event_get_response(d, &d->mem_event->access, &rsp) )
+    {
+        struct vcpu *v;
+
+        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+            continue;
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
+        /* Unpause domain */
+        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+            mem_event_vcpu_unpause(v);
+    }
+}
+
+/* Set access type for a region of pfns.
+ * If pfn == ~0ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_access_t a;
+    long rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+#undef ACCESS
+    };
+
+    switch ( access )
+    {
+    case 0 ... ARRAY_SIZE(memaccess) - 1:
+        a = memaccess[access];
+        break;
+    case XENMEM_access_default:
+        a = p2m->default_access;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    /* If request to set default access */
+    if ( pfn == ~0ul )
+    {
+        p2m->default_access = a;
+        return 0;
+    }
+
+    spin_lock(&p2m->lock);
+    for ( pfn += start; nr > start; ++pfn )
+    {
+        /* Look up the mapping without clobbering the requested access in 'a'. */
+        paddr_t maddr = p2m_lookup(d, pfn_to_paddr(pfn), NULL, NULL);
+        unsigned long mfn = paddr_to_pfn(maddr);
+
+        if ( !mfn_valid(mfn) )
+            break;
+
+        rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
+                                    radix_tree_int_to_ptr(a));
+
+        /* If a setting existed already, change it to the new one */
+        if ( -EEXIST == rc )
+        {
+            radix_tree_replace_slot(
+                radix_tree_lookup_slot(
+                    &p2m->mem_access_settings, pfn),
+                radix_tree_int_to_ptr(a));
+            rc = 0;
+        }
+        else if ( rc )
+        {
+            /* If we fail to save the setting in the Radix tree, we
+             * need to reset the PTE permissions to default. */
+            p2m_lookup(d, pfn_to_paddr(pfn), &p2m->default_access, NULL);
+            break;
+        }
+
+        /* Check for continuation if it's not the last iteration. */
+        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
+        {
+            rc = start;
+            break;
+        }
+    }
+    spin_unlock(&p2m->lock);
+    return rc;
+}
+
+int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
+                       xenmem_access_t *access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    void *i;
+    int index;
+
+    static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = XENMEM_access_##ac
+            ACCESS(n),
+            ACCESS(r),
+            ACCESS(w),
+            ACCESS(rw),
+            ACCESS(x),
+            ACCESS(rx),
+            ACCESS(wx),
+            ACCESS(rwx),
+#undef ACCESS
+    };
+
+    /* If request to get default access */
+    if ( gpfn == ~0ul )
+    {
+        *access = memaccess[p2m->default_access];
+        return 0;
+    }
+
+    spin_lock(&p2m->lock);
+
+    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
+
+    spin_unlock(&p2m->lock);
+
+    if ( !i )
+        return -ESRCH;
+
+    index = radix_tree_ptr_to_int(i);
+
+    if ( (unsigned int)index >= ARRAY_SIZE(memaccess) )
+        return -ERANGE;
+
+    *access = memaccess[index];
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 76a9586..82305c4 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1674,23 +1674,25 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
     uint32_t offset;
     uint32_t *first = NULL, *second = NULL;
 
+    spin_lock(&d->arch.p2m.lock);
+
     printk("dom%d VA 0x%08"PRIvaddr"\n", d->domain_id, addr);
     printk("    TTBCR: 0x%08"PRIregister"\n", ttbcr);
     printk("    TTBR0: 0x%016"PRIx64" = 0x%"PRIpaddr"\n",
-           ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL));
+           ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL, NULL));
 
     if ( ttbcr & TTBCR_EAE )
     {
         printk("Cannot handle LPAE guest PT walk\n");
-        return;
+        goto err;
     }
     if ( (ttbcr & TTBCR_N_MASK) != 0 )
     {
         printk("Cannot handle TTBR1 guest walks\n");
-        return;
+        goto err;
     }
 
-    paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL);
+    paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL, NULL);
     if ( paddr == INVALID_PADDR )
     {
         printk("Failed TTBR0 maddr lookup\n");
@@ -1705,7 +1707,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
          !(first[offset] & 0x2) )
         goto done;
 
-    paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL);
+    paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL, NULL);
 
     if ( paddr == INVALID_PADDR )
     {
@@ -1720,6 +1722,9 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
 done:
     if (second) unmap_domain_page(second);
     if (first) unmap_domain_page(first);
+
+err:
+    spin_unlock(&d->arch.p2m.lock);
 }
 
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
@@ -1749,9 +1754,6 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     info.gva = READ_SYSREG64(FAR_EL2);
 #endif
 
-    if (dabt.s1ptw)
-        goto bad_data_abort;
-
     rc = gva_to_ipa(info.gva, &info.gpa);
     if ( rc == -EFAULT )
         goto bad_data_abort;
@@ -1774,12 +1776,19 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         }
     }
 
+    rc = p2m_mem_access_check(info.gpa, info.gva,
+                              1, info.dabt.write, 0,
+                              info.dabt.s1ptw);
+
     if (handle_mmio(&info))
     {
         advance_pc(regs, hsr);
         return;
     }
 
+    if ( !rc )
+        return;
+
 bad_data_abort:
     inject_dabt_exception(regs, info.gva, hsr.len);
 }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index f3d1f33..c0fc61d 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,6 +2,7 @@
 #define _XEN_P2M_H
 
 #include <xen/mm.h>
+#include <xen/radix-tree.h>
 #include <public/memory.h>
 #include <public/mem_event.h>
 
@@ -39,15 +40,14 @@ typedef enum {
  * default.
  */
 typedef enum {
-    p2m_access_n     = 0, /* No access permissions allowed */
+    p2m_access_n     = 0, /* No access allowed */
     p2m_access_r     = 1,
-    p2m_access_w     = 2, 
+    p2m_access_w     = 2,
     p2m_access_rw    = 3,
-    p2m_access_x     = 4, 
+    p2m_access_x     = 4,
     p2m_access_rx    = 5,
-    p2m_access_wx    = 6, 
-    p2m_access_rwx   = 7
-
+    p2m_access_wx    = 6,
+    p2m_access_rwx   = 7,
     /* NOTE: Assumed to be only 4 bits right now */
 } p2m_access_t;
 
@@ -90,9 +90,13 @@ struct p2m_domain {
      * retyped get this access type.  See definition of p2m_access_t. */
     p2m_access_t default_access;
 
-    /* If true, and an access fault comes in and there is no mem_event listener, 
+    /* If true, and an access fault comes in and there is no mem_event listener,
      * pause domain.  Otherwise, remove access restrictions. */
-    bool_t       access_required;
+    bool_t access_required;
+
+    /* Radix tree to store the p2m_access_t settings as the pte's don't have
+     * enough available bits to store this information. */
+    struct radix_tree_root mem_access_settings;
 
 };
 
@@ -128,7 +132,10 @@ void p2m_restore_state(struct vcpu *n);
 void p2m_dump_info(struct domain *d);
 
 /* Look up the MFN corresponding to a domain's PFN. */
-paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
+paddr_t p2m_lookup(struct domain *d,
+                   paddr_t gpfn,
+                   p2m_access_t *a,
+                   p2m_type_t *t);
 
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
@@ -185,7 +192,7 @@ static inline struct page_info *get_page_from_gfn(
 {
     struct page_info *page;
     p2m_type_t p2mt;
-    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
+    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), NULL, &p2mt);
     unsigned long mfn = maddr >> PAGE_SHIFT;
 
     if (t)
@@ -234,26 +241,24 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      (&((d)->arch.p2m))
 
+/* Send mem event based on the access (gla is -1ull if not available). Boolean
+ * return value indicates if trap needs to be injected into guest. */
+int p2m_mem_access_check(paddr_t gpa, vaddr_t gla,
+                         bool_t access_r, bool_t access_w, bool_t access_x,
+                         bool_t ptw);
+
 /* Resumes the running of the VCPU, restarting the last instruction */
-static inline void p2m_mem_access_resume(struct domain *d) {}
+void p2m_mem_access_resume(struct domain *d);
 
 /* Set access type for a region of pfns.
  * If start_pfn == -1ul, sets the default access type */
-static inline
 long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access)
-{
-    return -ENOSYS;
-}
+                        uint32_t start, uint32_t mask, xenmem_access_t access);
 
 /* Get access type for a pfn
  * If pfn == -1ul, gets the default access type */
-static inline
 int p2m_get_mem_access(struct domain *d, unsigned long pfn,
-                       xenmem_access_t *access)
-{
-    return -ENOSYS;
-}
+                       xenmem_access_t *access);
 
 #endif /* _XEN_P2M_H */
 
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC 6/7] xen/arm: Instruction prefetch abort (X) mem_event handling
  2014-08-22  9:30 [PATCH RFC 0/7] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2014-08-22  9:30 ` [PATCH RFC 5/7] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-08-22  9:30 ` Tamas K Lengyel
  2014-08-22  9:30 ` [PATCH RFC 7/7] tools/tests: Enable xen-access on ARM Tamas K Lengyel
  6 siblings, 0 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

Add the missing structure definition for iabt and update the trap
handling path so that the exception is only injected if the mem_access
checker decides it should be.
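
(For orientation: in the HSR ISS encoding for instruction aborts, bits
0-5 carry the IFSC, bit 7 the S1PTW flag and bit 9 the EA flag, which is
exactly what the hsr_iabt bitfield added below lays out.)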

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/arm/traps.c            | 26 +++++++++++++++++++++++++-
 xen/include/asm-arm/processor.h | 10 +++++++++-
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 82305c4..16139fd 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1730,7 +1730,31 @@ err:
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
-    register_t addr = READ_SYSREG(FAR_EL2);
+    struct hsr_iabt iabt = hsr.iabt;
+    int rc;
+    register_t addr;
+    vaddr_t gva;
+    paddr_t gpa;
+
+#ifdef CONFIG_ARM_32
+    gva = READ_CP32(HIFAR);
+#else
+    /* TODO: is this the correct register? */
+    gva = READ_SYSREG64(FAR_EL2);
+#endif
+
+    rc = gva_to_ipa(gva, &gpa);
+    if ( rc == -EFAULT )
+        return;
+
+    rc = p2m_mem_access_check(gpa, gva,
+                              1, 0, 1,
+                              iabt.s1ptw);
+
+    if ( !rc )
+        return;
+
+    addr = READ_SYSREG(FAR_EL2);
     inject_iabt_exception(regs, addr, hsr.len);
 }
 
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 9d230f3..c906c94 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -338,10 +338,18 @@ union hsr {
     } sysreg; /* HSR_EC_SYSREG */
 #endif
 
+    struct hsr_iabt {
+        unsigned long ifsc:6;   /* Instruction fault status code */
+        unsigned long res0:1;
+        unsigned long s1ptw:1;  /* Fault during a stage 1 translation table walk */
+        unsigned long res1:1;
+        unsigned long ea:1;     /* External abort type */
+    } iabt; /* HSR_EC_INSTR_ABORT_* */
+
     struct hsr_dabt {
         unsigned long dfsc:6;  /* Data Fault Status Code */
         unsigned long write:1; /* Write / not Read */
-        unsigned long s1ptw:1; /* */
+        unsigned long s1ptw:1; /* Fault during a stage 1 translation table walk */
         unsigned long cache:1; /* Cache Maintenance */
         unsigned long eat:1;   /* External Abort Type */
 #ifdef CONFIG_ARM_32
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC 7/7] tools/tests: Enable xen-access on ARM
  2014-08-22  9:30 [PATCH RFC 0/7] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2014-08-22  9:30 ` [PATCH RFC 6/7] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
@ 2014-08-22  9:30 ` Tamas K Lengyel
  6 siblings, 0 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-22  9:30 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, Tamas K Lengyel, dgdegra

On ARM the guest memory doesn't start from 0, so we include the
required headers and define GUEST_RAM_BASE_PFN for both architectures
to be passed to mem_access as the starting pfn.
We also define the ARM-specific test_and_set_bit function.
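(For example: assuming GUEST_RAM_BASE is 0x80000000, GUEST_RAM_BASE_PFN
comes out to 0x80000000 >> 12 = 0x80000, so the listener starts watching
at pfn 0x80000 rather than 0.)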

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 tools/tests/xen-access/Makefile     |  4 +--
 tools/tests/xen-access/xen-access.c | 53 +++++++++++++++++++++++++++++--------
 2 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
index 65eef99..698355c 100644
--- a/tools/tests/xen-access/Makefile
+++ b/tools/tests/xen-access/Makefile
@@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_xeninclude)
 
-TARGETS-y := 
-TARGETS-$(CONFIG_X86) += xen-access
-TARGETS := $(TARGETS-y)
+TARGETS := xen-access
 
 .PHONY: all
 all: build
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 090df5f..187c72f 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -41,22 +41,16 @@
 #include <xenctrl.h>
 #include <xen/mem_event.h>
 
-#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
-#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
-#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
-
-/* Spinlock and mem event definitions */
-
-#define SPIN_LOCK_UNLOCKED 0
+#ifdef CONFIG_X86
 
+#define GUEST_RAM_BASE_PFN 0ULL
 #define ADDR (*(volatile long *) addr)
+
 /**
  * test_and_set_bit - Set a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
  */
 static inline int test_and_set_bit(int nr, volatile void *addr)
 {
@@ -69,6 +63,43 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
     return oldbit;
 }
 
+#else /* CONFIG_X86 */
+
+#include <xen/arch-arm.h>
+
+#define PAGE_SHIFT              12
+#define GUEST_RAM_BASE_PFN      (GUEST_RAM_BASE >> PAGE_SHIFT)
+#define BITS_PER_WORD           32
+#define BIT_MASK(nr)            (1UL << ((nr) % BITS_PER_WORD))
+#define BIT_WORD(nr)            ((nr) / BITS_PER_WORD)
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ */
+static inline int test_and_set_bit(int nr, volatile void *addr)
+{
+    unsigned int mask = BIT_MASK(nr);
+    volatile unsigned int *p =
+        ((volatile unsigned int *)addr) + BIT_WORD(nr);
+    unsigned int old = *p;
+
+    *p = old | mask;
+    return (old & mask) != 0;
+}
+
+#endif
+
+#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
+#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
+#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
+
+/* Spinlock and mem event definitions */
+
+#define SPIN_LOCK_UNLOCKED 0
+
 typedef int spinlock_t;
 
 static inline void spin_lock(spinlock_t *lock)
@@ -492,7 +523,7 @@ int main(int argc, char *argv[])
         goto exit;
     }
 
-    rc = xc_set_mem_access(xch, domain_id, default_access, 0,
+    rc = xc_set_mem_access(xch, domain_id, default_access, GUEST_RAM_BASE_PFN,
                            xenaccess->domain_info->max_pages);
     if ( rc < 0 )
     {
@@ -520,7 +551,7 @@ int main(int argc, char *argv[])
 
             /* Unregister for every event */
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
-            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
+            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, GUEST_RAM_BASE_PFN,
                                    xenaccess->domain_info->max_pages);
             rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
 
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-22  9:30 ` [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
@ 2014-08-25 17:19   ` Andres Lagar Cavilla
  2014-08-26 10:52     ` Tamas K Lengyel
  2014-08-26 13:34   ` Jan Beulich
  1 sibling, 1 reply; 26+ messages in thread
From: Andres Lagar Cavilla @ 2014-08-25 17:19 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: keir, Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
	stefano.stabellini, Jan Beulich, dgdegra


On Fri, Aug 22, 2014 at 2:30 AM, Tamas K Lengyel <tamas.lengyel@zentific.com
> wrote:

> In preparation to add support for ARM LPAE mem_event, relocate mem_access
> and mem_event into common Xen code. This patch makes no functional changes
> to the X86 side, for ARM mem_event and mem_access functions are just
> placeholder stubs.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>
Big patch and assuming code motion LGTM. Couple of observations though.

Snip...


> -        req->flags |= MEM_EVENT_FLAG_FOREIGN;
> -        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
>
Take the opportunity to downgrade the aggressiveness of this at some point
in this series. (I'd prefer to keep code motion patches as purely code
motion).

A faulty tool stack can brick a debug hypervisor. Unpleasant during
dev/test.

Snip ...

>
> +++ b/xen/common/mem_access.c
> @@ -0,0 +1,137 @@
>
> +/******************************************************************************
> + * mem_access.c
> + *
> + * Memory access support.
> + *
> + * Copyright (c) 2011 Virtuata, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307
> USA
> + */
> +
> +
> +#include <xen/sched.h>
> +#include <xen/guest_access.h>
> +#include <xen/hypercall.h>
> +#include <asm/p2m.h>
> +#include <public/memory.h>
> +#include <xen/mem_event.h>
> +#include <xsm/xsm.h>
> +
> +#ifdef CONFIG_X86
>

Presumably this [will later be|should be] changed to consider CONFIG_ARM?

Many more like that one below with the same comment.

Thanks
Andres

>
> --
> 2.0.1
>
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC 2/7] xen/mem_event: Clean out superflous white-spaces
  2014-08-22  9:30 ` [PATCH RFC 2/7] xen/mem_event: Clean out superflous white-spaces Tamas K Lengyel
@ 2014-08-25 17:20   ` Andres Lagar Cavilla
  2014-08-26 13:35   ` Jan Beulich
  1 sibling, 0 replies; 26+ messages in thread
From: Andres Lagar Cavilla @ 2014-08-25 17:20 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: keir, Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
	stefano.stabellini, Jan Beulich, dgdegra


On Fri, Aug 22, 2014 at 2:30 AM, Tamas K Lengyel <tamas.lengyel@zentific.com
> wrote:

> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>

Thanks. Trivial. Necessary at times.

Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

> ---
>  xen/common/mem_event.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
> index 38c6697..a94ddf6 100644
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -58,7 +58,7 @@ static int mem_event_enable(
>      if ( med->ring_page )
>          return -EBUSY;
>
> -    /* The parameter defaults to zero, and it should be
> +    /* The parameter defaults to zero, and it should be
>       * set to something */
>      if ( ring_gfn == 0 )
>          return -ENOSYS;
> @@ -66,7 +66,7 @@ static int mem_event_enable(
>      mem_event_ring_lock_init(med);
>      mem_event_ring_lock(med);
>
> -    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
> +    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
>                                      &med->ring_page);
>      if ( rc < 0 )
>          goto err;
> @@ -98,7 +98,7 @@ static int mem_event_enable(
>      return 0;
>
>   err:
> -    destroy_ring_for_helper(&med->ring_page,
> +    destroy_ring_for_helper(&med->ring_page,
>                              med->ring_pg_struct);
>      mem_event_ring_unlock(med);
>
> @@ -227,7 +227,7 @@ static int mem_event_disable(struct domain *d, struct
> mem_event_domain *med)
>              }
>          }
>
> -        destroy_ring_for_helper(&med->ring_page,
> +        destroy_ring_for_helper(&med->ring_page,
>                                  med->ring_pg_struct);
>          mem_event_ring_unlock(med);
>      }
> @@ -480,7 +480,7 @@ void mem_event_cleanup(struct domain *d)
>           * the disable routine to complete. It will also drop
>           * all domain refs the wait-queued vcpus are holding.
>           * Finally, because this code path involves previously
> -         * pausing the domain (domain_kill), unpausing the
> +         * pausing the domain (domain_kill), unpausing the
>           * vcpus causes no harm. */
>          destroy_waitqueue_head(&d->mem_event->paging.wq);
>          (void)mem_event_disable(d, &d->mem_event->paging);
> @@ -598,7 +598,7 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
>              if ( p2m->pod.entry_count )
>                  break;
>
> -            rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
> +            rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
>                                      HVM_PARAM_PAGING_RING_PFN,
>                                      mem_paging_notification);
>          }
> @@ -618,7 +618,7 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
>      }
>      break;
>
> -    case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
> +    case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
>      {
>          struct mem_event_domain *med = &d->mem_event->share;
>          rc = -EINVAL;
> @@ -637,7 +637,7 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
>              if ( !hap_enabled(d) )
>                  break;
>
> -            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
> +            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
>                                      HVM_PARAM_SHARING_RING_PFN,
>                                      mem_sharing_notification);
>          }
> --
> 2.0.1
>
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-08-22  9:30 ` [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-08-25 17:25   ` Andres Lagar Cavilla
  2014-08-26  8:32     ` Tamas K Lengyel
  2014-08-26 13:51   ` Jan Beulich
  1 sibling, 1 reply; 26+ messages in thread
From: Andres Lagar Cavilla @ 2014-08-25 17:25 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: keir, Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
	stefano.stabellini, Jan Beulich, dgdegra


On Fri, Aug 22, 2014 at 2:30 AM, Tamas K Lengyel <tamas.lengyel@zentific.com
> wrote:

> This patch sets up the infrastructure to support mem_access and mem_event
>  on ARM and turns on compilation. We define the required XSM functions,
> handling of domctl copyback, and the required p2m types and stub-functions
> in this patch.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>
Non-ARM bits LGTM to me. I see here the disablement of CONFIG_X86.

If Xen were to ever support another architecture (hello IA64), it might be
more reasonable to keep an #ifdef CONFIG_X86 && CONFIG_ARM. I don't know
how unlikely that future direction might be.

Andres

> ---
>  xen/arch/arm/domctl.c        |  36 ++++++++++++++--
>  xen/arch/arm/mm.c            |  18 ++++++--
>  xen/arch/arm/p2m.c           |   5 +++
>  xen/common/mem_access.c      |   6 +--
>  xen/common/mem_event.c       |  15 +++++--
>  xen/include/asm-arm/p2m.h    | 100
> ++++++++++++++++++++++++++++++++++---------
>  xen/include/xen/mem_access.h |  19 --------
>  xen/include/xen/mem_event.h  |  53 +++--------------------
>  xen/include/xen/sched.h      |   1 -
>  xen/include/xsm/dummy.h      |  24 +++++------
>  xen/include/xsm/xsm.h        |  25 +++++------
>  xen/xsm/dummy.c              |   4 +-
>  12 files changed, 178 insertions(+), 128 deletions(-)
>
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 45974e7..bb0b8d3 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -11,10 +11,17 @@
>  #include <xen/sched.h>
>  #include <xen/hypercall.h>
>  #include <public/domctl.h>
> +#include <asm/guest_access.h>
> +#include <xen/mem_event.h>
> +#include <public/mem_event.h>
>
>  long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> +
> +    long ret;
> +    bool_t copyback = 0;
> +
>      switch ( domctl->cmd )
>      {
>      case XEN_DOMCTL_cacheflush:
> @@ -23,17 +30,38 @@ long arch_do_domctl(struct xen_domctl *domctl, struct
> domain *d,
>          unsigned long e = s + domctl->u.cacheflush.nr_pfns;
>
>          if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
> -            return -EINVAL;
> +        {
> +            ret = -EINVAL;
> +            break;
> +        }
>
>          if ( e < s )
> -            return -EINVAL;
> +        {
> +            ret = -EINVAL;
> +            break;
> +        }
>
> -        return p2m_cache_flush(d, s, e);
> +        ret = p2m_cache_flush(d, s, e);
>      }
> +    break;
> +
> +    case XEN_DOMCTL_mem_event_op:
> +    {
> +        ret = mem_event_domctl(d, &domctl->u.mem_event_op,
> +                              guest_handle_cast(u_domctl, void));
> +        copyback = 1;
> +    }
> +    break;
>
>      default:
> -        return subarch_do_domctl(domctl, d, u_domctl);
> +        ret = subarch_do_domctl(domctl, d, u_domctl);
> +        break;
>      }
> +
> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
> +        ret = -EFAULT;
> +
> +    return ret;
>  }
>
>  void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0a243b0..cd04dec 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -35,6 +35,9 @@
>  #include <asm/current.h>
>  #include <asm/flushtlb.h>
>  #include <public/memory.h>
> +#include <xen/mem_event.h>
> +#include <xen/mem_access.h>
> +#include <xen/hypercall.h>
>  #include <xen/sched.h>
>  #include <xen/vmap.h>
>  #include <xsm/xsm.h>
> @@ -1111,18 +1114,27 @@ int xenmem_add_to_physmap_one(
>
>  long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    switch ( op )
> +
> +    long rc;
> +
> +    switch ( op & MEMOP_CMD_MASK )
>      {
>      /* XXX: memsharing not working yet */
>      case XENMEM_get_sharing_shared_pages:
>      case XENMEM_get_sharing_freed_pages:
>          return 0;
> +    case XENMEM_access_op:
> +    {
> +        rc = mem_access_memop(op, guest_handle_cast(arg,
> xen_mem_access_op_t));
> +        break;
> +    }
>
>      default:
> -        return -ENOSYS;
> +        rc = -ENOSYS;
> +        break;
>      }
>
> -    return 0;
> +    return rc;
>  }
>
>  struct domain *page_get_owner_and_reference(struct page_info *page)
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 143199b..0ca0d2f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -10,6 +10,9 @@
>  #include <asm/event.h>
>  #include <asm/hardirq.h>
>  #include <asm/page.h>
> +#include <xen/mem_event.h>
> +#include <public/mem_event.h>
> +#include <xen/mem_access.h>
>
>  /* First level P2M is 2 consecutive pages */
>  #define P2M_FIRST_ORDER 1
> @@ -999,6 +1002,8 @@ int p2m_init(struct domain *d)
>      p2m->max_mapped_gfn = 0;
>      p2m->lowest_mapped_gfn = ULONG_MAX;
>
> +    p2m->default_access = p2m_access_rwx;
> +
>  err:
>      spin_unlock(&p2m->lock);
>
> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
> index 84acdf9..6bb9cf4 100644
> --- a/xen/common/mem_access.c
> +++ b/xen/common/mem_access.c
> @@ -29,8 +29,6 @@
>  #include <xen/mem_event.h>
>  #include <xsm/xsm.h>
>
> -#ifdef CONFIG_X86
> -
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
>  {
> @@ -45,9 +43,11 @@ int mem_access_memop(unsigned long cmd,
>      if ( rc )
>          return rc;
>
> +#ifdef CONFIG_X86
>      rc = -EINVAL;
>      if ( !is_hvm_domain(d) )
>          goto out;
> +#endif
>
>      rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
>      if ( rc )
> @@ -125,8 +125,6 @@ int mem_access_send_req(struct domain *d,
> mem_event_request_t *req)
>      return 0;
>  }
>
> -#endif /* CONFIG_X86 */
> -
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
> index a94ddf6..2a91928 100644
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -20,16 +20,19 @@
>   * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307
> USA
>   */
>
> -#ifdef CONFIG_X86
> -
> +#include <xen/sched.h>
>  #include <asm/domain.h>
>  #include <xen/event.h>
>  #include <xen/wait.h>
>  #include <asm/p2m.h>
>  #include <xen/mem_event.h>
>  #include <xen/mem_access.h>
> +
> +#ifdef CONFIG_X86
>  #include <asm/mem_paging.h>
>  #include <asm/mem_sharing.h>
> +#endif
> +
>  #include <xsm/xsm.h>
>
>  /* for public/io/ring.h macros */
> @@ -427,6 +430,7 @@ static void mem_access_notification(struct vcpu *v,
> unsigned int port)
>          p2m_mem_access_resume(v->domain);
>  }
>
> +#ifdef CONFIG_X86
>  /* Registered with Xen-bound event channel for incoming notifications. */
>  static void mem_paging_notification(struct vcpu *v, unsigned int port)
>  {
> @@ -470,6 +474,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
>      rcu_unlock_domain(d);
>      return ret;
>  }
> +#endif
>
>  /* Clean up on domain destruction */
>  void mem_event_cleanup(struct domain *d)
> @@ -538,6 +543,8 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
>          {
>          case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
>          {
> +
> +#ifdef CONFIG_X86
>              rc = -ENODEV;
>              /* Only HAP is supported */
>              if ( !hap_enabled(d) )
> @@ -546,6 +553,7 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
>              /* Currently only EPT is supported */
>              if ( !cpu_has_vmx )
>                  break;
> +#endif
>
>              rc = mem_event_enable(d, mec, med, _VPF_mem_access,
>                                      HVM_PARAM_ACCESS_RING_PFN,
> @@ -567,6 +575,7 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
>      }
>      break;
>
> +#ifdef CONFIG_X86
>      case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
>      {
>          struct mem_event_domain *med = &d->mem_event->paging;
> @@ -656,6 +665,7 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
>          }
>      }
>      break;
> +#endif
>
>      default:
>          rc = -ENOSYS;
> @@ -695,7 +705,6 @@ void mem_event_vcpu_unpause(struct vcpu *v)
>
>      vcpu_unpause(v);
>  }
> -#endif
>
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 06c93a0..f3d1f33 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -2,9 +2,55 @@
>  #define _XEN_P2M_H
>
>  #include <xen/mm.h>
> +#include <public/memory.h>
> +#include <public/mem_event.h>
>
>  struct domain;
>
> +/* List of possible type for each page in the p2m entry.
> + * The number of available bit per page in the pte for this purpose is 4
> bits.
> + * So it's possible to only have 16 fields. If we run out of value in the
> + * future, it's possible to use higher value for pseudo-type and don't
> store
> + * them in the p2m entry.
> + */
> +typedef enum {
> +    p2m_invalid = 0,    /* Nothing mapped here */
> +    p2m_ram_rw,         /* Normal read/write guest RAM */
> +    p2m_ram_ro,         /* Read-only; writes are silently dropped */
> +    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
> +    p2m_map_foreign,    /* Ram pages from foreign domain */
> +    p2m_grant_map_rw,   /* Read/write grant mapping */
> +    p2m_grant_map_ro,   /* Read-only grant mapping */
> +    /* The types below are only used to decide the page attribute in the
> P2M */
> +    p2m_iommu_map_rw,   /* Read/write iommu mapping */
> +    p2m_iommu_map_ro,   /* Read-only iommu mapping */
> +    p2m_max_real_type,  /* Types after this won't be store in the p2m */
> +} p2m_type_t;
> +
> +/*
> + * Additional access types, which are used to further restrict
> + * the permissions given by the p2m_type_t memory type.  Violations
> + * caused by p2m_access_t restrictions are sent to the mem_event
> + * interface.
> + *
> + * The access permissions are soft state: when any ambiguous change of page
> + * type or use occurs, or when pages are flushed, swapped, or at any other
> + * convenient time, the access permissions can get reset to the p2m_domain
> + * default.
> + */
> +typedef enum {
> +    p2m_access_n     = 0, /* No access permissions allowed */
> +    p2m_access_r     = 1,
> +    p2m_access_w     = 2,
> +    p2m_access_rw    = 3,
> +    p2m_access_x     = 4,
> +    p2m_access_rx    = 5,
> +    p2m_access_wx    = 6,
> +    p2m_access_rwx   = 7
> +
> +    /* NOTE: Assumed to be only 4 bits right now */
> +} p2m_access_t;
> +
>  /* Per-p2m-table state */
>  struct p2m_domain {
>      /* Lock that protects updates to the p2m */
> @@ -38,27 +84,17 @@ struct p2m_domain {
>           * at each p2m tree level. */
>          unsigned long shattered[4];
>      } stats;
> -};
>
> -/* List of possible type for each page in the p2m entry.
> - * The number of available bit per page in the pte for this purpose is 4
> bits.
> - * So it's possible to only have 16 fields. If we run out of value in the
> - * future, it's possible to use higher value for pseudo-type and don't
> store
> - * them in the p2m entry.
> - */
> -typedef enum {
> -    p2m_invalid = 0,    /* Nothing mapped here */
> -    p2m_ram_rw,         /* Normal read/write guest RAM */
> -    p2m_ram_ro,         /* Read-only; writes are silently dropped */
> -    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
> -    p2m_map_foreign,    /* Ram pages from foreign domain */
> -    p2m_grant_map_rw,   /* Read/write grant mapping */
> -    p2m_grant_map_ro,   /* Read-only grant mapping */
> -    /* The types below are only used to decide the page attribute in the
> P2M */
> -    p2m_iommu_map_rw,   /* Read/write iommu mapping */
> -    p2m_iommu_map_ro,   /* Read-only iommu mapping */
> -    p2m_max_real_type,  /* Types after this won't be store in the p2m */
> -} p2m_type_t;
> +    /* Default P2M access type for each page in the domain: new pages,
> +     * swapped in pages, cleared pages, and pages that are ambiguously
> +     * retyped get this access type.  See definition of p2m_access_t. */
> +    p2m_access_t default_access;
> +
> +    /* If true, and an access fault comes in and there is no mem_event
> listener,
> +     * pause domain.  Otherwise, remove access restrictions. */
> +    bool_t       access_required;
> +
> +};
>
>  #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
>  #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
> @@ -195,6 +231,30 @@ static inline int get_page_and_type(struct page_info
> *page,
>      return rc;
>  }
>
> +/* get host p2m table */
> +#define p2m_get_hostp2m(d)      (&((d)->arch.p2m))
> +
> +/* Resumes the running of the VCPU, restarting the last instruction */
> +static inline void p2m_mem_access_resume(struct domain *d) {}
> +
> +/* Set access type for a region of pfns.
> + * If start_pfn == -1ul, sets the default access type */
> +static inline
> +long p2m_set_mem_access(struct domain *d, unsigned long start_pfn,
> uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t
> access)
> +{
> +    return -ENOSYS;
> +}
> +
> +/* Get access type for a pfn
> + * If pfn == -1ul, gets the default access type */
> +static inline
> +int p2m_get_mem_access(struct domain *d, unsigned long pfn,
> +                       xenmem_access_t *access)
> +{
> +    return -ENOSYS;
> +}
> +
>  #endif /* _XEN_P2M_H */
>
>  /*
> diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
> index ded5441..5c7c5fd 100644
> --- a/xen/include/xen/mem_access.h
> +++ b/xen/include/xen/mem_access.h
> @@ -23,29 +23,10 @@
>  #ifndef _XEN_ASM_MEM_ACCESS_H
>  #define _XEN_ASM_MEM_ACCESS_H
>
> -#ifdef CONFIG_X86
> -
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
>  int mem_access_send_req(struct domain *d, mem_event_request_t *req);
>
> -#else
> -
> -static inline
> -int mem_access_memop(unsigned long cmd,
> -                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
> -{
> -    return -ENOSYS;
> -}
> -
> -static inline
> -int mem_access_send_req(struct domain *d, mem_event_request_t *req)
> -{
> -    return -ENOSYS;
> -}
> -
> -#endif /* CONFIG_X86 */
> -
>  #endif /* _XEN_ASM_MEM_ACCESS_H */
>
>  /*
> diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
> index a28d453..e2a9d4d 100644
> --- a/xen/include/xen/mem_event.h
> +++ b/xen/include/xen/mem_event.h
> @@ -24,8 +24,6 @@
>  #ifndef __MEM_EVENT_H__
>  #define __MEM_EVENT_H__
>
> -#ifdef CONFIG_X86
> -
>  /* Clean up on domain destruction */
>  void mem_event_cleanup(struct domain *d);
>
> @@ -67,66 +65,25 @@ void mem_event_put_request(struct domain *d, struct
> mem_event_domain *med,
>  int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
>                             mem_event_response_t *rsp);
>
> -int do_mem_event_op(int op, uint32_t domain, void *arg);
>  int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>                       XEN_GUEST_HANDLE_PARAM(void) u_domctl);
>
>  void mem_event_vcpu_pause(struct vcpu *v);
>  void mem_event_vcpu_unpause(struct vcpu *v);
>
> -#else
> -
> -static inline void mem_event_cleanup(struct domain *d) {}
> -
> -static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
> -{
> -    return 0;
> -}
> -
> -static inline int mem_event_claim_slot(struct domain *d,
> -                                        struct mem_event_domain *med)
> -{
> -    return -ENOSYS;
> -}
> -
> -static inline int mem_event_claim_slot_nosleep(struct domain *d,
> -                                        struct mem_event_domain *med)
> -{
> -    return -ENOSYS;
> -}
> -
> -static inline
> -void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
> -{}
> -
> -static inline
> -void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
> -                            mem_event_request_t *req)
> -{}
> +#ifdef CONFIG_X86
>
> -static inline
> -int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
> -                           mem_event_response_t *rsp)
> -{
> -    return -ENOSYS;
> -}
> +int do_mem_event_op(int op, uint32_t domain, void *arg);
>
> -static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
> -{
> -    return -ENOSYS;
> -}
> +#else
>
>  static inline
> -int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
> -                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
> +int do_mem_event_op(int op, uint32_t domain, void *arg)
>  {
>      return -ENOSYS;
>  }
>
> -static inline void mem_event_vcpu_pause(struct vcpu *v) {}
> -static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
> -
> -#endif /* CONFIG_X86 */
> +#endif
>
>  #endif /* __MEM_EVENT_H__ */
>
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 4575dda..2365fad 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1,4 +1,3 @@
> -
>  #ifndef __SCHED_H__
>  #define __SCHED_H__
>
> diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
> index c5aa316..61677ea 100644
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -507,6 +507,18 @@ static XSM_INLINE int
> xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
>      return xsm_default_action(action, current->domain, d);
>  }
>
> +static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain
> *d, int mode, int op)
> +{
> +    XSM_ASSERT_ACTION(XSM_PRIV);
> +    return xsm_default_action(action, current->domain, d);
> +}
> +
> +static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d,
> int op)
> +{
> +    XSM_ASSERT_ACTION(XSM_DM_PRIV);
> +    return xsm_default_action(action, current->domain, d);
> +}
> +
>  #ifdef CONFIG_X86
>  static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
>  {
> @@ -550,18 +562,6 @@ static XSM_INLINE int
> xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int
>      return xsm_default_action(action, current->domain, d);
>  }
>
> -static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain
> *d, int mode, int op)
> -{
> -    XSM_ASSERT_ACTION(XSM_PRIV);
> -    return xsm_default_action(action, current->domain, d);
> -}
> -
> -static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d,
> int op)
> -{
> -    XSM_ASSERT_ACTION(XSM_DM_PRIV);
> -    return xsm_default_action(action, current->domain, d);
> -}
> -
>  static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain
> *d, struct domain *cd, int op)
>  {
>      XSM_ASSERT_ACTION(XSM_DM_PRIV);
> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> index a85045d..64289cd 100644
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -140,6 +140,9 @@ struct xsm_operations {
>      int (*hvm_control) (struct domain *d, unsigned long op);
>      int (*hvm_param_nested) (struct domain *d);
>
> +    int (*mem_event_control) (struct domain *d, int mode, int op);
> +    int (*mem_event_op) (struct domain *d, int op);
> +
>  #ifdef CONFIG_X86
>      int (*do_mca) (void);
>      int (*shadow_control) (struct domain *d, uint32_t op);
> @@ -148,8 +151,6 @@ struct xsm_operations {
>      int (*hvm_set_pci_link_route) (struct domain *d);
>      int (*hvm_inject_msi) (struct domain *d);
>      int (*hvm_ioreq_server) (struct domain *d, int op);
> -    int (*mem_event_control) (struct domain *d, int mode, int op);
> -    int (*mem_event_op) (struct domain *d, int op);
>      int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
>      int (*apic) (struct domain *d, int cmd);
>      int (*memtype) (uint32_t access);
> @@ -534,6 +535,16 @@ static inline int xsm_hvm_param_nested (xsm_default_t
> def, struct domain *d)
>      return xsm_ops->hvm_param_nested(d);
>  }
>
> +static inline int xsm_mem_event_control (xsm_default_t def, struct domain
> *d, int mode, int op)
> +{
> +    return xsm_ops->mem_event_control(d, mode, op);
> +}
> +
> +static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d,
> int op)
> +{
> +    return xsm_ops->mem_event_op(d, op);
> +}
> +
>  #ifdef CONFIG_X86
>  static inline int xsm_do_mca(xsm_default_t def)
>  {
> @@ -570,16 +581,6 @@ static inline int xsm_hvm_ioreq_server (xsm_default_t
> def, struct domain *d, int
>      return xsm_ops->hvm_ioreq_server(d, op);
>  }
>
> -static inline int xsm_mem_event_control (xsm_default_t def, struct domain
> *d, int mode, int op)
> -{
> -    return xsm_ops->mem_event_control(d, mode, op);
> -}
> -
> -static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d,
> int op)
> -{
> -    return xsm_ops->mem_event_op(d, op);
> -}
> -
>  static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain
> *d, struct domain *cd, int op)
>  {
>      return xsm_ops->mem_sharing_op(d, cd, op);
> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> index c95c803..9df9d81 100644
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -116,6 +116,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>      set_to_dummy_if_null(ops, add_to_physmap);
>      set_to_dummy_if_null(ops, remove_from_physmap);
>      set_to_dummy_if_null(ops, map_gmfn_foreign);
> +    set_to_dummy_if_null(ops, mem_event_control);
> +    set_to_dummy_if_null(ops, mem_event_op);
>
>  #ifdef CONFIG_X86
>      set_to_dummy_if_null(ops, do_mca);
> @@ -125,8 +127,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>      set_to_dummy_if_null(ops, hvm_set_pci_link_route);
>      set_to_dummy_if_null(ops, hvm_inject_msi);
>      set_to_dummy_if_null(ops, hvm_ioreq_server);
> -    set_to_dummy_if_null(ops, mem_event_control);
> -    set_to_dummy_if_null(ops, mem_event_op);
>      set_to_dummy_if_null(ops, mem_sharing_op);
>      set_to_dummy_if_null(ops, apic);
>      set_to_dummy_if_null(ops, platform_op);
> --
> 2.0.1
>
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-08-25 17:25   ` Andres Lagar Cavilla
@ 2014-08-26  8:32     ` Tamas K Lengyel
  0 siblings, 0 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-26  8:32 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
	Stefano Stabellini, Jan Beulich, Daniel De Graaf


On Mon, Aug 25, 2014 at 7:25 PM, Andres Lagar Cavilla <
andres@lagarcavilla.org> wrote:

> On Fri, Aug 22, 2014 at 2:30 AM, Tamas K Lengyel <
> tamas.lengyel@zentific.com> wrote:
>
>> This patch sets up the infrastructure to support mem_access and mem_event
>>  on ARM and turns on compilation. We define the required XSM functions,
>> handling of domctl copyback, and the required p2m types and stub-functions
>> in this patch.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>
> Non-ARM bits LGTM to me. I see here the disablement of CONFIG_X86.
>
> If Xen were to ever support another architecture (hello IA64), it might be
> more reasonable to keep an #ifdef CONFIG_X86 && CONFIG_ARM. I don't know
> how unlikely that future direction might be.
>
> Andres
>


Certainly, that would make it more forward-compatible.
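
Something like the below, presumably; a rough sketch only, with the
prototype taken from this series and the combined defined() test being
hypothetical:

    #if defined(CONFIG_X86) || defined(CONFIG_ARM)
    int mem_access_memop(unsigned long cmd,
                         XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
    #else
    static inline
    int mem_access_memop(unsigned long cmd,
                         XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
    {
        return -ENOSYS;
    }
    #endif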

Tamas


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-25 17:19   ` Andres Lagar Cavilla
@ 2014-08-26 10:52     ` Tamas K Lengyel
  2014-08-26 12:42       ` Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-26 10:52 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
	Stefano Stabellini, Jan Beulich, Daniel De Graaf


On Mon, Aug 25, 2014 at 7:19 PM, Andres Lagar Cavilla <
andres@lagarcavilla.org> wrote:

> On Fri, Aug 22, 2014 at 2:30 AM, Tamas K Lengyel <
> tamas.lengyel@zentific.com> wrote:
>
>> In preparation to add support for ARM LPAE mem_event, relocate mem_access
>> and mem_event into common Xen code. This patch makes no functional changes
>> to the X86 side, for ARM mem_event and mem_access functions are just
>> placeholder stubs.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>
> Big patch and assuming code motion LGTM. Couple of observations though.
>
> Snip...
>
>
>> -        req->flags |= MEM_EVENT_FLAG_FOREIGN;
>> -        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
>>
> Take the opportunity to downgrade the aggressiveness of this at some point
> in this series. (I'd prefer to keep code motion patches as purely code
> motion).
>
> A faulty tool stack can brick a debug hypervisor. Unpleasant while
> dev/test.
>

I'm a little hazy on the situations in which this could arise and what
it is trying to protect against. I could wrap the condition in
unlikely() and print a debug message rather than bricking the VMM with
ASSERT(), and/or enable the ASSERT only when building with debug=y.


>
> Snip ...
>
>>
>> +++ b/xen/common/mem_access.c
>> @@ -0,0 +1,137 @@
>>
>> +/******************************************************************************
>> + * mem_access.c
>> + *
>> + * Memory access support.
>> + *
>> + * Copyright (c) 2011 Virtuata, Inc.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program; if not, write to the Free Software
>> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307
>> USA
>> + */
>> +
>> +
>> +#include <xen/sched.h>
>> +#include <xen/guest_access.h>
>> +#include <xen/hypercall.h>
>> +#include <asm/p2m.h>
>> +#include <public/memory.h>
>> +#include <xen/mem_event.h>
>> +#include <xsm/xsm.h>
>> +
>> +#ifdef CONFIG_X86
>>
>
> Presumably this [will later be|should be] changed to consider CONFIG_ARM?
>
> Many more like that one below with the same comment.
>
> Thanks
> Andres
>

Right, this code motion was already big enough that I figured it would make
for an easier review if I split the series here.

Tamas


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-26 10:52     ` Tamas K Lengyel
@ 2014-08-26 12:42       ` Jan Beulich
  2014-08-26 13:25         ` Tamas K Lengyel
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2014-08-26 12:42 UTC (permalink / raw)
  To: Andres Lagar Cavilla, Tamas K Lengyel
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
	Stefano Stabellini, Daniel De Graaf

>>> On 26.08.14 at 12:52, <tamas.lengyel@zentific.com> wrote:
> On Mon, Aug 25, 2014 at 7:19 PM, Andres Lagar Cavilla <
> andres@lagarcavilla.org> wrote:
>> On Fri, Aug 22, 2014 at 2:30 AM, Tamas K Lengyel <
>>> -        req->flags |= MEM_EVENT_FLAG_FOREIGN;
>>> -        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
>>>
>> Take the opportunity to downgrade the aggressiveness of this at some point
>> in this series. (I'd prefer to keep code motion patches as purely code
>> motion).
>>
>> A faulty tool stack can brick a debug hypervisor. Unpleasant while
>> dev/test.
>>
> 
> I'm a little bit hazy in what situations this could arise and what it is
> trying to protect against. I could wrap the condition into an unlikely()
> and have a debug message printed rather than bricking the VMM with ASSERT()
> and/or enable the ASSERT only when we are building with debug=y.

ASSERT()s are enabled only when debug=y (i.e. NDEBUG not defined
at the C level).
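
Roughly speaking (a simplified sketch of xen/include/xen/lib.h; the
exact spelling may differ between versions):

    #ifndef NDEBUG
    #define ASSERT(p) \
        do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
    #else
    #define ASSERT(p) ((void)0)
    #endif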

Jan


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-26 12:42       ` Jan Beulich
@ 2014-08-26 13:25         ` Tamas K Lengyel
  0 siblings, 0 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-26 13:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
	Stefano Stabellini, Andres Lagar Cavilla, Daniel De Graaf


On Tue, Aug 26, 2014 at 2:42 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 26.08.14 at 12:52, <tamas.lengyel@zentific.com> wrote:
> > On Mon, Aug 25, 2014 at 7:19 PM, Andres Lagar Cavilla <
> > andres@lagarcavilla.org> wrote:
> >> On Fri, Aug 22, 2014 at 2:30 AM, Tamas K Lengyel <
> >>> -        req->flags |= MEM_EVENT_FLAG_FOREIGN;
> >>> -        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
> >>>
> >> Take the opportunity to downgrade the aggressiveness of this at some
> point
> >> in this series. (I'd prefer to keep code motion patches as purely code
> >> motion).
> >>
> >> A faulty tool stack can brick a debug hypervisor. Unpleasant while
> >> dev/test.
> >>
> >
> > I'm a little bit hazy in what situations this could arise and what it is
> > trying to protect against. I could wrap the condition into an unlikely()
> > and have a debug message printed rather than bricking the VMM with
> ASSERT()
> > and/or enable the ASSERT only when we are building with debug=y.
>
> ASSERT()s are enabled only when debug=y (i.e. NDEBUG not defined
> at the C level).
>
> Jan
>

Ah, OK, I'll just switch it to a debug print in an #ifndef NDEBUG block
then. Thanks for clarifying that ASSERT() behavior!
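
Roughly along these lines (just a sketch; the final patch may differ):

    req->flags |= MEM_EVENT_FLAG_FOREIGN;
    #ifndef NDEBUG
    if ( req->flags & MEM_EVENT_FLAG_VCPU_PAUSED )
        gdprintk(XENLOG_WARNING,
                 "foreign mem_event request with VCPU_PAUSED set\n");
    #endif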

Tamas


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-22  9:30 ` [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
  2014-08-25 17:19   ` Andres Lagar Cavilla
@ 2014-08-26 13:34   ` Jan Beulich
  2014-08-26 14:42     ` Tamas K Lengyel
  1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2014-08-26 13:34 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: keir, ian.campbell, tim, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
> In preparation to add support for ARM LPAE mem_event, relocate mem_access
> and mem_event into common Xen code. This patch makes no functional changes
> to the X86 side, for ARM mem_event and mem_access functions are just
> placeholder stubs.

"Makes no functional changes" is a little too weak a statement to
efficiently review such a big patch: Please clarify how much of the
non-benign changes (like asm/ -> xen/ include file path adjustments)
is really just code movement vs. where you needed to actually alter
the code.

>  xen/arch/x86/domctl.c            |   2 +-
>  xen/arch/x86/hvm/hvm.c           |  61 +---
>  xen/arch/x86/mm/Makefile         |   2 -
>  xen/arch/x86/mm/hap/nested_ept.c |   2 +-
>  xen/arch/x86/mm/hap/nested_hap.c |   2 +-
>  xen/arch/x86/mm/mem_access.c     | 133 --------
>  xen/arch/x86/mm/mem_event.c      | 705 --------------------------------------
>  xen/arch/x86/mm/mem_paging.c     |   2 +-
>  xen/arch/x86/mm/mem_sharing.c    |   2 +-
>  xen/arch/x86/mm/p2m-pod.c        |   2 +-
>  xen/arch/x86/mm/p2m-pt.c         |   2 +-
>  xen/arch/x86/mm/p2m.c            |   2 +-
>  xen/arch/x86/x86_64/compat/mm.c  |   4 +-
>  xen/arch/x86/x86_64/mm.c         |   4 +-
>  xen/common/Makefile              |   2 +
>  xen/common/domain.c              |   1 +
>  xen/common/mem_access.c          | 137 ++++++++
>  xen/common/mem_event.c           | 707 +++++++++++++++++++++++++++++++++++++++
>  xen/common/memory.c              |  62 ++++
>  xen/include/asm-arm/mm.h         |   1 -
>  xen/include/asm-x86/hvm/hvm.h    |   6 -
>  xen/include/asm-x86/mem_access.h |  39 ---
>  xen/include/asm-x86/mem_event.h  |  82 -----
>  xen/include/asm-x86/mm.h         |   2 -
>  xen/include/xen/mem_access.h     |  58 ++++
>  xen/include/xen/mem_event.h      | 141 ++++++++
>  xen/include/xen/mm.h             |   6 +
>  27 files changed, 1128 insertions(+), 1041 deletions(-)

This changes maintainership of a couple of files without also
adjusting ./MAINTAINERS.

>  delete mode 100644 xen/arch/x86/mm/mem_access.c
>  delete mode 100644 xen/arch/x86/mm/mem_event.c
>  create mode 100644 xen/common/mem_access.c
>  create mode 100644 xen/common/mem_event.c
>  delete mode 100644 xen/include/asm-x86/mem_access.h
>  delete mode 100644 xen/include/asm-x86/mem_event.h
>  create mode 100644 xen/include/xen/mem_access.h
>  create mode 100644 xen/include/xen/mem_event.h

Doesn't git have a mode where moves can be reflected as ordinary
diffs rather than as deletes/creates? While I'm not sure my scripts
would cope with applying such a patch, it surely would make review
easier (and perhaps even eliminate the question raised at the top).

Jan


* Re: [PATCH RFC 2/7] xen/mem_event: Clean out superflous white-spaces
  2014-08-22  9:30 ` [PATCH RFC 2/7] xen/mem_event: Clean out superflous white-spaces Tamas K Lengyel
  2014-08-25 17:20   ` Andres Lagar Cavilla
@ 2014-08-26 13:35   ` Jan Beulich
  2014-08-26 13:59     ` Tamas K Lengyel
  1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2014-08-26 13:35 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: keir, ian.campbell, tim, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
>  xen/common/mem_event.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)

This is actually something that I think you could legitimately do while
moving the file. But in the end Tim will have to tell you how he likes
it best.

Jan


* Re: [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-08-22  9:30 ` [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
  2014-08-25 17:25   ` Andres Lagar Cavilla
@ 2014-08-26 13:51   ` Jan Beulich
       [not found]     ` <CAErYnshbvgxzBVSPu0mM3UUc0kr_zfENiHw9KmT=30-kpy_DZA@mail.gmail.com>
  1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2014-08-26 13:51 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: keir, ian.campbell, tim, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -11,10 +11,17 @@
>  #include <xen/sched.h>
>  #include <xen/hypercall.h>
>  #include <public/domctl.h>
> +#include <asm/guest_access.h>
> +#include <xen/mem_event.h>
> +#include <public/mem_event.h>
>  
>  long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> +
> +    long ret;
> +    bool_t copyback = 0;
> +
>      switch ( domctl->cmd )
>      {
>      case XEN_DOMCTL_cacheflush:
> @@ -23,17 +30,38 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>          unsigned long e = s + domctl->u.cacheflush.nr_pfns;
>  
>          if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
> -            return -EINVAL;
> +        {
> +            ret = -EINVAL;
> +            break;
> +        }
>  
>          if ( e < s )
> -            return -EINVAL;
> +        {
> +            ret = -EINVAL;
> +            break;
> +        }
>  
> -        return p2m_cache_flush(d, s, e);
> +        ret = p2m_cache_flush(d, s, e);
>      }
> +    break;

What's wrong with the code above?

> +
> +    case XEN_DOMCTL_mem_event_op:
> +    {
> +        ret = mem_event_domctl(d, &domctl->u.mem_event_op,
> +                              guest_handle_cast(u_domctl, void));
> +        copyback = 1;
> +    }

Pointless curly braces. Furthermore this already goes beyond what
the subject says, so you may want to adjust the title.
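
I.e. presumably just:

    case XEN_DOMCTL_mem_event_op:
        ret = mem_event_domctl(d, &domctl->u.mem_event_op,
                               guest_handle_cast(u_domctl, void));
        copyback = 1;
        break;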

> +    break;
>  
>      default:
> -        return subarch_do_domctl(domctl, d, u_domctl);
> +        ret = subarch_do_domctl(domctl, d, u_domctl);
> +        break;

Again an unnecessary change.

> @@ -1111,18 +1114,27 @@ int xenmem_add_to_physmap_one(
>  
>  long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    switch ( op )
> +
> +    long rc;
> +
> +    switch ( op & MEMOP_CMD_MASK )
>      {
>      /* XXX: memsharing not working yet */
>      case XENMEM_get_sharing_shared_pages:
>      case XENMEM_get_sharing_freed_pages:
>          return 0;
> +    case XENMEM_access_op:

Missing blank line before new case.

> +    {
> +        rc = mem_access_memop(op, guest_handle_cast(arg, xen_mem_access_op_t));
> +        break;
> +    }

Pointless curly braces again.

>      default:
> -        return -ENOSYS;
> +        rc = -ENOSYS;
> +        break;

Unnecessary change again.

> --- a/xen/common/mem_access.c
> +++ b/xen/common/mem_access.c
> @@ -29,8 +29,6 @@
>  #include <xen/mem_event.h>
>  #include <xsm/xsm.h>
>  
> -#ifdef CONFIG_X86
> -
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
>  {
> @@ -45,9 +43,11 @@ int mem_access_memop(unsigned long cmd,
>      if ( rc )
>          return rc;
>  
> +#ifdef CONFIG_X86
>      rc = -EINVAL;
>      if ( !is_hvm_domain(d) )
>          goto out;
> +#endif

Ugly, but well, I don't see a nice alternative.

> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -20,16 +20,19 @@
>   * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
>   */
>  
> -#ifdef CONFIG_X86
> -
> +#include <xen/sched.h>
>  #include <asm/domain.h>
>  #include <xen/event.h>
>  #include <xen/wait.h>
>  #include <asm/p2m.h>
>  #include <xen/mem_event.h>
>  #include <xen/mem_access.h>
> +
> +#ifdef CONFIG_X86
>  #include <asm/mem_paging.h>
>  #include <asm/mem_sharing.h>
> +#endif

Wouldn't that warrant introduction of HAVE_MEM_SHARING and
HAVE_MEM_PAGING?

> @@ -538,6 +543,8 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>          {
>          case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
>          {
> +
> +#ifdef CONFIG_X86

Bogus blank line above the #ifdef.

>              rc = -ENODEV;
>              /* Only HAP is supported */
>              if ( !hap_enabled(d) )
> @@ -546,6 +553,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>              /* Currently only EPT is supported */
>              if ( !cpu_has_vmx )
>                  break;
> +#endif

Code like what's getting enclosed in the #ifdef here should really
get abstracted out up front.

> +typedef enum {
> +    p2m_access_n     = 0, /* No access permissions allowed */
> +    p2m_access_r     = 1,
> +    p2m_access_w     = 2, 
> +    p2m_access_rw    = 3,
> +    p2m_access_x     = 4, 
> +    p2m_access_rx    = 5,
> +    p2m_access_wx    = 6, 
> +    p2m_access_rwx   = 7
> +
> +    /* NOTE: Assumed to be only 4 bits right now */

The comment seems bogus (i.e. blindly copied from x86). Furthermore
I think all the types above are really generic, i.e. may warrant placing
in a common header (with a per-arch define of extra types to be
added).
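
One possible shape (macro name and placement purely illustrative):

    /* Common header: */
    #define P2M_COMMON_ACCESS_TYPES \
        p2m_access_n = 0, p2m_access_r, p2m_access_w, p2m_access_rw, \
        p2m_access_x, p2m_access_rx, p2m_access_wx, p2m_access_rwx

    /* Each arch then appends its extra types, e.g. on x86: */
    typedef enum {
        P2M_COMMON_ACCESS_TYPES,
        p2m_access_rx2rw,
        p2m_access_n2rwx,
    } p2m_access_t;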

Jan


* Re: [PATCH RFC 2/7] xen/mem_event: Clean out superflous white-spaces
  2014-08-26 13:35   ` Jan Beulich
@ 2014-08-26 13:59     ` Tamas K Lengyel
  0 siblings, 0 replies; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-26 13:59 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf


[-- Attachment #1.1: Type: text/plain, Size: 589 bytes --]

On Tue, Aug 26, 2014 at 3:35 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> > ---
> >  xen/common/mem_event.c | 16 ++++++++--------
> >  1 file changed, 8 insertions(+), 8 deletions(-)
>
> This is actually something that I think you could legitimately do while
> moving the file. But in the end Tim will have to tell you how he likes
> it best.
>
> Jan
>

Certainly, I just felt it might be easier to review the code movement
without mixing this into it.

Tamas


* Re: [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
       [not found]     ` <CAErYnshbvgxzBVSPu0mM3UUc0kr_zfENiHw9KmT=30-kpy_DZA@mail.gmail.com>
@ 2014-08-26 14:38       ` Jan Beulich
  2014-08-26 15:21         ` Tamas K Lengyel
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2014-08-26 14:38 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel

>>> On 26.08.14 at 16:23, <tamas.lengyel@zentific.com> wrote:

(looks like you dropped Cc-s)

> On Tue, Aug 26, 2014 at 3:51 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
>> > --- a/xen/common/mem_event.c
>> > +++ b/xen/common/mem_event.c
>> > @@ -20,16 +20,19 @@
>> >   * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307  USA
>> >   */
>> >
>> > -#ifdef CONFIG_X86
>> > -
>> > +#include <xen/sched.h>
>> >  #include <asm/domain.h>
>> >  #include <xen/event.h>
>> >  #include <xen/wait.h>
>> >  #include <asm/p2m.h>
>> >  #include <xen/mem_event.h>
>> >  #include <xen/mem_access.h>
>> > +
>> > +#ifdef CONFIG_X86
>> >  #include <asm/mem_paging.h>
>> >  #include <asm/mem_sharing.h>
>> > +#endif
>>
>> Wouldn't that warrant introduction of HAVE_MEM_SHARING and
>> HAVE_MEM_PAGING?
> 
> Can you please elaborate what you mean? I'm not sure how to address this.

We've already got a number of CONFIG_ and HAVE_ manifest
constants (actually I think we use HAVE_* in makefiles and prefer
CONFIG_* in actual sources), which is what I'd prefer here over
explicit use of CONFIG_X86.
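
I.e. something along these lines (a sketch; neither constant exists
yet, and each arch would define only the ones it implements):

    #ifdef HAVE_MEM_PAGING
    #include <asm/mem_paging.h>
    #endif
    #ifdef HAVE_MEM_SHARING
    #include <asm/mem_sharing.h>
    #endif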

>> > @@ -546,6 +553,7 @@ int mem_event_domctl(struct domain *d,
>> xen_domctl_mem_event_op_t *mec,
>> >              /* Currently only EPT is supported */
>> >              if ( !cpu_has_vmx )
>> >                  break;
>> > +#endif
>>
>> Code like what's getting enclosed in the #ifdef here should really
>> get abstracted out up front.
>>
> 
> Can you specify what you mean by "up front"?

In a prereq patch.

Jan


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-26 13:34   ` Jan Beulich
@ 2014-08-26 14:42     ` Tamas K Lengyel
  2014-08-26 15:32       ` Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-26 14:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf


On Tue, Aug 26, 2014 at 3:34 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
> > In preparation to add support for ARM LPAE mem_event, relocate mem_access
> > and mem_event into common Xen code. This patch makes no functional
> changes
> > to the X86 side, for ARM mem_event and mem_access functions are just
> > placeholder stubs.
>
> "Makes no functional changes" is a little too weak a statement to
> efficiently review such a big patch: Please clarify how much of the
> non-benign changes (like asm/ -> xen/ include file path adjustments)
> is really just code movement vs. where you needed to actually alter
> the code.
>

OK, I'll do that in the next version!


>
> >  xen/arch/x86/domctl.c            |   2 +-
> >  xen/arch/x86/hvm/hvm.c           |  61 +---
> >  xen/arch/x86/mm/Makefile         |   2 -
> >  xen/arch/x86/mm/hap/nested_ept.c |   2 +-
> >  xen/arch/x86/mm/hap/nested_hap.c |   2 +-
> >  xen/arch/x86/mm/mem_access.c     | 133 --------
> >  xen/arch/x86/mm/mem_event.c      | 705
> --------------------------------------
> >  xen/arch/x86/mm/mem_paging.c     |   2 +-
> >  xen/arch/x86/mm/mem_sharing.c    |   2 +-
> >  xen/arch/x86/mm/p2m-pod.c        |   2 +-
> >  xen/arch/x86/mm/p2m-pt.c         |   2 +-
> >  xen/arch/x86/mm/p2m.c            |   2 +-
> >  xen/arch/x86/x86_64/compat/mm.c  |   4 +-
> >  xen/arch/x86/x86_64/mm.c         |   4 +-
> >  xen/common/Makefile              |   2 +
> >  xen/common/domain.c              |   1 +
> >  xen/common/mem_access.c          | 137 ++++++++
> >  xen/common/mem_event.c           | 707
> +++++++++++++++++++++++++++++++++++++++
> >  xen/common/memory.c              |  62 ++++
> >  xen/include/asm-arm/mm.h         |   1 -
> >  xen/include/asm-x86/hvm/hvm.h    |   6 -
> >  xen/include/asm-x86/mem_access.h |  39 ---
> >  xen/include/asm-x86/mem_event.h  |  82 -----
> >  xen/include/asm-x86/mm.h         |   2 -
> >  xen/include/xen/mem_access.h     |  58 ++++
> >  xen/include/xen/mem_event.h      | 141 ++++++++
> >  xen/include/xen/mm.h             |   6 +
> >  27 files changed, 1128 insertions(+), 1041 deletions(-
> This changes maintainership of a couple of files without also
> adjusting ./MAINTAINERS.
>

The only code that is shifted is the mem_event/mem_access code, and I
don't see entries for it in MAINTAINERS, so I think it would still just
fall under "THE REST". If there are other maintainership changes caused
by this patch that I'm unaware of, please let me know and I'll adjust
MAINTAINERS.


>
> >  delete mode 100644 xen/arch/x86/mm/mem_access.c
> >  delete mode 100644 xen/arch/x86/mm/mem_event.c
> >  create mode 100644 xen/common/mem_access.c
> >  create mode 100644 xen/common/mem_event.c
> >  delete mode 100644 xen/include/asm-x86/mem_access.h
> >  delete mode 100644 xen/include/asm-x86/mem_event.h
> >  create mode 100644 xen/include/xen/mem_access.h
> >  create mode 100644 xen/include/xen/mem_event.h
>
> Doesn't git have a mode where moves can be reflected as ordinary
> diffs rather than as deletes/creates? While I'm not sure my scripts
> would cope with applying such a patch, it surely would make review
> easier (and perhaps even eliminate the question raised at the top).
>
> Jan
>

I did git mv to relocate the files, and that produces the delete
mode/create mode entries. I don't know if there is a way to do what you
describe, but I'll look into it.
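
For reference, git's rename detection sounds like it could be what you
describe; a sketch of the relevant invocations (exact options and
thresholds may vary):

    git format-patch -M ...   # emit "rename from/to" pairs plus hunks
    git diff -M --stat        # preview how the moves would be shown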

Tamas


* Re: [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-08-26 14:38       ` Jan Beulich
@ 2014-08-26 15:21         ` Tamas K Lengyel
  2014-08-26 15:33           ` Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-26 15:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel


On Tue, Aug 26, 2014 at 4:38 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 26.08.14 at 16:23, <tamas.lengyel@zentific.com> wrote:
>
> (looks like you dropped Cc-s)
>
> > On Tue, Aug 26, 2014 at 3:51 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
> >> > --- a/xen/common/mem_event.c
> >> > +++ b/xen/common/mem_event.c
> >> > @@ -20,16 +20,19 @@
> >> >   * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
> 02111-1307  USA
> >> >   */
> >> >
> >> > -#ifdef CONFIG_X86
> >> > -
> >> > +#include <xen/sched.h>
> >> >  #include <asm/domain.h>
> >> >  #include <xen/event.h>
> >> >  #include <xen/wait.h>
> >> >  #include <asm/p2m.h>
> >> >  #include <xen/mem_event.h>
> >> >  #include <xen/mem_access.h>
> >> > +
> >> > +#ifdef CONFIG_X86
> >> >  #include <asm/mem_paging.h>
> >> >  #include <asm/mem_sharing.h>
> >> > +#endif
> >>
> >> Wouldn't that warrant introduction of HAVE_MEM_SHARING and
> >> HAVE_MEM_PAGING?
> >
> > Can you please elaborate what you mean? I'm not sure how to address this.
>
> We've already got a number of CONFIG_ and HAVE_ manifest
> constants (actually I think we use HAVE_* in makefiles and prefer
> CONFIG_* in actual sources), which is what I'd prefer here over
> explicit use of CONFIG_X86.
>

Ack.


>
> >> > @@ -546,6 +553,7 @@ int mem_event_domctl(struct domain *d,
> >> xen_domctl_mem_event_op_t *mec,
> >> >              /* Currently only EPT is supported */
> >> >              if ( !cpu_has_vmx )
> >> >                  break;
> >> > +#endif
> >>
> >> Code like what's getting enclosed in the #ifdef here should really
> >> get abstracted out up front.
> >>
> >
> > Can you specify what you mean by "up front"?
>
> In a prereq patch.
>
> Jan
>

I could move these checks into a separate static inline function in a
prereq patch, and here just add an empty version of the check for ARM,
if that is what you mean; something along the lines of the sketch below.
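
Roughly (helper name hypothetical):

    /* x86 version, hoisting the existing checks: */
    static inline int mem_event_enable_prereq(struct domain *d)
    {
        /* Only HAP with EPT is supported for now. */
        if ( !hap_enabled(d) || !cpu_has_vmx )
            return -ENODEV;
        return 0;
    }

    /* ARM version: no equivalent restrictions (yet). */
    static inline int mem_event_enable_prereq(struct domain *d)
    {
        return 0;
    }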

Tamas


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-26 14:42     ` Tamas K Lengyel
@ 2014-08-26 15:32       ` Jan Beulich
  2014-08-26 16:30         ` Tamas K Lengyel
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2014-08-26 15:32 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf

>>> On 26.08.14 at 16:42, <tamas.lengyel@zentific.com> wrote:
> On Tue, Aug 26, 2014 at 3:34 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
>> >  xen/arch/x86/domctl.c            |   2 +-
>> >  xen/arch/x86/hvm/hvm.c           |  61 +---
>> >  xen/arch/x86/mm/Makefile         |   2 -
>> >  xen/arch/x86/mm/hap/nested_ept.c |   2 +-
>> >  xen/arch/x86/mm/hap/nested_hap.c |   2 +-
>> >  xen/arch/x86/mm/mem_access.c     | 133 --------
>> >  xen/arch/x86/mm/mem_event.c      | 705
>> --------------------------------------
>> >  xen/arch/x86/mm/mem_paging.c     |   2 +-
>> >  xen/arch/x86/mm/mem_sharing.c    |   2 +-
>> >  xen/arch/x86/mm/p2m-pod.c        |   2 +-
>> >  xen/arch/x86/mm/p2m-pt.c         |   2 +-
>> >  xen/arch/x86/mm/p2m.c            |   2 +-
>> >  xen/arch/x86/x86_64/compat/mm.c  |   4 +-
>> >  xen/arch/x86/x86_64/mm.c         |   4 +-
>> >  xen/common/Makefile              |   2 +
>> >  xen/common/domain.c              |   1 +
>> >  xen/common/mem_access.c          | 137 ++++++++
>> >  xen/common/mem_event.c           | 707
>> +++++++++++++++++++++++++++++++++++++++
>> >  xen/common/memory.c              |  62 ++++
>> >  xen/include/asm-arm/mm.h         |   1 -
>> >  xen/include/asm-x86/hvm/hvm.h    |   6 -
>> >  xen/include/asm-x86/mem_access.h |  39 ---
>> >  xen/include/asm-x86/mem_event.h  |  82 -----
>> >  xen/include/asm-x86/mm.h         |   2 -
>> >  xen/include/xen/mem_access.h     |  58 ++++
>> >  xen/include/xen/mem_event.h      | 141 ++++++++
>> >  xen/include/xen/mm.h             |   6 +
>> >  27 files changed, 1128 insertions(+), 1041 deletions(-
>> This changes maintainership of a couple of files without also
>> adjusting ./MAINTAINERS.
>>
> 
> The only code that is shifted is the mem_event/mem_access code and I don't
> see entries for them in MAINTAINERS, so I think they would still just fall
> under "THE REST". If there are other maintainership changes caused by this
> patch that I'm unaware of please let me know and I'll adjust the
> MAINTAINERS.

No, the current files fall under

X86 MEMORY MANAGEMENT
M:	Tim Deegan <tim@xen.org>
S:	Supported
F:	xen/arch/x86/mm/

Jan


* Re: [PATCH RFC 3/7] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-08-26 15:21         ` Tamas K Lengyel
@ 2014-08-26 15:33           ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2014-08-26 15:33 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel

>>> On 26.08.14 at 17:21, <tamas.lengyel@zentific.com> wrote:
> On Tue, Aug 26, 2014 at 4:38 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 26.08.14 at 16:23, <tamas.lengyel@zentific.com> wrote:
>> > On Tue, Aug 26, 2014 at 3:51 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
>> >> > @@ -546,6 +553,7 @@ int mem_event_domctl(struct domain *d,
>> >> xen_domctl_mem_event_op_t *mec,
>> >> >              /* Currently only EPT is supported */
>> >> >              if ( !cpu_has_vmx )
>> >> >                  break;
>> >> > +#endif
>> >>
>> >> Code like what's getting enclosed in the #ifdef here should really
>> >> get abstracted out up front.
>> >>
>> >
>> > Can you specify what you mean by "up front"?
>>
>> In a prereq patch.
> 
> I could move these checks into a separate static inline function in a
> prereq patch and here just add an empty version for the check for ARM. If
> that is what you mean.

Something along those lines, yes.

Jan


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-26 15:32       ` Jan Beulich
@ 2014-08-26 16:30         ` Tamas K Lengyel
  2014-08-27  6:29           ` Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Tamas K Lengyel @ 2014-08-26 16:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf


On Tue, Aug 26, 2014 at 5:32 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 26.08.14 at 16:42, <tamas.lengyel@zentific.com> wrote:
> > On Tue, Aug 26, 2014 at 3:34 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
> >> >  xen/arch/x86/domctl.c            |   2 +-
> >> >  xen/arch/x86/hvm/hvm.c           |  61 +---
> >> >  xen/arch/x86/mm/Makefile         |   2 -
> >> >  xen/arch/x86/mm/hap/nested_ept.c |   2 +-
> >> >  xen/arch/x86/mm/hap/nested_hap.c |   2 +-
> >> >  xen/arch/x86/mm/mem_access.c     | 133 --------
> >> >  xen/arch/x86/mm/mem_event.c      | 705
> >> --------------------------------------
> >> >  xen/arch/x86/mm/mem_paging.c     |   2 +-
> >> >  xen/arch/x86/mm/mem_sharing.c    |   2 +-
> >> >  xen/arch/x86/mm/p2m-pod.c        |   2 +-
> >> >  xen/arch/x86/mm/p2m-pt.c         |   2 +-
> >> >  xen/arch/x86/mm/p2m.c            |   2 +-
> >> >  xen/arch/x86/x86_64/compat/mm.c  |   4 +-
> >> >  xen/arch/x86/x86_64/mm.c         |   4 +-
> >> >  xen/common/Makefile              |   2 +
> >> >  xen/common/domain.c              |   1 +
> >> >  xen/common/mem_access.c          | 137 ++++++++
> >> >  xen/common/mem_event.c           | 707
> >> +++++++++++++++++++++++++++++++++++++++
> >> >  xen/common/memory.c              |  62 ++++
> >> >  xen/include/asm-arm/mm.h         |   1 -
> >> >  xen/include/asm-x86/hvm/hvm.h    |   6 -
> >> >  xen/include/asm-x86/mem_access.h |  39 ---
> >> >  xen/include/asm-x86/mem_event.h  |  82 -----
> >> >  xen/include/asm-x86/mm.h         |   2 -
> >> >  xen/include/xen/mem_access.h     |  58 ++++
> >> >  xen/include/xen/mem_event.h      | 141 ++++++++
> >> >  xen/include/xen/mm.h             |   6 +
> >> >  27 files changed, 1128 insertions(+), 1041 deletions(-
> >> This changes maintainership of a couple of files without also
> >> adjusting ./MAINTAINERS.
> >>
> >
> > The only code that is shifted is the mem_event/mem_access code and I
> don't
> > see entries for them in MAINTAINERS, so I think they would still just
> fall
> > under "THE REST". If there are other maintainership changes caused by
> this
> > patch that I'm unaware of please let me know and I'll adjust the
> > MAINTAINERS.
>
> No, the current files fall under
>
> X86 MEMORY MANAGEMENT
> M:      Tim Deegan <tim@xen.org>
> S:      Supported
> F:      xen/arch/x86/mm/
>
> Jan
>

Ah, I see. It's also not clear to me who would be the new maintainer or if
Tim would remain the maintainer of these files under /common.

Tamas


* Re: [PATCH RFC 1/7] xen: Relocate mem_access and mem_event into common.
  2014-08-26 16:30         ` Tamas K Lengyel
@ 2014-08-27  6:29           ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2014-08-27  6:29 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Keir Fraser, Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf

>>> On 26.08.14 at 18:30, <tamas.lengyel@zentific.com> wrote:
> On Tue, Aug 26, 2014 at 5:32 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> >>> On 26.08.14 at 16:42, <tamas.lengyel@zentific.com> wrote:
>> > On Tue, Aug 26, 2014 at 3:34 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >>> On 22.08.14 at 11:30, <tamas.lengyel@zentific.com> wrote:
>> >> >  xen/arch/x86/domctl.c            |   2 +-
>> >> >  xen/arch/x86/hvm/hvm.c           |  61 +---
>> >> >  xen/arch/x86/mm/Makefile         |   2 -
>> >> >  xen/arch/x86/mm/hap/nested_ept.c |   2 +-
>> >> >  xen/arch/x86/mm/hap/nested_hap.c |   2 +-
>> >> >  xen/arch/x86/mm/mem_access.c     | 133 --------
>> >> >  xen/arch/x86/mm/mem_event.c      | 705
>> >> --------------------------------------
>> >> >  xen/arch/x86/mm/mem_paging.c     |   2 +-
>> >> >  xen/arch/x86/mm/mem_sharing.c    |   2 +-
>> >> >  xen/arch/x86/mm/p2m-pod.c        |   2 +-
>> >> >  xen/arch/x86/mm/p2m-pt.c         |   2 +-
>> >> >  xen/arch/x86/mm/p2m.c            |   2 +-
>> >> >  xen/arch/x86/x86_64/compat/mm.c  |   4 +-
>> >> >  xen/arch/x86/x86_64/mm.c         |   4 +-
>> >> >  xen/common/Makefile              |   2 +
>> >> >  xen/common/domain.c              |   1 +
>> >> >  xen/common/mem_access.c          | 137 ++++++++
>> >> >  xen/common/mem_event.c           | 707
>> >> +++++++++++++++++++++++++++++++++++++++
>> >> >  xen/common/memory.c              |  62 ++++
>> >> >  xen/include/asm-arm/mm.h         |   1 -
>> >> >  xen/include/asm-x86/hvm/hvm.h    |   6 -
>> >> >  xen/include/asm-x86/mem_access.h |  39 ---
>> >> >  xen/include/asm-x86/mem_event.h  |  82 -----
>> >> >  xen/include/asm-x86/mm.h         |   2 -
>> >> >  xen/include/xen/mem_access.h     |  58 ++++
>> >> >  xen/include/xen/mem_event.h      | 141 ++++++++
>> >> >  xen/include/xen/mm.h             |   6 +
>> >> >  27 files changed, 1128 insertions(+), 1041 deletions(-
>> >> This changes maintainership of a couple of files without also
>> >> adjusting ./MAINTAINERS.
>> >>
>> >
>> > The only code that is shifted is the mem_event/mem_access code and I
>> don't
>> > see entries for them in MAINTAINERS, so I think they would still just
>> fall
>> > under "THE REST". If there are other maintainership changes caused by
>> this
>> > patch that I'm unaware of please let me know and I'll adjust the
>> > MAINTAINERS.
>>
>> No, the current files fall under
>>
>> X86 MEMORY MANAGEMENT
>> M:      Tim Deegan <tim@xen.org>
>> S:      Supported
>> F:      xen/arch/x86/mm/
> 
> Ah, I see. It's also not clear to me who would be the new maintainer or if
> Tim would remain the maintainer of these files under /common.

My take on it would be that maintainership shouldn't change when
files get moved around. But of course that's something primarily
Tim will have to decide.
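
If it were carried over explicitly, the entry might look like this
(purely illustrative, mirroring the existing format):

    MEM_EVENT AND MEM_ACCESS
    M:	Tim Deegan <tim@xen.org>
    S:	Supported
    F:	xen/common/mem_access.c
    F:	xen/common/mem_event.c
    F:	xen/include/xen/mem_access.h
    F:	xen/include/xen/mem_event.h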

Jan


