* [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
@ 2016-12-09 19:59 Tamas K Lengyel
From: Tamas K Lengyel @ 2016-12-09 19:59 UTC (permalink / raw)
  To: xen-devel; +Cc: Tamas K Lengyel, Julien Grall, Stefano Stabellini

The only caller of this function is get_page_from_gva, which already takes
a vcpu pointer as input. Pass this pointer along to bring the function in
line with its intended use-case.
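
For illustration, the net effect on the interface (a minimal sketch
distilled from the diff below; function bodies omitted):

    /* Before: the helper implicitly acted on the currently scheduled vCPU. */
    static struct page_info*
    p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag);

    /* After: the vCPU is passed explicitly by the caller. */
    static struct page_info*
    p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
                                      const struct vcpu *v);

    /* get_page_from_gva() already receives the vCPU and simply forwards it: */
    page = p2m_mem_access_check_and_get_page(va, flags, v);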

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index cc5634b..837be1d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1461,7 +1461,8 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
  * we indeed found a conflicting mem_access setting.
  */
 static struct page_info*
-p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
+p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
+                                  const struct vcpu *v)
 {
     long rc;
     paddr_t ipa;
@@ -1470,7 +1471,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
     xenmem_access_t xma;
     p2m_type_t t;
     struct page_info *page = NULL;
-    struct p2m_domain *p2m = &current->domain->arch.p2m;
+    struct p2m_domain *p2m = &v->domain->arch.p2m;
 
     rc = gva_to_ipa(gva, &ipa, flag);
     if ( rc < 0 )
@@ -1482,7 +1483,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We do this first as this is faster in the default case when no
      * permission is set on the page.
      */
-    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
+    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
     if ( rc < 0 )
         goto err;
 
@@ -1546,7 +1547,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
 
     page = mfn_to_page(mfn_x(mfn));
 
-    if ( unlikely(!get_page(page, current->domain)) )
+    if ( unlikely(!get_page(page, v->domain)) )
         page = NULL;
 
 err:
@@ -1587,7 +1588,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 
 err:
     if ( !page && p2m->mem_access_enabled )
-        page = p2m_mem_access_check_and_get_page(va, flags);
+        page = p2m_mem_access_check_and_get_page(va, flags, v);
 
     p2m_read_unlock(p2m);
 
-- 
2.10.2




* [PATCH v2 2/2] p2m: split mem_access into separate files
From: Tamas K Lengyel @ 2016-12-09 19:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, George Dunlap, Tamas K Lengyel, Julien Grall,
	Jan Beulich, Andrew Cooper

This patch relocates the mem_access components currently mixed in with p2m
code into separate files. This better aligns the code with similar
subsystems, such as mem_sharing and mem_paging, which already live in
separate files. No code changes are introduced; the patch is mechanical
code movement.

On ARM we also relocate the static inline gfn_next_boundary function to
p2m.h, as the mem_access code needs access to it.
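
For reference, the relocated helper (a sketch copied from the code removed
from xen/arch/arm/p2m.c in the diff below; its new home is
xen/include/asm-arm/p2m.h):

    /*
     * Return the start of the next mapping based on the order of the
     * current one.
     */
    static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
    {
        /* Align the GFN to the mapping order before incrementing. */
        gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));

        return gfn_add(gfn, 1UL << order);
    }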

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>

v2: Don't move ARM radix tree functions
    Include asm/mem_access.h in xen/mem_access.h
---
 MAINTAINERS                      |   2 +
 xen/arch/arm/Makefile            |   1 +
 xen/arch/arm/mem_access.c        | 431 ++++++++++++++++++++++++++++++++++++
 xen/arch/arm/p2m.c               | 414 +----------------------------------
 xen/arch/arm/traps.c             |   1 +
 xen/arch/x86/mm/Makefile         |   1 +
 xen/arch/x86/mm/mem_access.c     | 462 +++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c            | 421 -----------------------------------
 xen/arch/x86/vm_event.c          |   3 +-
 xen/common/mem_access.c          |   2 +-
 xen/include/asm-arm/mem_access.h |  53 +++++
 xen/include/asm-arm/p2m.h        |  31 ++-
 xen/include/asm-x86/mem_access.h |  61 ++++++
 xen/include/asm-x86/p2m.h        |  24 +-
 xen/include/xen/mem_access.h     |  67 +++++-
 xen/include/xen/p2m-common.h     |  52 -----
 16 files changed, 1089 insertions(+), 937 deletions(-)
 create mode 100644 xen/arch/arm/mem_access.c
 create mode 100644 xen/arch/x86/mm/mem_access.c
 create mode 100644 xen/include/asm-arm/mem_access.h
 create mode 100644 xen/include/asm-x86/mem_access.h

diff --git a/MAINTAINERS b/MAINTAINERS
index f0d0202..fb26be3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -402,6 +402,8 @@ S:	Supported
 F:	tools/tests/xen-access
 F:	xen/arch/*/monitor.c
 F:	xen/arch/*/vm_event.c
+F:	xen/arch/arm/mem_access.c
+F:	xen/arch/x86/mm/mem_access.c
 F:	xen/arch/x86/hvm/monitor.c
 F:	xen/common/mem_access.c
 F:	xen/common/monitor.c
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index da39d39..b095e8a 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -24,6 +24,7 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
 obj-y += p2m.o
diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
new file mode 100644
index 0000000..a6e5bcd
--- /dev/null
+++ b/xen/arch/arm/mem_access.c
@@ -0,0 +1,431 @@
+/*
+ * arch/arm/mem_access.c
+ *
+ * Architecture-specific mem_access handling routines
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/config.h>
+#include <xen/mem_access.h>
+#include <xen/monitor.h>
+#include <xen/sched.h>
+#include <xen/vm_event.h>
+#include <public/vm_event.h>
+#include <asm/event.h>
+
+static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
+                                xenmem_access_t *access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    void *i;
+    unsigned int index;
+
+    static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
+            ACCESS(n),
+            ACCESS(r),
+            ACCESS(w),
+            ACCESS(rw),
+            ACCESS(x),
+            ACCESS(rx),
+            ACCESS(wx),
+            ACCESS(rwx),
+            ACCESS(rx2rw),
+            ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    ASSERT(p2m_is_locked(p2m));
+
+    /* If no setting was ever set, just return rwx. */
+    if ( !p2m->mem_access_enabled )
+    {
+        *access = XENMEM_access_rwx;
+        return 0;
+    }
+
+    /* If request to get default access. */
+    if ( gfn_eq(gfn, INVALID_GFN) )
+    {
+        *access = memaccess[p2m->default_access];
+        return 0;
+    }
+
+    i = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
+
+    if ( !i )
+    {
+        /*
+         * No setting was found in the Radix tree. Check if the
+         * entry exists in the page-tables.
+         */
+        mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
+
+        if ( mfn_eq(mfn, INVALID_MFN) )
+            return -ESRCH;
+
+        /* If entry exists then its rwx. */
+        *access = XENMEM_access_rwx;
+    }
+    else
+    {
+        /* Setting was found in the Radix tree. */
+        index = radix_tree_ptr_to_int(i);
+        if ( index >= ARRAY_SIZE(memaccess) )
+            return -ERANGE;
+
+        *access = memaccess[index];
+    }
+
+    return 0;
+}
+
+/*
+ * If mem_access is in use it might have been the reason why get_page_from_gva
+ * failed to fetch the page, as it uses the MMU for the permission checking.
+ * Only in these cases we do a software-based type check and fetch the page if
+ * we indeed found a conflicting mem_access setting.
+ */
+struct page_info*
+p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
+                                  const struct vcpu *v)
+{
+    long rc;
+    paddr_t ipa;
+    gfn_t gfn;
+    mfn_t mfn;
+    xenmem_access_t xma;
+    p2m_type_t t;
+    struct page_info *page = NULL;
+    struct p2m_domain *p2m = &v->domain->arch.p2m;
+
+    rc = gva_to_ipa(gva, &ipa, flag);
+    if ( rc < 0 )
+        goto err;
+
+    gfn = _gfn(paddr_to_pfn(ipa));
+
+    /*
+     * We do this first as this is faster in the default case when no
+     * permission is set on the page.
+     */
+    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
+    if ( rc < 0 )
+        goto err;
+
+    /* Let's check if mem_access limited the access. */
+    switch ( xma )
+    {
+    default:
+    case XENMEM_access_rwx:
+    case XENMEM_access_rw:
+        /*
+         * If mem_access contains no rw perm restrictions at all then the original
+         * fault was correct.
+         */
+        goto err;
+    case XENMEM_access_n2rwx:
+    case XENMEM_access_n:
+    case XENMEM_access_x:
+        /*
+         * If no r/w is permitted by mem_access, this was a fault caused by mem_access.
+         */
+        break;
+    case XENMEM_access_wx:
+    case XENMEM_access_w:
+        /*
+         * If this was a read then it was because of mem_access, but if it was
+         * a write then the original get_page_from_gva fault was correct.
+         */
+        if ( flag == GV2M_READ )
+            break;
+        else
+            goto err;
+    case XENMEM_access_rx2rw:
+    case XENMEM_access_rx:
+    case XENMEM_access_r:
+        /*
+         * If this was a write then it was because of mem_access, but if it was
+         * a read then the original get_page_from_gva fault was correct.
+         */
+        if ( flag == GV2M_WRITE )
+            break;
+        else
+            goto err;
+    }
+
+    /*
+     * We had a mem_access permission limiting the access, but the page type
+     * could also be limiting, so we need to check that as well.
+     */
+    mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        goto err;
+
+    if ( !mfn_valid(mfn_x(mfn)) )
+        goto err;
+
+    /*
+     * Base type doesn't allow r/w
+     */
+    if ( t != p2m_ram_rw )
+        goto err;
+
+    page = mfn_to_page(mfn_x(mfn));
+
+    if ( unlikely(!get_page(page, v->domain)) )
+        page = NULL;
+
+err:
+    return page;
+}
+
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
+{
+    int rc;
+    bool_t violation;
+    xenmem_access_t xma;
+    vm_event_request_t *req;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+
+    /* Mem_access is not in use. */
+    if ( !p2m->mem_access_enabled )
+        return true;
+
+    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
+    if ( rc )
+        return true;
+
+    /* Now check for mem_access violation. */
+    switch ( xma )
+    {
+    case XENMEM_access_rwx:
+        violation = false;
+        break;
+    case XENMEM_access_rw:
+        violation = npfec.insn_fetch;
+        break;
+    case XENMEM_access_wx:
+        violation = npfec.read_access;
+        break;
+    case XENMEM_access_rx:
+    case XENMEM_access_rx2rw:
+        violation = npfec.write_access;
+        break;
+    case XENMEM_access_x:
+        violation = npfec.read_access || npfec.write_access;
+        break;
+    case XENMEM_access_w:
+        violation = npfec.read_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_r:
+        violation = npfec.write_access || npfec.insn_fetch;
+        break;
+    default:
+    case XENMEM_access_n:
+    case XENMEM_access_n2rwx:
+        violation = true;
+        break;
+    }
+
+    if ( !violation )
+        return true;
+
+    /* First, handle rx2rw and n2rwx conversion automatically. */
+    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
+    {
+        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
+                                0, ~0, XENMEM_access_rw, 0);
+        return false;
+    }
+    else if ( xma == XENMEM_access_n2rwx )
+    {
+        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
+                                0, ~0, XENMEM_access_rwx, 0);
+    }
+
+    /* Otherwise, check if there is a vm_event monitor subscriber */
+    if ( !vm_event_check_ring(&v->domain->vm_event->monitor) )
+    {
+        /* No listener */
+        if ( p2m->access_required )
+        {
+            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
+                                  "no vm_event listener VCPU %d, dom %d\n",
+                                  v->vcpu_id, v->domain->domain_id);
+            domain_crash(v->domain);
+        }
+        else
+        {
+            /* n2rwx was already handled */
+            if ( xma != XENMEM_access_n2rwx )
+            {
+                /* A listener is not required, so clear the access
+                 * restrictions. */
+                rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
+                                        0, ~0, XENMEM_access_rwx, 0);
+            }
+        }
+
+        /* No need to reinject */
+        return false;
+    }
+
+    req = xzalloc(vm_event_request_t);
+    if ( req )
+    {
+        req->reason = VM_EVENT_REASON_MEM_ACCESS;
+
+        /* Send request to mem access subscriber */
+        req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
+        req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
+        if ( npfec.gla_valid )
+        {
+            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
+            req->u.mem_access.gla = gla;
+
+            if ( npfec.kind == npfec_kind_with_gla )
+                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
+            else if ( npfec.kind == npfec_kind_in_gpt )
+                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
+        }
+        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
+        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
+        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
+
+        if ( monitor_traps(v, (xma != XENMEM_access_n2rwx), req) < 0 )
+            domain_crash(v->domain);
+
+        xfree(req);
+    }
+
+    return false;
+}
+
+/*
+ * Set access type for a region of pfns.
+ * If gfn == INVALID_GFN, sets the default access type.
+ */
+long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access,
+                        unsigned int altp2m_idx)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_access_t a;
+    unsigned int order;
+    long rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    switch ( access )
+    {
+    case 0 ... ARRAY_SIZE(memaccess) - 1:
+        a = memaccess[access];
+        break;
+    case XENMEM_access_default:
+        a = p2m->default_access;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    /*
+     * Flip mem_access_enabled to true when a permission is set, as to prevent
+     * allocating or inserting super-pages.
+     */
+    p2m->mem_access_enabled = true;
+
+    /* If request to set default access. */
+    if ( gfn_eq(gfn, INVALID_GFN) )
+    {
+        p2m->default_access = a;
+        return 0;
+    }
+
+    p2m_write_lock(p2m);
+
+    for ( gfn = gfn_add(gfn, start); nr > start;
+          gfn = gfn_next_boundary(gfn, order) )
+    {
+        p2m_type_t t;
+        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
+
+
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+        {
+            order = 0;
+            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, a);
+            if ( rc )
+                break;
+        }
+
+        start += gfn_x(gfn_next_boundary(gfn, order)) - gfn_x(gfn);
+        /* Check for continuation if it is not the last iteration */
+        if ( nr > start && !(start & mask) && hypercall_preempt_check() )
+        {
+            rc = start;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+long p2m_set_mem_access_multi(struct domain *d,
+                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
+                              const XEN_GUEST_HANDLE(const_uint8) access_list,
+                              uint32_t nr, uint32_t start, uint32_t mask,
+                              unsigned int altp2m_idx)
+{
+    /* Not yet implemented on ARM. */
+    return -EOPNOTSUPP;
+}
+
+int p2m_get_mem_access(struct domain *d, gfn_t gfn,
+                       xenmem_access_t *access)
+{
+    int ret;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m_read_lock(p2m);
+    ret = __p2m_get_mem_access(d, gfn, access);
+    p2m_read_unlock(p2m);
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 837be1d..4e7ce3d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -7,6 +7,7 @@
 #include <xen/vm_event.h>
 #include <xen/monitor.h>
 #include <xen/iocap.h>
+#include <xen/mem_access.h>
 #include <public/vm_event.h>
 #include <asm/flushtlb.h>
 #include <asm/gic.h>
@@ -58,22 +59,6 @@ static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
     return (level < 3) && p2m_mapping(pte);
 }
 
-/*
- * Return the start of the next mapping based on the order of the
- * current one.
- */
-static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
-{
-    /*
-     * The order corresponds to the order of the mapping (or invalid
-     * range) in the page table. So we need to align the GFN before
-     * incrementing.
-     */
-    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
-
-    return gfn_add(gfn, 1UL << order);
-}
-
 static void p2m_flush_tlb(struct p2m_domain *p2m);
 
 /* Unlock the flush and do a P2M TLB flush if necessary */
@@ -602,73 +587,6 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
     return 0;
 }
 
-static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
-                                xenmem_access_t *access)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    void *i;
-    unsigned int index;
-
-    static const xenmem_access_t memaccess[] = {
-#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
-            ACCESS(n),
-            ACCESS(r),
-            ACCESS(w),
-            ACCESS(rw),
-            ACCESS(x),
-            ACCESS(rx),
-            ACCESS(wx),
-            ACCESS(rwx),
-            ACCESS(rx2rw),
-            ACCESS(n2rwx),
-#undef ACCESS
-    };
-
-    ASSERT(p2m_is_locked(p2m));
-
-    /* If no setting was ever set, just return rwx. */
-    if ( !p2m->mem_access_enabled )
-    {
-        *access = XENMEM_access_rwx;
-        return 0;
-    }
-
-    /* If request to get default access. */
-    if ( gfn_eq(gfn, INVALID_GFN) )
-    {
-        *access = memaccess[p2m->default_access];
-        return 0;
-    }
-
-    i = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
-
-    if ( !i )
-    {
-        /*
-         * No setting was found in the Radix tree. Check if the
-         * entry exists in the page-tables.
-         */
-        mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
-
-        if ( mfn_eq(mfn, INVALID_MFN) )
-            return -ESRCH;
-
-        /* If entry exists then its rwx. */
-        *access = XENMEM_access_rwx;
-    }
-    else
-    {
-        /* Setting was found in the Radix tree. */
-        index = radix_tree_ptr_to_int(i);
-        if ( index >= ARRAY_SIZE(memaccess) )
-            return -ERANGE;
-
-        *access = memaccess[index];
-    }
-
-    return 0;
-}
-
 static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
                                     p2m_access_t a)
 {
@@ -1454,106 +1372,6 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
     return p2m_lookup(d, gfn, NULL);
 }
 
-/*
- * If mem_access is in use it might have been the reason why get_page_from_gva
- * failed to fetch the page, as it uses the MMU for the permission checking.
- * Only in these cases we do a software-based type check and fetch the page if
- * we indeed found a conflicting mem_access setting.
- */
-static struct page_info*
-p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
-                                  const struct vcpu *v)
-{
-    long rc;
-    paddr_t ipa;
-    gfn_t gfn;
-    mfn_t mfn;
-    xenmem_access_t xma;
-    p2m_type_t t;
-    struct page_info *page = NULL;
-    struct p2m_domain *p2m = &v->domain->arch.p2m;
-
-    rc = gva_to_ipa(gva, &ipa, flag);
-    if ( rc < 0 )
-        goto err;
-
-    gfn = _gfn(paddr_to_pfn(ipa));
-
-    /*
-     * We do this first as this is faster in the default case when no
-     * permission is set on the page.
-     */
-    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
-    if ( rc < 0 )
-        goto err;
-
-    /* Let's check if mem_access limited the access. */
-    switch ( xma )
-    {
-    default:
-    case XENMEM_access_rwx:
-    case XENMEM_access_rw:
-        /*
-         * If mem_access contains no rw perm restrictions at all then the original
-         * fault was correct.
-         */
-        goto err;
-    case XENMEM_access_n2rwx:
-    case XENMEM_access_n:
-    case XENMEM_access_x:
-        /*
-         * If no r/w is permitted by mem_access, this was a fault caused by mem_access.
-         */
-        break;
-    case XENMEM_access_wx:
-    case XENMEM_access_w:
-        /*
-         * If this was a read then it was because of mem_access, but if it was
-         * a write then the original get_page_from_gva fault was correct.
-         */
-        if ( flag == GV2M_READ )
-            break;
-        else
-            goto err;
-    case XENMEM_access_rx2rw:
-    case XENMEM_access_rx:
-    case XENMEM_access_r:
-        /*
-         * If this was a write then it was because of mem_access, but if it was
-         * a read then the original get_page_from_gva fault was correct.
-         */
-        if ( flag == GV2M_WRITE )
-            break;
-        else
-            goto err;
-    }
-
-    /*
-     * We had a mem_access permission limiting the access, but the page type
-     * could also be limiting, so we need to check that as well.
-     */
-    mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);
-    if ( mfn_eq(mfn, INVALID_MFN) )
-        goto err;
-
-    if ( !mfn_valid(mfn_x(mfn)) )
-        goto err;
-
-    /*
-     * Base type doesn't allow r/w
-     */
-    if ( t != p2m_ram_rw )
-        goto err;
-
-    page = mfn_to_page(mfn_x(mfn));
-
-    if ( unlikely(!get_page(page, v->domain)) )
-        page = NULL;
-
-err:
-    return page;
-}
-
 struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags)
 {
@@ -1666,236 +1484,6 @@ void __init setup_virt_paging(void)
     smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
-bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
-{
-    int rc;
-    bool_t violation;
-    xenmem_access_t xma;
-    vm_event_request_t *req;
-    struct vcpu *v = current;
-    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
-
-    /* Mem_access is not in use. */
-    if ( !p2m->mem_access_enabled )
-        return true;
-
-    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
-    if ( rc )
-        return true;
-
-    /* Now check for mem_access violation. */
-    switch ( xma )
-    {
-    case XENMEM_access_rwx:
-        violation = false;
-        break;
-    case XENMEM_access_rw:
-        violation = npfec.insn_fetch;
-        break;
-    case XENMEM_access_wx:
-        violation = npfec.read_access;
-        break;
-    case XENMEM_access_rx:
-    case XENMEM_access_rx2rw:
-        violation = npfec.write_access;
-        break;
-    case XENMEM_access_x:
-        violation = npfec.read_access || npfec.write_access;
-        break;
-    case XENMEM_access_w:
-        violation = npfec.read_access || npfec.insn_fetch;
-        break;
-    case XENMEM_access_r:
-        violation = npfec.write_access || npfec.insn_fetch;
-        break;
-    default:
-    case XENMEM_access_n:
-    case XENMEM_access_n2rwx:
-        violation = true;
-        break;
-    }
-
-    if ( !violation )
-        return true;
-
-    /* First, handle rx2rw and n2rwx conversion automatically. */
-    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
-    {
-        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
-                                0, ~0, XENMEM_access_rw, 0);
-        return false;
-    }
-    else if ( xma == XENMEM_access_n2rwx )
-    {
-        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
-                                0, ~0, XENMEM_access_rwx, 0);
-    }
-
-    /* Otherwise, check if there is a vm_event monitor subscriber */
-    if ( !vm_event_check_ring(&v->domain->vm_event->monitor) )
-    {
-        /* No listener */
-        if ( p2m->access_required )
-        {
-            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
-                                  "no vm_event listener VCPU %d, dom %d\n",
-                                  v->vcpu_id, v->domain->domain_id);
-            domain_crash(v->domain);
-        }
-        else
-        {
-            /* n2rwx was already handled */
-            if ( xma != XENMEM_access_n2rwx )
-            {
-                /* A listener is not required, so clear the access
-                 * restrictions. */
-                rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
-                                        0, ~0, XENMEM_access_rwx, 0);
-            }
-        }
-
-        /* No need to reinject */
-        return false;
-    }
-
-    req = xzalloc(vm_event_request_t);
-    if ( req )
-    {
-        req->reason = VM_EVENT_REASON_MEM_ACCESS;
-
-        /* Send request to mem access subscriber */
-        req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
-        req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
-        if ( npfec.gla_valid )
-        {
-            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
-            req->u.mem_access.gla = gla;
-
-            if ( npfec.kind == npfec_kind_with_gla )
-                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
-            else if ( npfec.kind == npfec_kind_in_gpt )
-                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
-        }
-        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
-        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
-        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
-
-        if ( monitor_traps(v, (xma != XENMEM_access_n2rwx), req) < 0 )
-            domain_crash(v->domain);
-
-        xfree(req);
-    }
-
-    return false;
-}
-
-/*
- * Set access type for a region of pfns.
- * If gfn == INVALID_GFN, sets the default access type.
- */
-long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access,
-                        unsigned int altp2m_idx)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    p2m_access_t a;
-    unsigned int order;
-    long rc = 0;
-
-    static const p2m_access_t memaccess[] = {
-#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
-        ACCESS(n),
-        ACCESS(r),
-        ACCESS(w),
-        ACCESS(rw),
-        ACCESS(x),
-        ACCESS(rx),
-        ACCESS(wx),
-        ACCESS(rwx),
-        ACCESS(rx2rw),
-        ACCESS(n2rwx),
-#undef ACCESS
-    };
-
-    switch ( access )
-    {
-    case 0 ... ARRAY_SIZE(memaccess) - 1:
-        a = memaccess[access];
-        break;
-    case XENMEM_access_default:
-        a = p2m->default_access;
-        break;
-    default:
-        return -EINVAL;
-    }
-
-    /*
-     * Flip mem_access_enabled to true when a permission is set, as to prevent
-     * allocating or inserting super-pages.
-     */
-    p2m->mem_access_enabled = true;
-
-    /* If request to set default access. */
-    if ( gfn_eq(gfn, INVALID_GFN) )
-    {
-        p2m->default_access = a;
-        return 0;
-    }
-
-    p2m_write_lock(p2m);
-
-    for ( gfn = gfn_add(gfn, start); nr > start;
-          gfn = gfn_next_boundary(gfn, order) )
-    {
-        p2m_type_t t;
-        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
-
-
-        if ( !mfn_eq(mfn, INVALID_MFN) )
-        {
-            order = 0;
-            rc = __p2m_set_entry(p2m, gfn, 0, mfn, t, a);
-            if ( rc )
-                break;
-        }
-
-        start += gfn_x(gfn_next_boundary(gfn, order)) - gfn_x(gfn);
-        /* Check for continuation if it is not the last iteration */
-        if ( nr > start && !(start & mask) && hypercall_preempt_check() )
-        {
-            rc = start;
-            break;
-        }
-    }
-
-    p2m_write_unlock(p2m);
-
-    return rc;
-}
-
-long p2m_set_mem_access_multi(struct domain *d,
-                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
-                              const XEN_GUEST_HANDLE(const_uint8) access_list,
-                              uint32_t nr, uint32_t start, uint32_t mask,
-                              unsigned int altp2m_idx)
-{
-    /* Not yet implemented on ARM. */
-    return -EOPNOTSUPP;
-}
-
-int p2m_get_mem_access(struct domain *d, gfn_t gfn,
-                       xenmem_access_t *access)
-{
-    int ret;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    p2m_read_lock(p2m);
-    ret = __p2m_get_mem_access(d, gfn, access);
-    p2m_read_unlock(p2m);
-
-    return ret;
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8ff73fe..f2ea083 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -32,6 +32,7 @@
 #include <xen/domain_page.h>
 #include <xen/perfc.h>
 #include <xen/virtual_region.h>
+#include <xen/mem_access.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/debugger.h>
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 9804c3a..e977dd8 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -9,6 +9,7 @@ obj-y += guest_walk_3.o
 obj-y += guest_walk_4.o
 obj-y += mem_paging.o
 obj-y += mem_sharing.o
+obj-y += mem_access.o
 
 guest_walk_%.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
new file mode 100644
index 0000000..34a994d
--- /dev/null
+++ b/xen/arch/x86/mm/mem_access.c
@@ -0,0 +1,462 @@
+/******************************************************************************
+ * arch/x86/mm/mem_access.c
+ *
+ * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/guest_access.h> /* copy_from_guest() */
+#include <xen/mem_access.h>
+#include <xen/vm_event.h>
+#include <xen/event.h>
+#include <public/vm_event.h>
+#include <asm/p2m.h>
+#include <asm/altp2m.h>
+#include <asm/vm_event.h>
+
+#include "mm-locks.h"
+
+bool p2m_mem_access_emulate_check(struct vcpu *v,
+                                  const vm_event_response_t *rsp)
+{
+    xenmem_access_t access;
+    bool violation = 1;
+    const struct vm_event_mem_access *data = &rsp->u.mem_access;
+
+    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
+    {
+        switch ( access )
+        {
+        case XENMEM_access_n:
+        case XENMEM_access_n2rwx:
+        default:
+            violation = data->flags & MEM_ACCESS_RWX;
+            break;
+
+        case XENMEM_access_r:
+            violation = data->flags & MEM_ACCESS_WX;
+            break;
+
+        case XENMEM_access_w:
+            violation = data->flags & MEM_ACCESS_RX;
+            break;
+
+        case XENMEM_access_x:
+            violation = data->flags & MEM_ACCESS_RW;
+            break;
+
+        case XENMEM_access_rx:
+        case XENMEM_access_rx2rw:
+            violation = data->flags & MEM_ACCESS_W;
+            break;
+
+        case XENMEM_access_wx:
+            violation = data->flags & MEM_ACCESS_R;
+            break;
+
+        case XENMEM_access_rw:
+            violation = data->flags & MEM_ACCESS_X;
+            break;
+
+        case XENMEM_access_rwx:
+            violation = 0;
+            break;
+        }
+    }
+
+    return violation;
+}
+
+bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
+                            struct npfec npfec,
+                            vm_event_request_t **req_ptr)
+{
+    struct vcpu *v = current;
+    unsigned long gfn = gpa >> PAGE_SHIFT;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = NULL;
+    mfn_t mfn;
+    p2m_type_t p2mt;
+    p2m_access_t p2ma;
+    vm_event_request_t *req;
+    int rc;
+
+    if ( altp2m_active(d) )
+        p2m = p2m_get_altp2m(v);
+    if ( !p2m )
+        p2m = p2m_get_hostp2m(d);
+
+    /* First, handle rx2rw conversion automatically.
+     * These calls to p2m->set_entry() must succeed: we have the gfn
+     * locked and just did a successful get_entry(). */
+    gfn_lock(p2m, gfn, 0);
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
+
+    if ( npfec.write_access && p2ma == p2m_access_rx2rw )
+    {
+        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw, -1);
+        ASSERT(rc == 0);
+        gfn_unlock(p2m, gfn, 0);
+        return 1;
+    }
+    else if ( p2ma == p2m_access_n2rwx )
+    {
+        ASSERT(npfec.write_access || npfec.read_access || npfec.insn_fetch);
+        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
+                            p2mt, p2m_access_rwx, -1);
+        ASSERT(rc == 0);
+    }
+    gfn_unlock(p2m, gfn, 0);
+
+    /* Otherwise, check if there is a memory event listener, and send the message along */
+    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr )
+    {
+        /* No listener */
+        if ( p2m->access_required )
+        {
+            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
+                                  "no vm_event listener VCPU %d, dom %d\n",
+                                  v->vcpu_id, d->domain_id);
+            domain_crash(v->domain);
+            return 0;
+        }
+        else
+        {
+            gfn_lock(p2m, gfn, 0);
+            mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
+            if ( p2ma != p2m_access_n2rwx )
+            {
+                /* A listener is not required, so clear the access
+                 * restrictions.  This set must succeed: we have the
+                 * gfn locked and just did a successful get_entry(). */
+                rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
+                                    p2mt, p2m_access_rwx, -1);
+                ASSERT(rc == 0);
+            }
+            gfn_unlock(p2m, gfn, 0);
+            return 1;
+        }
+    }
+
+    *req_ptr = NULL;
+    req = xzalloc(vm_event_request_t);
+    if ( req )
+    {
+        *req_ptr = req;
+
+        req->reason = VM_EVENT_REASON_MEM_ACCESS;
+        req->u.mem_access.gfn = gfn;
+        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
+        if ( npfec.gla_valid )
+        {
+            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
+            req->u.mem_access.gla = gla;
+
+            if ( npfec.kind == npfec_kind_with_gla )
+                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
+            else if ( npfec.kind == npfec_kind_in_gpt )
+                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
+        }
+        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
+        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
+        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
+    }
+
+    /* Return whether vCPU pause is required (aka. sync event) */
+    return (p2ma != p2m_access_n2rwx);
+}
+
+int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
+                              struct p2m_domain *ap2m, p2m_access_t a,
+                              gfn_t gfn)
+{
+    mfn_t mfn;
+    p2m_type_t t;
+    p2m_access_t old_a;
+    unsigned int page_order;
+    unsigned long gfn_l = gfn_x(gfn);
+    int rc;
+
+    mfn = ap2m->get_entry(ap2m, gfn_l, &t, &old_a, 0, NULL, NULL);
+
+    /* Check host p2m if no valid entry in alternate */
+    if ( !mfn_valid(mfn_x(mfn)) )
+    {
+
+        mfn = __get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
+                                    P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
+
+        rc = -ESRCH;
+        if ( !mfn_valid(mfn_x(mfn)) || t != p2m_ram_rw )
+            return rc;
+
+        /* If this is a superpage, copy that first */
+        if ( page_order != PAGE_ORDER_4K )
+        {
+            unsigned long mask = ~((1UL << page_order) - 1);
+            unsigned long gfn2_l = gfn_l & mask;
+            mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
+
+            rc = ap2m->set_entry(ap2m, gfn2_l, mfn2, page_order, t, old_a, 1);
+            if ( rc )
+                return rc;
+        }
+    }
+
+    return ap2m->set_entry(ap2m, gfn_l, mfn, PAGE_ORDER_4K, t, a,
+                         (current->domain != d));
+}
+
+static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
+                          struct p2m_domain *ap2m, p2m_access_t a,
+                          gfn_t gfn)
+{
+    int rc = 0;
+
+    if ( ap2m )
+    {
+        rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn);
+        /* If the corresponding mfn is invalid we will want to just skip it */
+        if ( rc == -ESRCH )
+            rc = 0;
+    }
+    else
+    {
+        mfn_t mfn;
+        p2m_access_t _a;
+        p2m_type_t t;
+        unsigned long gfn_l = gfn_x(gfn);
+
+        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
+        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
+    }
+
+    return rc;
+}
+
+static bool xenmem_access_to_p2m_access(struct p2m_domain *p2m,
+                                        xenmem_access_t xaccess,
+                                        p2m_access_t *paccess)
+{
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    switch ( xaccess )
+    {
+    case 0 ... ARRAY_SIZE(memaccess) - 1:
+        *paccess = memaccess[xaccess];
+        break;
+    case XENMEM_access_default:
+        *paccess = p2m->default_access;
+        break;
+    default:
+        return false;
+    }
+
+    return true;
+}
+
+/*
+ * Set access type for a region of gfns.
+ * If gfn == INVALID_GFN, sets the default access type.
+ */
+long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access,
+                        unsigned int altp2m_idx)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
+    p2m_access_t a;
+    unsigned long gfn_l;
+    long rc = 0;
+
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
+            return -EINVAL;
+
+        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+
+    if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
+        return -EINVAL;
+
+    /* If request to set default access. */
+    if ( gfn_eq(gfn, INVALID_GFN) )
+    {
+        p2m->default_access = a;
+        return 0;
+    }
+
+    p2m_lock(p2m);
+    if ( ap2m )
+        p2m_lock(ap2m);
+
+    for ( gfn_l = gfn_x(gfn) + start; nr > start; ++gfn_l )
+    {
+        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
+
+        if ( rc )
+            break;
+
+        /* Check for continuation if it's not the last iteration. */
+        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
+        {
+            rc = start;
+            break;
+        }
+    }
+
+    if ( ap2m )
+        p2m_unlock(ap2m);
+    p2m_unlock(p2m);
+
+    return rc;
+}
+
+long p2m_set_mem_access_multi(struct domain *d,
+                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
+                              const XEN_GUEST_HANDLE(const_uint8) access_list,
+                              uint32_t nr, uint32_t start, uint32_t mask,
+                              unsigned int altp2m_idx)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
+    long rc = 0;
+
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
+            return -EINVAL;
+
+        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+
+    p2m_lock(p2m);
+    if ( ap2m )
+        p2m_lock(ap2m);
+
+    while ( start < nr )
+    {
+        p2m_access_t a;
+        uint8_t access;
+        uint64_t gfn_l;
+
+        if ( copy_from_guest_offset(&gfn_l, pfn_list, start, 1) ||
+             copy_from_guest_offset(&access, access_list, start, 1) )
+        {
+            rc = -EFAULT;
+            break;
+        }
+
+        if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
+
+        if ( rc )
+            break;
+
+        /* Check for continuation if it's not the last iteration. */
+        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
+        {
+            rc = start;
+            break;
+        }
+    }
+
+    if ( ap2m )
+        p2m_unlock(ap2m);
+    p2m_unlock(p2m);
+
+    return rc;
+}
+
+/*
+ * Get access type for a gfn.
+ * If gfn == INVALID_GFN, gets the default access type.
+ */
+int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_type_t t;
+    p2m_access_t a;
+    mfn_t mfn;
+
+    static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
+            ACCESS(n),
+            ACCESS(r),
+            ACCESS(w),
+            ACCESS(rw),
+            ACCESS(x),
+            ACCESS(rx),
+            ACCESS(wx),
+            ACCESS(rwx),
+            ACCESS(rx2rw),
+            ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    /* If request to get default access. */
+    if ( gfn_eq(gfn, INVALID_GFN) )
+    {
+        *access = memaccess[p2m->default_access];
+        return 0;
+    }
+
+    gfn_lock(p2m, gfn, 0);
+    mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
+    gfn_unlock(p2m, gfn, 0);
+
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        return -ESRCH;
+
+    if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
+        return -ERANGE;
+
+    *access =  memaccess[a];
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6a45185..6299d5a 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1589,433 +1589,12 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
     }
 }
 
-bool p2m_mem_access_emulate_check(struct vcpu *v,
-                                  const vm_event_response_t *rsp)
-{
-    xenmem_access_t access;
-    bool violation = 1;
-    const struct vm_event_mem_access *data = &rsp->u.mem_access;
-
-    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
-    {
-        switch ( access )
-        {
-        case XENMEM_access_n:
-        case XENMEM_access_n2rwx:
-        default:
-            violation = data->flags & MEM_ACCESS_RWX;
-            break;
-
-        case XENMEM_access_r:
-            violation = data->flags & MEM_ACCESS_WX;
-            break;
-
-        case XENMEM_access_w:
-            violation = data->flags & MEM_ACCESS_RX;
-            break;
-
-        case XENMEM_access_x:
-            violation = data->flags & MEM_ACCESS_RW;
-            break;
-
-        case XENMEM_access_rx:
-        case XENMEM_access_rx2rw:
-            violation = data->flags & MEM_ACCESS_W;
-            break;
-
-        case XENMEM_access_wx:
-            violation = data->flags & MEM_ACCESS_R;
-            break;
-
-        case XENMEM_access_rw:
-            violation = data->flags & MEM_ACCESS_X;
-            break;
-
-        case XENMEM_access_rwx:
-            violation = 0;
-            break;
-        }
-    }
-
-    return violation;
-}
-
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 {
     if ( altp2m_active(v->domain) )
         p2m_switch_vcpu_altp2m_by_id(v, idx);
 }
 
-bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
-                            struct npfec npfec,
-                            vm_event_request_t **req_ptr)
-{
-    struct vcpu *v = current;
-    unsigned long gfn = gpa >> PAGE_SHIFT;
-    struct domain *d = v->domain;    
-    struct p2m_domain *p2m = NULL;
-    mfn_t mfn;
-    p2m_type_t p2mt;
-    p2m_access_t p2ma;
-    vm_event_request_t *req;
-    int rc;
-
-    if ( altp2m_active(d) )
-        p2m = p2m_get_altp2m(v);
-    if ( !p2m )
-        p2m = p2m_get_hostp2m(d);
-
-    /* First, handle rx2rw conversion automatically.
-     * These calls to p2m->set_entry() must succeed: we have the gfn
-     * locked and just did a successful get_entry(). */
-    gfn_lock(p2m, gfn, 0);
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
-
-    if ( npfec.write_access && p2ma == p2m_access_rx2rw ) 
-    {
-        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw, -1);
-        ASSERT(rc == 0);
-        gfn_unlock(p2m, gfn, 0);
-        return 1;
-    }
-    else if ( p2ma == p2m_access_n2rwx )
-    {
-        ASSERT(npfec.write_access || npfec.read_access || npfec.insn_fetch);
-        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
-                            p2mt, p2m_access_rwx, -1);
-        ASSERT(rc == 0);
-    }
-    gfn_unlock(p2m, gfn, 0);
-
-    /* Otherwise, check if there is a memory event listener, and send the message along */
-    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr ) 
-    {
-        /* No listener */
-        if ( p2m->access_required ) 
-        {
-            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
-                                  "no vm_event listener VCPU %d, dom %d\n",
-                                  v->vcpu_id, d->domain_id);
-            domain_crash(v->domain);
-            return 0;
-        }
-        else
-        {
-            gfn_lock(p2m, gfn, 0);
-            mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
-            if ( p2ma != p2m_access_n2rwx )
-            {
-                /* A listener is not required, so clear the access
-                 * restrictions.  This set must succeed: we have the
-                 * gfn locked and just did a successful get_entry(). */
-                rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
-                                    p2mt, p2m_access_rwx, -1);
-                ASSERT(rc == 0);
-            }
-            gfn_unlock(p2m, gfn, 0);
-            return 1;
-        }
-    }
-
-    *req_ptr = NULL;
-    req = xzalloc(vm_event_request_t);
-    if ( req )
-    {
-        *req_ptr = req;
-
-        req->reason = VM_EVENT_REASON_MEM_ACCESS;
-        req->u.mem_access.gfn = gfn;
-        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
-        if ( npfec.gla_valid )
-        {
-            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
-            req->u.mem_access.gla = gla;
-
-            if ( npfec.kind == npfec_kind_with_gla )
-                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
-            else if ( npfec.kind == npfec_kind_in_gpt )
-                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
-        }
-        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
-        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
-        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
-    }
-
-    /* Return whether vCPU pause is required (aka. sync event) */
-    return (p2ma != p2m_access_n2rwx);
-}
-
-static inline
-int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
-                              struct p2m_domain *ap2m, p2m_access_t a,
-                              gfn_t gfn)
-{
-    mfn_t mfn;
-    p2m_type_t t;
-    p2m_access_t old_a;
-    unsigned int page_order;
-    unsigned long gfn_l = gfn_x(gfn);
-    int rc;
-
-    mfn = ap2m->get_entry(ap2m, gfn_l, &t, &old_a, 0, NULL, NULL);
-
-    /* Check host p2m if no valid entry in alternate */
-    if ( !mfn_valid(mfn) )
-    {
-
-        mfn = __get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
-                                    P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
-
-        rc = -ESRCH;
-        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
-            return rc;
-
-        /* If this is a superpage, copy that first */
-        if ( page_order != PAGE_ORDER_4K )
-        {
-            unsigned long mask = ~((1UL << page_order) - 1);
-            unsigned long gfn2_l = gfn_l & mask;
-            mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
-
-            rc = ap2m->set_entry(ap2m, gfn2_l, mfn2, page_order, t, old_a, 1);
-            if ( rc )
-                return rc;
-        }
-    }
-
-    return ap2m->set_entry(ap2m, gfn_l, mfn, PAGE_ORDER_4K, t, a,
-                         (current->domain != d));
-}
-
-static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
-                          struct p2m_domain *ap2m, p2m_access_t a,
-                          gfn_t gfn)
-{
-    int rc = 0;
-
-    if ( ap2m )
-    {
-        rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn);
-        /* If the corresponding mfn is invalid we will want to just skip it */
-        if ( rc == -ESRCH )
-            rc = 0;
-    }
-    else
-    {
-        mfn_t mfn;
-        p2m_access_t _a;
-        p2m_type_t t;
-        unsigned long gfn_l = gfn_x(gfn);
-
-        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
-        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
-    }
-
-    return rc;
-}
-
-static bool xenmem_access_to_p2m_access(struct p2m_domain *p2m,
-                                        xenmem_access_t xaccess,
-                                        p2m_access_t *paccess)
-{
-    static const p2m_access_t memaccess[] = {
-#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
-        ACCESS(n),
-        ACCESS(r),
-        ACCESS(w),
-        ACCESS(rw),
-        ACCESS(x),
-        ACCESS(rx),
-        ACCESS(wx),
-        ACCESS(rwx),
-        ACCESS(rx2rw),
-        ACCESS(n2rwx),
-#undef ACCESS
-    };
-
-    switch ( xaccess )
-    {
-    case 0 ... ARRAY_SIZE(memaccess) - 1:
-        *paccess = memaccess[xaccess];
-        break;
-    case XENMEM_access_default:
-        *paccess = p2m->default_access;
-        break;
-    default:
-        return false;
-    }
-
-    return true;
-}
-
-/*
- * Set access type for a region of gfns.
- * If gfn == INVALID_GFN, sets the default access type.
- */
-long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access,
-                        unsigned int altp2m_idx)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
-    p2m_access_t a;
-    unsigned long gfn_l;
-    long rc = 0;
-
-    /* altp2m view 0 is treated as the hostp2m */
-    if ( altp2m_idx )
-    {
-        if ( altp2m_idx >= MAX_ALTP2M ||
-             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
-            return -EINVAL;
-
-        ap2m = d->arch.altp2m_p2m[altp2m_idx];
-    }
-
-    if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
-        return -EINVAL;
-
-    /* If request to set default access. */
-    if ( gfn_eq(gfn, INVALID_GFN) )
-    {
-        p2m->default_access = a;
-        return 0;
-    }
-
-    p2m_lock(p2m);
-    if ( ap2m )
-        p2m_lock(ap2m);
-
-    for ( gfn_l = gfn_x(gfn) + start; nr > start; ++gfn_l )
-    {
-        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
-
-        if ( rc )
-            break;
-
-        /* Check for continuation if it's not the last iteration. */
-        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
-        {
-            rc = start;
-            break;
-        }
-    }
-
-    if ( ap2m )
-        p2m_unlock(ap2m);
-    p2m_unlock(p2m);
-
-    return rc;
-}
-
-long p2m_set_mem_access_multi(struct domain *d,
-                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
-                              const XEN_GUEST_HANDLE(const_uint8) access_list,
-                              uint32_t nr, uint32_t start, uint32_t mask,
-                              unsigned int altp2m_idx)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
-    long rc = 0;
-
-    /* altp2m view 0 is treated as the hostp2m */
-    if ( altp2m_idx )
-    {
-        if ( altp2m_idx >= MAX_ALTP2M ||
-             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
-            return -EINVAL;
-
-        ap2m = d->arch.altp2m_p2m[altp2m_idx];
-    }
-
-    p2m_lock(p2m);
-    if ( ap2m )
-        p2m_lock(ap2m);
-
-    while ( start < nr )
-    {
-        p2m_access_t a;
-        uint8_t access;
-        uint64_t gfn_l;
-
-        if ( copy_from_guest_offset(&gfn_l, pfn_list, start, 1) ||
-             copy_from_guest_offset(&access, access_list, start, 1) )
-        {
-            rc = -EFAULT;
-            break;
-        }
-
-        if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
-        {
-            rc = -EINVAL;
-            break;
-        }
-
-        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
-
-        if ( rc )
-            break;
-
-        /* Check for continuation if it's not the last iteration. */
-        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
-        {
-            rc = start;
-            break;
-        }
-    }
-
-    if ( ap2m )
-        p2m_unlock(ap2m);
-    p2m_unlock(p2m);
-
-    return rc;
-}
-
-/*
- * Get access type for a gfn.
- * If gfn == INVALID_GFN, gets the default access type.
- */
-int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    p2m_type_t t;
-    p2m_access_t a;
-    mfn_t mfn;
-
-    static const xenmem_access_t memaccess[] = {
-#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
-            ACCESS(n),
-            ACCESS(r),
-            ACCESS(w),
-            ACCESS(rw),
-            ACCESS(x),
-            ACCESS(rx),
-            ACCESS(wx),
-            ACCESS(rwx),
-            ACCESS(rx2rw),
-            ACCESS(n2rwx),
-#undef ACCESS
-    };
-
-    /* If request to get default access. */
-    if ( gfn_eq(gfn, INVALID_GFN) )
-    {
-        *access = memaccess[p2m->default_access];
-        return 0;
-    }
-
-    gfn_lock(p2m, gfn, 0);
-    mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
-    gfn_unlock(p2m, gfn, 0);
-
-    if ( mfn_eq(mfn, INVALID_MFN) )
-        return -ESRCH;
-    
-    if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
-        return -ERANGE;
-
-    *access =  memaccess[a];
-    return 0;
-}
-
 static struct p2m_domain *
 p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
 {
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 1e88d67..8d8bc4a 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -18,7 +18,8 @@
  * License along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <asm/p2m.h>
+#include <xen/sched.h>
+#include <xen/mem_access.h>
 #include <asm/vm_event.h>
 
 /* Implicitly serialized by the domctl lock. */
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 565a320..19f63bb 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -24,8 +24,8 @@
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/vm_event.h>
+#include <xen/mem_access.h>
 #include <public/memory.h>
-#include <asm/p2m.h>
 #include <xsm/xsm.h>
 
 int mem_access_memop(unsigned long cmd,
diff --git a/xen/include/asm-arm/mem_access.h b/xen/include/asm-arm/mem_access.h
new file mode 100644
index 0000000..3a155f8
--- /dev/null
+++ b/xen/include/asm-arm/mem_access.h
@@ -0,0 +1,53 @@
+/*
+ * mem_access.h: architecture specific mem_access handling routines
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_ARM_MEM_ACCESS_H
+#define _ASM_ARM_MEM_ACCESS_H
+
+static inline
+bool p2m_mem_access_emulate_check(struct vcpu *v,
+                                  const vm_event_response_t *rsp)
+{
+    /* Not supported on ARM. */
+    return 0;
+}
+
+/* vm_event and mem_access are supported on any ARM guest */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
+/*
+ * Send mem event based on the access. Boolean return value indicates if trap
+ * needs to be injected into guest.
+ */
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
+
+struct page_info*
+p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
+                                  const struct vcpu *v);
+
+#endif /* _ASM_ARM_MEM_ACCESS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index fdb6b47..2b22e9a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -4,6 +4,7 @@
 #include <xen/mm.h>
 #include <xen/radix-tree.h>
 #include <xen/rwlock.h>
+#include <xen/mem_access.h>
 #include <public/vm_event.h> /* for vm_event_response_t */
 #include <public/memory.h>
 #include <xen/p2m-common.h>
@@ -139,14 +140,6 @@ typedef enum {
                              p2m_to_mask(p2m_map_foreign)))
 
 static inline
-bool p2m_mem_access_emulate_check(struct vcpu *v,
-                                  const vm_event_response_t *rsp)
-{
-    /* Not supported on ARM. */
-    return 0;
-}
-
-static inline
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 {
     /* Not supported on ARM. */
@@ -343,22 +336,26 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
-/* vm_event and mem_access are supported on any ARM guest */
-static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
-{
-    return 1;
-}
-
 static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
 {
     return 1;
 }
 
 /*
- * Send mem event based on the access. Boolean return value indicates if trap
- * needs to be injected into guest.
+ * Return the start of the next mapping based on the order of the
+ * current one.
  */
-bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
+static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
+{
+    /*
+     * The order corresponds to the order of the mapping (or invalid
+     * range) in the page table. So we need to align the GFN before
+     * incrementing.
+     */
+    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
+
+    return gfn_add(gfn, 1UL << order);
+}
 
 #endif /* _XEN_P2M_H */
 
diff --git a/xen/include/asm-x86/mem_access.h b/xen/include/asm-x86/mem_access.h
new file mode 100644
index 0000000..9f7b409
--- /dev/null
+++ b/xen/include/asm-x86/mem_access.h
@@ -0,0 +1,61 @@
+/******************************************************************************
+ * include/asm-x86/mem_access.h
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 GridCentric Inc. (Andres Lagar-Cavilla)
+ * Copyright (c) 2007 Advanced Micro Devices (Wei Huang)
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_X86_MEM_ACCESS_H__
+#define __ASM_X86_MEM_ACCESS_H__
+
+/*
+ * Setup vm_event request based on the access (gla is -1ull if not available).
+ * Handles the rw2rx conversion. Boolean return value indicates if event type
+ * is synchronous (aka. requires vCPU pause). If the req_ptr has been populated,
+ * then the caller should use monitor_traps to send the event on the MONITOR
+ * ring. Once having released get_gfn* locks caller must also xfree the
+ * request.
+ */
+bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
+                            struct npfec npfec,
+                            vm_event_request_t **req_ptr);
+
+/* Check for emulation and mark vcpu for skipping one instruction
+ * upon rescheduling if required. */
+bool p2m_mem_access_emulate_check(struct vcpu *v,
+                                  const vm_event_response_t *rsp);
+
+/* Sanity check for mem_access hardware support */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
+}
+
+#endif /*__ASM_X86_MEM_ACCESS_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 7035860..8964e90 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -29,6 +29,7 @@
 #include <xen/config.h>
 #include <xen/paging.h>
 #include <xen/p2m-common.h>
+#include <xen/mem_access.h>
 #include <asm/mem_sharing.h>
 #include <asm/page.h>    /* for pagetable_t */
 
@@ -663,29 +664,6 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
 /* Resume normal operation (in case a domain was paused) */
 void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp);
 
-/*
- * Setup vm_event request based on the access (gla is -1ull if not available).
- * Handles the rw2rx conversion. Boolean return value indicates if event type
- * is syncronous (aka. requires vCPU pause). If the req_ptr has been populated,
- * then the caller should use monitor_traps to send the event on the MONITOR
- * ring. Once having released get_gfn* locks caller must also xfree the
- * request.
- */
-bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
-                            struct npfec npfec,
-                            vm_event_request_t **req_ptr);
-
-/* Check for emulation and mark vcpu for skipping one instruction
- * upon rescheduling if required. */
-bool p2m_mem_access_emulate_check(struct vcpu *v,
-                                  const vm_event_response_t *rsp);
-
-/* Sanity check for mem_access hardware support */
-static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
-{
-    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
-}
-
 /* 
  * Internal functions, only called by other p2m code
  */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index da36e07..5ab34c1 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -19,29 +19,78 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef _XEN_ASM_MEM_ACCESS_H
-#define _XEN_ASM_MEM_ACCESS_H
+#ifndef _XEN_MEM_ACCESS_H
+#define _XEN_MEM_ACCESS_H
 
+#include <xen/types.h>
+#include <xen/mm.h>
 #include <public/memory.h>
-#include <asm/p2m.h>
+#include <public/vm_event.h>
+#include <asm/mem_access.h>
 
-#ifdef CONFIG_HAS_MEM_ACCESS
+/*
+ * Additional access types, which are used to further restrict
+ * the permissions given by the p2m_type_t memory type.  Violations
+ * caused by p2m_access_t restrictions are sent to the vm_event
+ * interface.
+ *
+ * The access permissions are soft state: when any ambiguous change of page
+ * type or use occurs, or when pages are flushed, swapped, or at any other
+ * convenient type, the access permissions can get reset to the p2m_domain
+ * default.
+ */
+typedef enum {
+    /* Code uses bottom three bits with bitmask semantics */
+    p2m_access_n     = 0, /* No access allowed. */
+    p2m_access_r     = 1 << 0,
+    p2m_access_w     = 1 << 1,
+    p2m_access_x     = 1 << 2,
+    p2m_access_rw    = p2m_access_r | p2m_access_w,
+    p2m_access_rx    = p2m_access_r | p2m_access_x,
+    p2m_access_wx    = p2m_access_w | p2m_access_x,
+    p2m_access_rwx   = p2m_access_r | p2m_access_w | p2m_access_x,
+
+    p2m_access_rx2rw = 8, /* Special: page goes from RX to RW on write */
+    p2m_access_n2rwx = 9, /* Special: page goes from N to RWX on access, *
+                           * generates an event but does not pause the
+                           * vcpu */
+
+    /* NOTE: Assumed to be only 4 bits right now on x86. */
+} p2m_access_t;
+
+/*
+ * Set access type for a region of gfns.
+ * If gfn == INVALID_GFN, sets the default access type.
+ */
+long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access,
+                        unsigned int altp2m_idx);
 
+long p2m_set_mem_access_multi(struct domain *d,
+                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
+                              const XEN_GUEST_HANDLE(const_uint8) access_list,
+                              uint32_t nr, uint32_t start, uint32_t mask,
+                              unsigned int altp2m_idx);
+
+/*
+ * Get access type for a gfn.
+ * If gfn == INVALID_GFN, gets the default access type.
+ */
+int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
+
+#ifdef CONFIG_HAS_MEM_ACCESS
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
-
 #else
-
 static inline
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
 {
     return -ENOSYS;
 }
+#endif /* CONFIG_HAS_MEM_ACCESS */
 
-#endif /* HAS_MEM_ACCESS */
-
-#endif /* _XEN_ASM_MEM_ACCESS_H */
+#endif /* _XEN_MEM_ACCESS_H */
 
 /*
  * Local variables:
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 3be1e91..8cd5a6b 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -1,38 +1,6 @@
 #ifndef _XEN_P2M_COMMON_H
 #define _XEN_P2M_COMMON_H
 
-#include <public/vm_event.h>
-
-/*
- * Additional access types, which are used to further restrict
- * the permissions given my the p2m_type_t memory type.  Violations
- * caused by p2m_access_t restrictions are sent to the vm_event
- * interface.
- *
- * The access permissions are soft state: when any ambiguous change of page
- * type or use occurs, or when pages are flushed, swapped, or at any other
- * convenient type, the access permissions can get reset to the p2m_domain
- * default.
- */
-typedef enum {
-    /* Code uses bottom three bits with bitmask semantics */
-    p2m_access_n     = 0, /* No access allowed. */
-    p2m_access_r     = 1 << 0,
-    p2m_access_w     = 1 << 1,
-    p2m_access_x     = 1 << 2,
-    p2m_access_rw    = p2m_access_r | p2m_access_w,
-    p2m_access_rx    = p2m_access_r | p2m_access_x,
-    p2m_access_wx    = p2m_access_w | p2m_access_x,
-    p2m_access_rwx   = p2m_access_r | p2m_access_w | p2m_access_x,
-
-    p2m_access_rx2rw = 8, /* Special: page goes from RX to RW on write */
-    p2m_access_n2rwx = 9, /* Special: page goes from N to RWX on access, *
-                           * generates an event but does not pause the
-                           * vcpu */
-
-    /* NOTE: Assumed to be only 4 bits right now on x86. */
-} p2m_access_t;
-
 /* Map MMIO regions in the p2m: start_gfn and nr describe the range in
  *  * the guest physical address space to map, starting from the machine
  *   * frame number mfn. */
@@ -45,24 +13,4 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        mfn_t mfn);
 
-/*
- * Set access type for a region of gfns.
- * If gfn == INVALID_GFN, sets the default access type.
- */
-long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access,
-                        unsigned int altp2m_idx);
-
-long p2m_set_mem_access_multi(struct domain *d,
-                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
-                              const XEN_GUEST_HANDLE(const_uint8) access_list,
-                              uint32_t nr, uint32_t start, uint32_t mask,
-                              unsigned int altp2m_idx);
-
-/*
- * Get access type for a gfn.
- * If gfn == INVALID_GFN, gets the default access type.
- */
-int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
-
 #endif /* _XEN_P2M_COMMON_H */
-- 
2.10.2



^ permalink raw reply related	[flat|nested] 24+ messages in thread
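
A worked example for the gfn_next_boundary helper introduced in the hunk
above. This is a standalone model for illustration only: gfn_t is reduced
to a plain integer and the sample values are made up, so it is not Xen
code.

    #include <stdio.h>
    #include <stdint.h>

    /* Model of gfn_next_boundary: align the GFN down to the start of the
     * current mapping, then step over the whole mapping (or invalid range). */
    static uint64_t gfn_next_boundary(uint64_t gfn, unsigned int order)
    {
        gfn &= ~((UINT64_C(1) << order) - 1);
        return gfn + (UINT64_C(1) << order);
    }

    int main(void)
    {
        /* Order 9 covers 512 4K pages (2MB): any GFN inside the block
         * advances to the first GFN of the next 2MB-aligned block. */
        printf("0x%llx\n", (unsigned long long)gfn_next_boundary(0x203, 9)); /* 0x400 */
        printf("0x%llx\n", (unsigned long long)gfn_next_boundary(0x400, 9)); /* 0x600 */
        return 0;
    }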

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-09 19:59 [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current Tamas K Lengyel
  2016-12-09 19:59 ` [PATCH v2 2/2] p2m: split mem_access into separate files Tamas K Lengyel
@ 2016-12-12  7:42 ` Jan Beulich
  2016-12-12  7:47   ` Tamas K Lengyel
  2017-01-03 15:29 ` Tamas K Lengyel
  2017-01-30 21:11 ` Stefano Stabellini
  3 siblings, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2016-12-12  7:42 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel, Julien Grall, Stefano Stabellini

>>> On 09.12.16 at 20:59, <tamas.lengyel@zentific.com> wrote:
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1461,7 +1461,8 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>   * we indeed found a conflicting mem_access setting.
>   */
>  static struct page_info*
> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v)
>  {
>      long rc;
>      paddr_t ipa;
> @@ -1470,7 +1471,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>      xenmem_access_t xma;
>      p2m_type_t t;
>      struct page_info *page = NULL;
> -    struct p2m_domain *p2m = &current->domain->arch.p2m;
> +    struct p2m_domain *p2m = &v->domain->arch.p2m;
>  
>      rc = gva_to_ipa(gva, &ipa, flag);
>      if ( rc < 0 )
> @@ -1482,7 +1483,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
>       */
> -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
> +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
>      if ( rc < 0 )
>          goto err;
>  
> @@ -1546,7 +1547,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>  
>      page = mfn_to_page(mfn_x(mfn));
>  
> -    if ( unlikely(!get_page(page, current->domain)) )
> +    if ( unlikely(!get_page(page, v->domain)) )
>          page = NULL;
>  
>  err:

Looking at these changes, wouldn't it be more reasonable to hand
the function a struct domain *?

Jan



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12  7:42 ` [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current Jan Beulich
@ 2016-12-12  7:47   ` Tamas K Lengyel
  2016-12-12 11:46     ` Julien Grall
  0 siblings, 1 reply; 24+ messages in thread
From: Tamas K Lengyel @ 2016-12-12  7:47 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Julien Grall, Stefano Stabellini


On Dec 12, 2016 00:42, "Jan Beulich" <JBeulich@suse.com> wrote:

>>> On 09.12.16 at 20:59, <tamas.lengyel@zentific.com> wrote:
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1461,7 +1461,8 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>   * we indeed found a conflicting mem_access setting.
>   */
>  static struct page_info*
> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v)
>  {
>      long rc;
>      paddr_t ipa;
> @@ -1470,7 +1471,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
unsigned long flag)
>      xenmem_access_t xma;
>      p2m_type_t t;
>      struct page_info *page = NULL;
> -    struct p2m_domain *p2m = &current->domain->arch.p2m;
> +    struct p2m_domain *p2m = &v->domain->arch.p2m;
>
>      rc = gva_to_ipa(gva, &ipa, flag);
>      if ( rc < 0 )
> @@ -1482,7 +1483,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
unsigned long flag)
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
>       */
> -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
> +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
>      if ( rc < 0 )
>          goto err;
>
> @@ -1546,7 +1547,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
unsigned long flag)
>
>      page = mfn_to_page(mfn_x(mfn));
>
> -    if ( unlikely(!get_page(page, current->domain)) )
> +    if ( unlikely(!get_page(page, v->domain)) )
>          page = NULL;
>
>  err:

Looking at these changes, wouldn't it be more reasonable to hand
the function a struct domain *?


In its current state it might be but I believe with altp2m we will need the
vcpu struct here anyway.

Tamas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12  7:47   ` Tamas K Lengyel
@ 2016-12-12 11:46     ` Julien Grall
  2016-12-12 18:42       ` Tamas K Lengyel
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2016-12-12 11:46 UTC (permalink / raw)
  To: Tamas K Lengyel, Jan Beulich; +Cc: xen-devel, Stefano Stabellini



On 12/12/16 07:47, Tamas K Lengyel wrote:
>
>
> On Dec 12, 2016 00:42, "Jan Beulich" <JBeulich@suse.com
> <mailto:JBeulich@suse.com>> wrote:
>
>     >>> On 09.12.16 at 20:59, <tamas.lengyel@zentific.com
>     <mailto:tamas.lengyel@zentific.com>> wrote:
>     > --- a/xen/arch/arm/p2m.c
>     > +++ b/xen/arch/arm/p2m.c
>     > @@ -1461,7 +1461,8 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>     >   * we indeed found a conflicting mem_access setting.
>     >   */
>     >  static struct page_info*
>     > -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>     > +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>     > +                                  const struct vcpu *v)
>     >  {
>     >      long rc;
>     >      paddr_t ipa;
>     > @@ -1470,7 +1471,7 @@ p2m_mem_access_check_and_get_page(vaddr_t
>     gva, unsigned long flag)
>     >      xenmem_access_t xma;
>     >      p2m_type_t t;
>     >      struct page_info *page = NULL;
>     > -    struct p2m_domain *p2m = &current->domain->arch.p2m;
>     > +    struct p2m_domain *p2m = &v->domain->arch.p2m;
>     >
>     >      rc = gva_to_ipa(gva, &ipa, flag);
>     >      if ( rc < 0 )
>     > @@ -1482,7 +1483,7 @@ p2m_mem_access_check_and_get_page(vaddr_t
>     gva, unsigned long flag)
>     >       * We do this first as this is faster in the default case when no
>     >       * permission is set on the page.
>     >       */
>     > -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
>     > +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
>     >      if ( rc < 0 )
>     >          goto err;
>     >
>     > @@ -1546,7 +1547,7 @@ p2m_mem_access_check_and_get_page(vaddr_t
>     gva, unsigned long flag)
>     >
>     >      page = mfn_to_page(mfn_x(mfn));
>     >
>     > -    if ( unlikely(!get_page(page, current->domain)) )
>     > +    if ( unlikely(!get_page(page, v->domain)) )
>     >          page = NULL;
>     >
>     >  err:
>
>     Looking at these changes, wouldn't it be more reasonable to hand
>     the function a struct domain *?
>
>
> In its current state it might be but I believe with altp2m we will need
> the vcpu struct here anyway.

Not only for altp2m. I keep mentioning a bigger issue, but it seems to have
been ignored every time...

The translation VA to IPA (guest physical address) is done using 
hardware. If the underlying memory of the stage-1 page table is 
protected, the translation will fail. Given that this function is
used in hypercalls to retrieve the page associated with a buffer, it means
that it will not be possible to do a hypercall when the page table used to
find the buffer IPA has not been touched.

I believe this would require having the vCPU at hand to fix this problem.

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 24+ messages in thread
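
To make the two failure modes concrete, here is a minimal standalone model
(not Xen code; the two booleans stand in for stage-2 mem_access
restrictions) of why the fallback path recovers from a restricted buffer
page but not from restricted stage-1 page tables:

    #include <stdio.h>
    #include <stdbool.h>

    static bool buffer_restricted;     /* case 1: the buffer page itself  */
    static bool stage1_pt_restricted;  /* case 2: the stage-1 page tables */

    /* Models gva_to_ipa: a hardware walk that reads the stage-1 tables
     * through stage-2, so it fails whenever those tables are restricted. */
    static int gva_to_ipa_model(void)
    {
        return stage1_pt_restricted ? -1 : 0;
    }

    static int get_page_from_gva_model(void)
    {
        if ( !buffer_restricted && !stage1_pt_restricted )
            return 0;    /* fast path succeeds */
        /* The fallback (p2m_mem_access_check_and_get_page) can recover
         * from case 1, but it begins with the same hardware walk, so
         * case 2 still fails. */
        return gva_to_ipa_model();
    }

    int main(void)
    {
        stage1_pt_restricted = true;
        printf("lookup %s\n", get_page_from_gva_model() ? "fails" : "succeeds");
        return 0;
    }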

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12 11:46     ` Julien Grall
@ 2016-12-12 18:42       ` Tamas K Lengyel
  2016-12-12 19:11         ` Julien Grall
  0 siblings, 1 reply; 24+ messages in thread
From: Tamas K Lengyel @ 2016-12-12 18:42 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 12/12/16 07:47, Tamas K Lengyel wrote:
>>
>>
>>
>> On Dec 12, 2016 00:42, "Jan Beulich" <JBeulich@suse.com
>> <mailto:JBeulich@suse.com>> wrote:
>>
>>     >>> On 09.12.16 at 20:59, <tamas.lengyel@zentific.com
>>     <mailto:tamas.lengyel@zentific.com>> wrote:
>>     > --- a/xen/arch/arm/p2m.c
>>     > +++ b/xen/arch/arm/p2m.c
>>     > @@ -1461,7 +1461,8 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>>     >   * we indeed found a conflicting mem_access setting.
>>     >   */
>>     >  static struct page_info*
>>     > -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>>     > +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>>     > +                                  const struct vcpu *v)
>>     >  {
>>     >      long rc;
>>     >      paddr_t ipa;
>>     > @@ -1470,7 +1471,7 @@ p2m_mem_access_check_and_get_page(vaddr_t
>>     gva, unsigned long flag)
>>     >      xenmem_access_t xma;
>>     >      p2m_type_t t;
>>     >      struct page_info *page = NULL;
>>     > -    struct p2m_domain *p2m = &current->domain->arch.p2m;
>>     > +    struct p2m_domain *p2m = &v->domain->arch.p2m;
>>     >
>>     >      rc = gva_to_ipa(gva, &ipa, flag);
>>     >      if ( rc < 0 )
>>     > @@ -1482,7 +1483,7 @@ p2m_mem_access_check_and_get_page(vaddr_t
>>     gva, unsigned long flag)
>>     >       * We do this first as this is faster in the default case when
>> no
>>     >       * permission is set on the page.
>>     >       */
>>     > -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
>>     > +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
>>     >      if ( rc < 0 )
>>     >          goto err;
>>     >
>>     > @@ -1546,7 +1547,7 @@ p2m_mem_access_check_and_get_page(vaddr_t
>>     gva, unsigned long flag)
>>     >
>>     >      page = mfn_to_page(mfn_x(mfn));
>>     >
>>     > -    if ( unlikely(!get_page(page, current->domain)) )
>>     > +    if ( unlikely(!get_page(page, v->domain)) )
>>     >          page = NULL;
>>     >
>>     >  err:
>>
>>     Looking at these changes, wouldn't it be more reasonable to hand
>>     the function a struct domain *?
>>
>>
>> In its current state it might be but I believe with altp2m we will need
>> the vcpu struct here anyway.
>
>
> Not only for altp2m. I keep mentioning a bigger issue, but it seems to
> have been ignored every time...

I wouldn't say ignored. I think this is the first detailed description
I have seen of the problem you previously mentioned in passing, and I
still don't have a way to reproduce it.

>
> The translation VA to IPA (guest physical address) is done using hardware.
> If the underlying memory of the stage-1 page table is protected, the
> translation will fail. Given that this function is used in hypercalls to
> retrieve the page associated with a buffer, it means that it will not be
> possible to do a hypercall when the page table used to find the buffer IPA has
> not been touched.

This function specifically works around the case where the page of the
guest pagetable is not accessible due to mem_access, when the hardware
based lookup doesn't work. This function checks what the fault was,
checks the page type and the mem_access rights to determine whether
the fault was legit, or if it was due to mem_access. If it was
mem_access it gets the page without involving the hardware. I'm not
following what you describe afterwards regarding the buffer and what
you mean by "the buffer IPA has not been touched". Care to elaborate?

Thanks,
Tamas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12 18:42       ` Tamas K Lengyel
@ 2016-12-12 19:11         ` Julien Grall
  2016-12-12 19:41           ` Tamas K Lengyel
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2016-12-12 19:11 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

Hi Tamas,

On 12/12/16 18:42, Tamas K Lengyel wrote:
> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com> wrote:
>> The translation VA to IPA (guest physical address) is done using hardware.
>> If the underlying memory of the stage-1 page table is protected, the
>> translation will fail. Given that this function is used in hypercalls to
>> retrieve the page associated with a buffer, it means that it will not be
>> possible to do a hypercall when the page table used to find the buffer IPA has
>> not been touched.
>
> This function specifically works around the case where the page of the
> guest pagetable is not accessible due to mem_access, when the hardware
> based lookup doesn't work. This function checks what the fault was,
> checks the page type and the mem_access rights to determine whether
> the fault was legit, or if it was due to mem_access. If it was
> mem_access it gets the page without involving the hardware. I'm not
> following what you describe afterwards regarding the buffer and what
> you mean by "the buffer IPA has not been touched". Care to elaborate?

I am afraid to say that the function does not do what you think and is 
still using the hardware to do the translation. For instance the 
function gva_to_ipa is using the hardware to translate a VA to IPA.

This function is called when it is not possible to directly translate a 
VA to a PA. This may fail for various reasons:
	* The underlying memory of the buffer was restricted in stage-2
	* The underlying memory of stage-1 page tables was restricted in stage-2

Whilst the function is solving the former, the latter will not work due 
to the call to gva_to_ipa. This will fail because the stage-1 PT are not 
accessible.

One way to trigger it (note I haven't tested it myself) is to have the code
and data in separate pages. The translations will then use distinct
stage-1 page table entries. If nobody touches the relevant page table entry
beforehand (for instance by accessing the buffer), it will still be
inaccessible when Xen is calling p2m_mem_access_check_and_get_page.

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 24+ messages in thread
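
A hypothetical guest-side trigger along the lines described above. All
names are invented for illustration, and the hypercall is stubbed out so
the snippet stands alone; a real guest would use an actual buffer-taking
hypercall:

    /* Stub standing in for any buffer-taking hypercall; illustrative only. */
    static long HYPERVISOR_memory_op(unsigned int cmd, void *arg)
    {
        (void)cmd; (void)arg;
        return 0;
    }

    /* The buffer gets its own page and is never read or written by the
     * guest before the call, so the stage-1 PTE mapping it may never have
     * been walked by the time Xen tries to translate the buffer address. */
    static char hypercall_buf[4096] __attribute__((aligned(4096)));

    int main(void)
    {
        return (int)HYPERVISOR_memory_op(0, hypercall_buf);
    }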

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12 19:11         ` Julien Grall
@ 2016-12-12 19:41           ` Tamas K Lengyel
  2016-12-12 21:28             ` Julien Grall
  0 siblings, 1 reply; 24+ messages in thread
From: Tamas K Lengyel @ 2016-12-12 19:41 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

On Mon, Dec 12, 2016 at 12:11 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Tamas,
>
> On 12/12/16 18:42, Tamas K Lengyel wrote:
>>
>> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> The translation VA to IPA (guest physical address) is done using
>>> hardware.
>>> If the underlying memory of the stage-1 page table is protected, the
>>> translation will fail. Given that this function is used in hypercalls to
>>> retrieve the page associated with a buffer, it means that it will not be
>>> possible to do a hypercall when the page table used to find the buffer IPA
>>> has
>>> not been touched.
>>
>>
>> This function specifically works around the case where the page of the
>> guest pagetable is not accessible due to mem_access, when the hardware
>> based lookup doesn't work. This function checks what the fault was,
>> checks the page type and the mem_access rights to determine whether
>> the fault was legit, or if it was due to mem_access. If it was
>> mem_access it gets the page without involving the hardware. I'm not
>> following what you describe afterwards regarding the buffer and what
>> you mean by "the buffer IPA has not been touched". Care to elaborate?
>
>
> I am afraid to say that the function does not do what you think and is still
> using the hardware to do the translation. For instance the function
> gva_to_ipa is using the hardware to translate a VA to IPA.
>
> This function is called when it is not possible to directly translate a VA
> to a PA. This may fail for various reasons:
>         * The underlying memory of the buffer was restricted in stage-2
>         * The underlying memory of stage-1 page tables was restricted in
> stage-2
>
> Whilst the function is solving the former, the latter will not work due to
> the call to gva_to_ipa. This will fail because the stage-1 PT are not
> accessible.

I see. So IMHO this is not a problem with mem_access in general, but a
problem with a specific application of mem_access on ARM (i.e.
restricting read access to guest pagetables). It's a pity that ARM
doesn't report the IPA automatically during a stage-2 violation. A way
to work around this would require mem_access restrictions to be
completely removed, which cannot be done unless all other vCPUs of the
domain are paused to avoid a race condition. With altp2m I could also
envision creating a temporary p2m for the vcpu at hand with the
restriction removed, so that it doesn't affect other vcpus. However,
without a use case specifically requiring this to be implemented I
would not deem it critical. For now a comment in the header describing
this limitation would suffice from my perspective.

Thanks,
Tamas


^ permalink raw reply	[flat|nested] 24+ messages in thread
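
In sketch form, the pause-and-relax workaround outlined above. Every
helper here is a stand-in invented for illustration, not an existing Xen
interface:

    struct domain { int id; };

    /* Hypothetical helpers; the signatures are made up. */
    static void pause_other_vcpus(struct domain *d)                 { (void)d; }
    static void unpause_other_vcpus(struct domain *d)               { (void)d; }
    static int  relax_access(struct domain *d, unsigned long gfn)   { (void)d; (void)gfn; return 0; }
    static void restore_access(struct domain *d, unsigned long gfn) { (void)d; (void)gfn; }
    static int  retry_hw_translation(struct domain *d)              { (void)d; return 0; }

    /* Temporarily drop the mem_access restriction on the stage-1
     * page-table page so the hardware walk can succeed, pausing the
     * domain's other vCPUs to avoid the race mentioned above. */
    static int translate_with_relaxed_access(struct domain *d, unsigned long pt_gfn)
    {
        int rc;

        pause_other_vcpus(d);
        rc = relax_access(d, pt_gfn);
        if ( rc == 0 )
        {
            rc = retry_hw_translation(d);
            restore_access(d, pt_gfn);
        }
        unpause_other_vcpus(d);
        return rc;
    }

    int main(void)
    {
        struct domain d = { 0 };
        return translate_with_relaxed_access(&d, 0);
    }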

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12 19:41           ` Tamas K Lengyel
@ 2016-12-12 21:28             ` Julien Grall
  2016-12-12 23:47               ` Tamas K Lengyel
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2016-12-12 21:28 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

Hi Tamas,

On 12/12/2016 19:41, Tamas K Lengyel wrote:
> On Mon, Dec 12, 2016 at 12:11 PM, Julien Grall <julien.grall@arm.com> wrote:
>> Hi Tamas,
>>
>> On 12/12/16 18:42, Tamas K Lengyel wrote:
>>>
>>> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com>
>>> wrote:
>>>>
>>>> The translation VA to IPA (guest physical address) is done using
>>>> hardware.
>>>> If the underlying memory of the stage-1 page table is protected, the
>>>> translation will fail. Given that this function is used in hypercalls to
>>>> retrieve the page associated with a buffer, it means that it will not be
>>>> possible to do a hypercall when the page table used to find the buffer IPA
>>>> has
>>>> not been touched.
>>>
>>>
>>> This function specifically works around the case where the page of the
>>> guest pagetable is not accessible due to mem_access, when the hardware
>>> based lookup doesn't work. This function checks what the fault was,
>>> checks the page type and the mem_access rights to determine whether
>>> the fault was legit, or if it was due to mem_access. If it was
>>> mem_access it gets the page without involving the hardware. I'm not
>>> following what you describe afterwards regarding the buffer and what
>>> you mean by "the buffer IPA has not been touched". Care to elaborate?
>>
>>
>> I am afraid to say that the function does not do what you think and is still
>> using the hardware to do the translation. For instance the function
>> gva_to_ipa is using the hardware to translate a VA to IPA.
>>
>> This function is called when it is not possible to directly translate a VA
>> to a PA. This may fail for various reasons:
>>         * The underlying memory of the buffer was restricted in stage-2
>>         * The underlying memory of stage-1 page tables was restricted in
>> stage-2
>>
>> Whilst the function is solving the former, the latter will not work due to
>> the call to gva_to_ipa. This will fail because the stage-1 PT are not
>> accessible.
>
> I see. So IMHO this is not a problem with mem_access in general, but a
> problem with a specific application of mem_access on ARM (i.e.
> restricting read access to guest pagetables). It's a pity that ARM
> doesn't report the IPA automatically during a stage-2 violation.

I don't understand what you are asking for here. If you are not able to 
access stage-1 page table how would you be able to find the IPA?

It works on x86 because, IIRC, you do a software page table walking.
Although, I don't think you have particular write/read access checking 
on x86.

> A way to work around this would require mem_access restrictions to be
> completely removed, which cannot be done unless all other vCPUs of the
> domain are paused to avoid a race condition. With altp2m I could also
> envision creating a temporary p2m for the vcpu at hand with the
> restriction removed, so that it doesn't affect other vcpus. However,
> without a use case specifically requiring this to be implemented I
> would not deem it critical.

I suggested a use case in the previous e-mail... You are not affected today
because Linux creates hypercall buffers on the stack and heap, so the
memory will always have been accessed beforehand. I could foresee a guest
using const hypercall buffers.

> For now a comment in the header describing
> this limitation would suffice from my perspective.

So you are going to defer everything until someone actually hits it? It
might be time for you to focus a bit more on other use cases...

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12 21:28             ` Julien Grall
@ 2016-12-12 23:47               ` Tamas K Lengyel
  2016-12-13 12:50                 ` Julien Grall
  2016-12-13 18:28                 ` Andrew Cooper
  0 siblings, 2 replies; 24+ messages in thread
From: Tamas K Lengyel @ 2016-12-12 23:47 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

On Mon, Dec 12, 2016 at 2:28 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Tamas,
>
>
> On 12/12/2016 19:41, Tamas K Lengyel wrote:
>>
>> On Mon, Dec 12, 2016 at 12:11 PM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> Hi Tamas,
>>>
>>> On 12/12/16 18:42, Tamas K Lengyel wrote:
>>>>
>>>>
>>>> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>
>>>>>
>>>>> The translation VA to IPA (guest physical address) is done using
>>>>> hardware.
>>>>> If the underlying memory of the stage-1 page table is protected, the
>>>>> translation will fail. Given that this function is used in hypercalls to
>>>>> retrieve the page associated with a buffer, it means that it will not be
>>>>> possible to do a hypercall when the page table used to find the buffer
>>>>> IPA
>>>>> has
>>>>> not been touched.
>>>>
>>>>
>>>>
>>>> This function specifically works around the case where the page of the
>>>> guest pagetable is not accessible due to mem_access, when the hardware
>>>> based lookup doesn't work. This function checks what the fault was,
>>>> checks the page type and the mem_access rights to determine whether
>>>> the fault was legit, or if it was due to mem_access. If it was
>>>> mem_access it gets the page without involving the hardware. I'm not
>>>> following what you describe afterwards regarding the buffer and what
>>>> you mean by "the buffer IPA has not been touched". Care to elaborate?
>>>
>>>
>>>
>>> I am afraid to say that the function does not do what you think and is
>>> still
>>> using the hardware to do the translation. For instance the function
>>> gva_to_ipa is using the hardware to translate a VA to IPA.
>>>
>>> This function is called when it is not possible to directly translate a
>>> VA
>>> to a PA. This may fail for various reasons:
>>>         * The underlying memory of the buffer was restricted in stage-2
>>>         * The underlying memory of stage-1 page tables was restricted in
>>> stage-2
>>>
>>> Whilst the function is solving the former, the latter will not work due
>>> to
>>> the call to gva_to_ipa. This will fail because the stage-1 PT are not
>>> accessible.
>>
>>
>> I see. So IMHO this is not a problem with mem_access in general, but a
>> problem with a specific application of mem_access on ARM (i.e.
>> restricting read access to guest pagetables). It's a pity that ARM
>> doesn't report the IPA automatically during a stage-2 violation.
>
>
> I don't understand what you are asking for here. If you are not able to
> access stage-1 page table how would you be able to find the IPA?

I'm not asking for anything, I'm expressing how it's a pity that ARM
CPUs are limited in this regard compared to x86.

>
> It works on x86 because, IIRC, you do a software page table walking.
> Although, I don't think you have particular write/read access checking on
> x86.

I don't recall there being any software page walking being involved on
x86. Why would that be needed? On x86 we get the guest physical
address recorded by the CPU automatically. AFAIK in case the pagetable
was inaccessible for the translation of a VA, we would get an event
with the pagetable's PA and the type of event that generated it (i.e.
reading during translation).

>
>> A way to work around this would require mem_access restrictions to be
>> completely removed, which cannot be done unless all other vCPUs of the
>> domain are paused to avoid a race condition. With altp2m I could also
>> envision creating a temporary p2m for the vcpu at hand with the
>> restriction removed, so that it doesn't affect other vcpus. However,
>> without a use case specifically requiring this to be implemented I
>> would not deem it critical.
>
>
> I suggested a use case in the previous e-mail... You are not affected today
> because Linux creates hypercall buffers on the stack and heap, so the
> memory will always have been accessed beforehand. I could foresee a guest
> using const hypercall buffers.
>
>> For now a comment in the header describing
>> this limitation would suffice from my perspective.
>
>
> So you are going to defer everything until someone actually hit it? It might
> be time for you to focus a bit more on other use case...
>

Yes, as long as this is not a critical issue that breaks mem_access
and can be worked around, I'll postpone spending time on it. If someone
finds the time in the meantime to submit a patch fixing it, I would be
happy to review and test it.

Tamas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12 23:47               ` Tamas K Lengyel
@ 2016-12-13 12:50                 ` Julien Grall
  2016-12-13 18:03                   ` Tamas K Lengyel
  2016-12-13 18:39                   ` Tamas K Lengyel
  2016-12-13 18:28                 ` Andrew Cooper
  1 sibling, 2 replies; 24+ messages in thread
From: Julien Grall @ 2016-12-13 12:50 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

Hello Tamas,

On 12/12/16 23:47, Tamas K Lengyel wrote:
> On Mon, Dec 12, 2016 at 2:28 PM, Julien Grall <julien.grall@arm.com> wrote:
>> On 12/12/2016 19:41, Tamas K Lengyel wrote:
>>> On Mon, Dec 12, 2016 at 12:11 PM, Julien Grall <julien.grall@arm.com>
>>> wrote:
>>>> On 12/12/16 18:42, Tamas K Lengyel wrote:
>>>>> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com>
>>>>> wrote:
>>> I see. So IMHO this is not a problem with mem_access in general, but a
>>> problem with a specific application of mem_access on ARM (i.e.
>>> restricting read access to guest pagetables). It's a pity that ARM
>>> doesn't report the IPA automatically during a stage-2 violation.
>>
>>
>> I don't understand what you are asking for here. If you are not able to
>> access stage-1 page table how would you be able to find the IPA?
>
> I'm not asking for anything, I'm expressing how it's a pity that ARM
> CPUs are limited in this regard compared to x86.

Give a look at the ARM ARM before complaining. The IPA will be provided 
(see HPFAR) on a stage-2 data/prefetch abort fault.

>
>>
>> It works on x86 because, IIRC, you do a software page table walking.
>> Although, I don't think you have particular write/read access checking on
>> x86.
>
> I don't recall there being any software page walking being involved on
> x86. Why would that be needed? On x86 we get the guest physical
> address recorded by the CPU automatically. AFAIK in case the pagetable
> was inaccessible for the translation of a VA, we would get an event
> with the pagetable's PA and the type of event that generated it (i.e.
> reading during translation).

You are talking about a different thing. The function
p2m_mem_access_check_and_get_page is only used by the copy_*_guest
helpers, which copy hypercall buffers.

If you look at the x86 code (for simplicity, let's focus on HVM), the
function __hvm_copy will call paging_gva_to_gfn, which does the
table walk in software (see arch/x86/mm/hap/guest_walk.c). No hardware
instruction like on ARM...

Although it looks like there is a hardware instruction to do address
translation (see nvmx_hap_walk_L1_p2m), it is only for nested
virtualization. And even in this case, it will return the IPA (i.e.
guest PA) only if the stage-1 page tables are accessible.

>
>>
>>> A way to work around this would require mem_access restrictions to be
>>> completely removed, which cannot be done unless all other vCPUs of the
>>> domain are paused to avoid a race condition. With altp2m I could also
>>> envision creating a temporary p2m for the vcpu at hand with the
>>> restriction removed, so that it doesn't affect other vcpus. However,
>>> without a use case specifically requiring this to be implemented I
>>> would not deem it critical.
>>
>>
>> I suggested a use case in the previous e-mail... You are not affected today
>> because Linux creates hypercall buffers on the stack and heap, so the
>> memory will always have been accessed beforehand. I could foresee a guest
>> using const hypercall buffers.
>>
>>> For now a comment in the header describing
>>> this limitation would suffice from my perspective.
>>
>>
>> So you are going to defer everything until someone actually hit it? It might
>> be time for you to focus a bit more on other use case...
>>
>
> Yes, as long as this is not a critical issue that breaks mem_access
> and can be worked around I'll postpone spending time on it. If someone
> finds the time in the meanwhile to submit a patch fixing it I would be
> happy to review and test it.

I will be happy to keep the memaccess code in p2m.c until I see a strong
reason to move it into a separate file.

Yours sincerely,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 24+ messages in thread
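
For contrast with the hardware walk, a toy software translation in the
spirit of the paging_gva_to_gfn path described above. Nothing here mirrors
the real x86 walker; it only shows the shape of a walk done with explicit
loads that the hypervisor fully controls:

    #include <stdio.h>
    #include <stdint.h>

    /* Toy one-level "page table": entry = (gfn << 1) | present bit. A
     * software walker reads entries with ordinary loads instead of asking
     * the MMU to perform the walk. Entirely illustrative. */
    #define TOY_ENTRIES 16
    static uint64_t toy_pt[TOY_ENTRIES];

    static int toy_gva_to_gfn(uint64_t va, uint64_t *gfn)
    {
        uint64_t e = toy_pt[(va >> 12) % TOY_ENTRIES];

        if ( !(e & 1) )
            return -1;      /* not present */
        *gfn = e >> 1;
        return 0;
    }

    int main(void)
    {
        uint64_t gfn;

        toy_pt[3] = (UINT64_C(0xabcd) << 1) | 1;  /* VA page 3 -> GFN 0xabcd */
        if ( toy_gva_to_gfn(0x3123, &gfn) == 0 )
            printf("VA 0x3123 -> GFN 0x%llx\n", (unsigned long long)gfn);
        return 0;
    }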

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-13 12:50                 ` Julien Grall
@ 2016-12-13 18:03                   ` Tamas K Lengyel
  2016-12-13 18:39                   ` Tamas K Lengyel
  1 sibling, 0 replies; 24+ messages in thread
From: Tamas K Lengyel @ 2016-12-13 18:03 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

On Tue, Dec 13, 2016 at 5:50 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Tamas,
>
> On 12/12/16 23:47, Tamas K Lengyel wrote:
>>
>> On Mon, Dec 12, 2016 at 2:28 PM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> On 12/12/2016 19:41, Tamas K Lengyel wrote:
>>>>
>>>> On Mon, Dec 12, 2016 at 12:11 PM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>
>>>>> On 12/12/16 18:42, Tamas K Lengyel wrote:
>>>>>>
>>>>>> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com>
>>>>>> wrote:
>>>>
>>>> I see. So IMHO this is not a problem with mem_access in general, but a
>>>> problem with a specific application of mem_access on ARM (i.e.
>>>> restricting read access to guest pagetables). It's a pity that ARM
>>>> doesn't report the IPA automatically during a stage-2 violation.
>>>
>>>
>>>
>>> I don't understand what you are asking for here. If you are not able to
>>> access stage-1 page table how would you be able to find the IPA?
>>
>>
>> I'm not asking for anything, I'm expressing how it's a pity that ARM
>> CPUs are limited in this regard compared to x86.
>
>
> Give a look at the ARM ARM before complaining. The IPA will be provided (see
> HPFAR) on a stage-2 data/prefetch abort fault.
>
>>
>>>
>>> It works on x86 because, IIRC, you do a software page table walking.
>>> Although, I don't think you have particular write/read access checking on
>>> x86.
>>
>>
>> I don't recall there being any software page walking being involved on
>> x86. Why would that be needed? On x86 we get the guest physical
>> address recorded by the CPU automatically. AFAIK in case the pagetable
>> was inaccessible for the translation of a VA, we would get an event
>> with the pagetable's PA and the type of event that generated it (i.e.
>> reading during translation).
>
>
> You are talking about a different thing. The function
> p2m_mem_access_check_and_get_page is only used during copy_*_guest helpers
> which will copy hypercall buffer.
>
> If you give a look at the x86 code, for simplicity let's focus on HVM, the
> function __hvm_copy will call paging_gva_to_gfn which is doing the
> table walk in software (see arch/x86/mm/hap/guest_walk.c). No hardware
> instruction like on ARM...
>
> Although, it looks like that there is hardware instruction to do address
> translation (see nvmx_hap_walk_L1_p2m), but only for nested virtualization.
> And even in this case, they will return the IPA (e.g guest PA) only if
> stage-1 page table are accessible.
>
>>
>>>
>>>> A way to work around this would require mem_access restrictions to be
>>>> completely removed, which cannot be done unless all other vCPUs of the
>>>> domain are paused to avoid a race condition. With altp2m I could also
>>>> envision creating a temporary p2m for the vcpu at hand with the
>>>> restriction removed, so that it doesn't affect other vcpus. However,
>>>> without a use case specifically requiring this to be implemented I
>>>> would not deem it critical.
>>>
>>>
>>>
>>> I suggested a use case in the previous e-mail... You are not affected today
>>> because Linux creates hypercall buffers on the stack and heap, so the
>>> memory will always have been accessed beforehand. I could foresee a guest
>>> using const hypercall buffers.
>>>
>>>> For now a comment in the header describing
>>>> this limitation would suffice from my perspective.
>>>
>>>
>>>
>>> So you are going to defer everything until someone actually hit it? It
>>> might
>>> be time for you to focus a bit more on other use case...
>>>
>>
>> Yes, as long as this is not a critical issue that breaks mem_access
>> and can be worked around I'll postpone spending time on it. If someone
>> finds the time in the meanwhile to submit a patch fixing it I would be
>> happy to review and test it.
>
>
> I will be happy to keep the memaccess code in p2m.c until I see a strong
> reason to move it in a separate file.
>

Does that mean you want to take over maintainership of mem_access on
ARM? Otherwise I don't think that is an acceptable reason to keep it in
p2m.c.

Tamas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-12 23:47               ` Tamas K Lengyel
  2016-12-13 12:50                 ` Julien Grall
@ 2016-12-13 18:28                 ` Andrew Cooper
  2016-12-13 18:41                   ` Julien Grall
  1 sibling, 1 reply; 24+ messages in thread
From: Andrew Cooper @ 2016-12-13 18:28 UTC (permalink / raw)
  To: Tamas K Lengyel, Julien Grall; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

On 12/12/16 23:47, Tamas K Lengyel wrote:
>
>> It works on x86 because, IIRC, you do a software page table walking.
>> Although, I don't think you have particular write/read access checking on
>> x86.
> I don't recall there being any software page walking being involved on
> x86. Why would that be needed? On x86 we get the guest physical
> address recorded by the CPU automatically. AFAIK in case the pagetable
> was inaccessible for the translation of a VA, we would get an event
> with the pagetable's PA and the type of event that generated it (i.e.
> reading during translation).

x86 provides no mechanism to have hardware translate an address in a
separate context.  Therefore, all translations which are necessary need
to be done in hardware.

Newer versions of VT-x/SVM may provide additional information on a
vmexit, which can include a guest physical address relevant to the
reason for the vmexit.

Xen will attempt to proactively use this information to avoid a software
pagewalk, but can always fall back to the software method if needs be.


I presume ARM has always relied on hardware support for translation, and
has no software translation support?  I presume therefore that
translations only work when in current context?

~Andrew


^ permalink raw reply	[flat|nested] 24+ messages in thread
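
The pattern Andrew describes, in sketch form: prefer the guest physical
address when the vmexit supplied a valid one, and keep the software walk
as the always-available fallback. The helpers are invented stand-ins, not
real Xen or VT-x interfaces:

    #include <stdbool.h>
    #include <stdint.h>

    static bool exit_gpa_valid;   /* did the vmexit carry a valid GPA? */
    static uint64_t exit_gpa;

    static int software_walk(uint64_t va, uint64_t *gpa)
    {
        *gpa = va;   /* placeholder translation */
        return 0;
    }

    static int translate(uint64_t va, uint64_t *gpa)
    {
        if ( exit_gpa_valid )
        {
            *gpa = exit_gpa;           /* fast path: hardware told us */
            return 0;
        }
        return software_walk(va, gpa); /* fallback */
    }

    int main(void)
    {
        uint64_t gpa;
        return translate(0x1000, &gpa);
    }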

* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-13 12:50                 ` Julien Grall
  2016-12-13 18:03                   ` Tamas K Lengyel
@ 2016-12-13 18:39                   ` Tamas K Lengyel
  1 sibling, 0 replies; 24+ messages in thread
From: Tamas K Lengyel @ 2016-12-13 18:39 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

On Tue, Dec 13, 2016 at 5:50 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Tamas,
>
> On 12/12/16 23:47, Tamas K Lengyel wrote:
>>
>> On Mon, Dec 12, 2016 at 2:28 PM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> On 12/12/2016 19:41, Tamas K Lengyel wrote:
>>>>
>>>> On Mon, Dec 12, 2016 at 12:11 PM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>
>>>>> On 12/12/16 18:42, Tamas K Lengyel wrote:
>>>>>>
>>>>>> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@arm.com>
>>>>>> wrote:
>>>>
>>>> I see. So IMHO this is not a problem with mem_access in general, but a
>>>> problem with a specific application of mem_access on ARM (i.e.
>>>> restricting read access to guest pagetables). It's a pity that ARM
>>>> doesn't report the IPA automatically during a stage-2 violation.
>>>
>>>
>>>
>>> I don't understand what you are asking for here. If you are not able to
>>> access stage-1 page table how would you be able to find the IPA?
>>
>>
>> I'm not asking for anything, I'm expressing how it's a pity that ARM
>> CPUs are limited in this regard compared to x86.
>
>
> Give a look at the ARM ARM before complaining. The IPA will be provided (see
> HPFAR) on a stage-2 data/prefetch abort fault.

Yes, we are already using HPFAR_EL2 when the guest faults with
mem_access. Here, however, Xen faults when it tries to do
gvirt_to_maddr. For some reason I was under the impression this
register is only valid when the violation originates from guest
context - hence my comment. If that's not the case, we could avoid
using gva_to_ipa and determine whether mem_access was the reason for
the gvirt_to_maddr failure simply by hpfar_is_valid returning true,
AFAICT.
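
For illustration, a minimal sketch of what recovering the IPA from
HPFAR_EL2 could look like (the shift mirrors what Xen's trap handling
already does; treat the exact names as assumptions):

    /* HPFAR_EL2 holds FIPA[47:12] in bits [39:4], so shift the masked
     * value up by (12 - 4) and add back the offset within the page. */
    static inline paddr_t ipa_from_hpfar(vaddr_t gva)
    {
        register_t hpfar = READ_SYSREG(HPFAR_EL2);

        return ((paddr_t)(hpfar & HPFAR_MASK) << (12 - 4)) |
               (gva & ~PAGE_MASK);
    }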

Tamas


* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-13 18:28                 ` Andrew Cooper
@ 2016-12-13 18:41                   ` Julien Grall
  2016-12-13 19:39                     ` Andrew Cooper
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2016-12-13 18:41 UTC (permalink / raw)
  To: Andrew Cooper, Tamas K Lengyel; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

Hi Andrew,

On 13/12/16 18:28, Andrew Cooper wrote:
> On 12/12/16 23:47, Tamas K Lengyel wrote:
>>
>>> It works on x86 because, IIRC, you do software page table walking.
>>> Although, I don't think you have particular write/read access checking on
>>> x86.
>> I don't recall there being any software page walking being involved on
>> x86. Why would that be needed? On x86 we get the guest physical
>> address recorded by the CPU automatically. AFAIK in case the pagetable
>> was inaccessible for the translation of a VA, we would get an event
>> with the pagetable's PA and the type of event that generated it (i.e.
>> reading during translation).
>
> x86 provides no mechanism to have hardware translate an address in a
> separate context.  Therefore, all translations which are necessary need
> to be done in hardware.

I guess you meant "need to be done in software"?

>
> Newer versions of VT-x/SVM may provide additional information on a
>> vmexit, which includes a guest physical address relevant to the
>> reason for the vmexit.
>
> Xen will attempt to proactively use this information to avoid a software
> pagewalk, but can always fall back to the software method if needs be.

That is a good idea, but it may bring some issues with memaccess like
the ones we currently have on ARM.

>
>
> I presume ARM has always relied on hardware support for translation, and
> has no software translation support?  I presume therefore that
> translations only work when in current context?

That's correct. ARM has provided hardware support for most of the
translation from the start. But it relies on the guest page table being
accessible (e.g. there is no memory restriction in stage-2).

Note that ARM does not provide any hardware instruction to translate an 
IPA (guest physical address) to a PA. So we have a walker there.
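
(In Xen terms that software walker boils down to the p2m lookup used
throughout the patches in this thread, e.g.:

    mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);

which resolves a guest frame to a machine frame without any hardware
assist.)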

We also have a walker for debugging guest page tables (though only when
the guest is using LPAE). I guess it could be re-used in the case where
it is not possible to do it in hardware, although it would need to be
rewritten to make it safe.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-13 18:41                   ` Julien Grall
@ 2016-12-13 19:39                     ` Andrew Cooper
  2016-12-14 12:05                       ` Julien Grall
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Cooper @ 2016-12-13 19:39 UTC (permalink / raw)
  To: Julien Grall, Tamas K Lengyel; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

On 13/12/16 18:41, Julien Grall wrote:
> Hi Andrew,
>
> On 13/12/16 18:28, Andrew Cooper wrote:
>> On 12/12/16 23:47, Tamas K Lengyel wrote:
>>>
>>>> It works on x86 because, IIRC, you do software page table walking.
>>>> Although, I don't think you have particular write/read access
>>>> checking on
>>>> x86.
>>> I don't recall there being any software page walking being involved on
>>> x86. Why would that be needed? On x86 we get the guest physical
>>> address recorded by the CPU automatically. AFAIK in case the pagetable
>>> was inaccessible for the translation of a VA, we would get an event
>>> with the pagetable's PA and the type of event that generated it (i.e.
>>> reading during translation).
>>
>> x86 provides no mechanism to have hardware translate an address in a
>> separate context.  Therefore, all translations which are necessary need
>> to be done in hardware.
>
> I guess you meant "need to be done in software"?

I did.  Sorry for the confusion.

>
>>
>> Newer versions of VT-x/SVM may provide additional information on a
>> vmexit, which includes a guest physical address relevant to the
>> reason for the vmexit.
>>
>> Xen will attempt to proactively use this information to avoid a software
>> pagewalk, but can always fall back to the software method if needs be.
>
> That is a good idea, but it may bring some issues with memaccess like
> the ones we currently have on ARM.

Why would a software-only approach have problems on ARM?  Slow,
certainly, but it should function correctly.

>
>>
>>
>> I presume ARM has always relied on hardware support for translation, and
>> has no software translation support?  I presume therefore that
>> translations only work when in current context?
>
> That's correct. ARM has provided hardware support for most of the
> translation from the start. But it relies on the guest page table being
> accessible (e.g. there is no memory restriction in stage-2).

Ok, so ARM provides an instruction to translate an arbitrary virtual
address to guest physical.  How does this work in the context of Xen, or
can hardware follow the currently-active virtualisation state to find
the guest pagetables?  Or does it rely on information in the TLB?

> Note that ARM does not provide any hardware instruction to translate
> an IPA (guest physical address) to a PA. So we have a walker there.
>
> We also have a walker for debugging guest page tables (though only when
> the guest is using LPAE). I guess it could be re-used in the case where
> it is not possible to do it in hardware, although it would need to be
> rewritten to make it safe.

This sounds like the kind of thing which would be generally useful,
although I'd like to understand the problem better before making
suggestions.

~Andrew


* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-13 19:39                     ` Andrew Cooper
@ 2016-12-14 12:05                       ` Julien Grall
  2016-12-15  4:54                         ` George Dunlap
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2016-12-14 12:05 UTC (permalink / raw)
  To: Andrew Cooper, Tamas K Lengyel; +Cc: xen-devel, Stefano Stabellini, Jan Beulich

Hi Andrew,

On 13/12/16 19:39, Andrew Cooper wrote:
> On 13/12/16 18:41, Julien Grall wrote:
>> On 13/12/16 18:28, Andrew Cooper wrote:
>>> On 12/12/16 23:47, Tamas K Lengyel wrote:
>>> Newer versions of VT-x/SVM may provide additional information on a
>>> vmexit, which includes a guest physical address relevant to the
>>> reason for the vmexit.
>>>
>>> Xen will attempt to proactively use this information to avoid a software
>>> pagewalk, but can always fall back to the software method if needs be.
>>
>> That is a good idea, but it may bring some issues with memaccess like
>> the ones we currently have on ARM.
>
> Why would a software-only approach have problems on ARM?  Slow,
> certainly, but it should function correctly.

Sorry, I meant that using the hardware instruction to translate an
address on ARM has some drawbacks when using memaccess.

However, ARM supports two kinds of page tables (short-descriptor and
LPAE), different page sizes (4KB, 16KB, 64KB), split page tables,
endianness... This would require some work to make all the combinations
work.

Furthermore, on a 32-bit host not all the RAM is mapped in Xen. So
guest pages are mapped on demand, requiring TLB invalidations.

So I would only use the software approach when it is strictly necessary.

>
>>
>>>
>>>
>>> I presume ARM has always relied on hardware support for translation, and
>>> has no software translation support?  I presume therefore that
>>> translations only work when in current context?
>>
>> That's correct. ARM has provided hardware support for most of the
>> translation from the start. But it relies on the guest page table being
>> accessible (e.g. there is no memory restriction in stage-2).
>
> Ok, so ARM provides an instruction to translate an arbitrary virtual
> address to guest physical.  How does this work in the context of Xen, or
> can hardware follow the currently-active virtualisation state to find
> the guest pagetables?  Or does it rely on information in the TLB?

Xen and the guest vCPU use separate sets of registers. When running in
Xen context, the guest vCPU state is still present and will be used by
the hardware to translate a VA to a guest physical address.
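
Concretely, a helper can issue an AT (address translation) instruction
from hypervisor context and read the result back from PAR_EL1. A
minimal sketch, along the lines of Xen's gva_to_ipa_par() (the real
code also saves/restores PAR_EL1 and checks the fault bit):

    uint64_t par;

    /* Ask the MMU to do the stage-1 walk using the guest's own
     * translation regime, for read or for write as requested. */
    if ( flags & GV2M_WRITE )
        asm volatile ("at s1e1w, %0;" : : "r" (va));
    else
        asm volatile ("at s1e1r, %0;" : : "r" (va));
    isb();

    par = READ_SYSREG64(PAR_EL1);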

>
>> Note that ARM does not provide any hardware instruction to translate
>> an IPA (guest physical address) to a PA. So we have a walker there.
>>
>> We also have a walker for debugging guest page tables (though only when
>> the guest is using LPAE). I guess it could be re-used in the case where
>> it is not possible to do it in hardware, although it would need to be
>> rewritten to make it safe.
>
> This sounds like the kind of thing which would be generally useful,
> although I'd like to understand the problem better before making
> suggestions.

memaccess will restrict permissions on certain memory regions in the
stage-2 page table. For the purpose of the example, let's say read
access has been restricted.

One of these memory regions may contain the stage-1 page table. Do you
agree that the guest will not be able to read the stage-1 page table
due to the restriction?

Similarly, when the hardware does the page table walk, all the accesses
will be on behalf of the guest. So a read of the stage-1 page table
would fail and the hardware will not be able to do the translation.

I hope this clarifies the problem.
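
That is exactly the case the patch at the top of this thread handles:
when the hardware-backed walk in get_page_from_gva() fails and
mem_access is enabled, it falls back to the software check (quoting the
relevant hunk):

    err:
        if ( !page && p2m->mem_access_enabled )
            page = p2m_mem_access_check_and_get_page(va, flags, v);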

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-14 12:05                       ` Julien Grall
@ 2016-12-15  4:54                         ` George Dunlap
  2016-12-15 16:16                           ` Julien Grall
  0 siblings, 1 reply; 24+ messages in thread
From: George Dunlap @ 2016-12-15  4:54 UTC (permalink / raw)
  To: Julien Grall
  Cc: Andrew Cooper, Stefano Stabellini, xen-devel, Jan Beulich,
	Tamas K Lengyel

On Wed, Dec 14, 2016 at 8:05 PM, Julien Grall <julien.grall@arm.com> wrote:
>>> Note that ARM does not provide any hardware instruction to translate
>>> an IPA (guest physical address) to a PA. So we have a walker there.
>>>
>>> We also have a walker for debugging guest page tables (though only when
>>> the guest is using LPAE). I guess it could be re-used in the case where
>>> it is not possible to do it in hardware, although it would need to be
>>> rewritten to make it safe.
>>
>>
>> This sounds like the kind of thing which would be generally useful,
>> although I'd like to understand the problem better before making
>> suggestions.
>
>
> memaccess will restrict permissions on certain memory regions in the
> stage-2 page table. For the purpose of the example, let's say read
> access has been restricted.
>
> One of these memory regions may contain the stage-1 page table. Do you
> agree that the guest will not be able to read the stage-1 page table
> due to the restriction?

But only if the memory is read-protected, right?  If a guest PT
(stage-1) has read permission but not write permission, the hardware
walker should still work, shouldn't it?

 -George


* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-15  4:54                         ` George Dunlap
@ 2016-12-15 16:16                           ` Julien Grall
  0 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2016-12-15 16:16 UTC (permalink / raw)
  To: George Dunlap
  Cc: Andrew Cooper, Stefano Stabellini, xen-devel, Jan Beulich,
	Tamas K Lengyel

Hi George,

On 15/12/16 04:54, George Dunlap wrote:
> On Wed, Dec 14, 2016 at 8:05 PM, Julien Grall <julien.grall@arm.com> wrote:
>>>> Note that ARM does not provide any hardware instruction to translate
>>>> an IPA (guest physical address) to a PA. So we have a walker there.
>>>>
>>>> We also have a walker for debugging guest page tables (though only when
>>>> the guest is using LPAE). I guess it could be re-used in the case where
>>>> it is not possible to do it in hardware, although it would need to be
>>>> rewritten to make it safe.
>>>
>>>
>>> This sounds like the kind of thing which would be generally useful,
>>> although I'd like to understand the problem better before making
>>> suggestions.
>>
>>
>> memaccess will restrict permissions on certain memory regions in the
>> stage-2 page table. For the purpose of the example, let's say read
>> access has been restricted.
>>
>> One of these memory regions may contain the stage-1 page table. Do you
>> agree that the guest will not be able to read the stage-1 page table
>> due to the restriction?
>
> But only if the memory is read-protected, right?  If a guest PT
> (stage-1) has read permission but not write permission, the hardware
> walker should still work, shouldn't it?

Good question. ARMv8.1 adds support for hardware management of the
access flag and dirty state (see chapter B4 in DDI0557A.b). So if the
hardware has to update the page table entry, it would need write
permission.

I have looked at the pseudo-code for the address translation in both
ARMv8.0 and ARMv8.1.

ARMv8.0 does not need to update the table entry in hardware: looking
at the AArch64.CheckS2Permission pseudo-code (see J1-5290 in ARM DDI
0486A.k_iss10775), the hardware walker will check whether stage-2 has
read permission during a stage-1 page table walk (the s2fs1walk
variable is set to true).

For ARMv8.1, after spending a couple of hours reading the pseudo-code
(see E.1.2.5 in DDI0557A.b), my understanding is that the hardware
walker may need to update the entry (see
AArch64.CheckAndUpdateDescriptor). This depends on what triggered the
walker (e.g. an address translation instruction, an instruction fetch,
a memory access...). In the case of an address translation instruction
(AccType_AT), no hardware update will be done.

So to answer your question, the address translation instruction only
needs read access. The issue mentioned would therefore only happen when
the memory is read-protected.

Reading the pseudo-code reminded me of a potential upcoming error with
memaccess. I mentioned it on the ML a couple of months ago (see [1] and
the KVM counterpart [2]). On a stage-2 permission fault during a
stage-1 walk, the syndrome field WnR (Write not Read) will report
whether the abort was caused by a write instruction or a read
instruction, and not whether the hardware walker was reading or writing
the page table entry. This is pretty obvious from the new pseudo-code
for AArch64.CheckPermission in the spec (see J1-5290 in ARM DDI
0486A.k_iss10775).

I think that a guest whose stage-1 page tables are protected in
stage-2 could be stuck forever faulting and retrying a write
instruction, because memaccess would receive the wrong fault (i.e. a
write fault instead of a read).
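
As a sketch of what a fix could look like (field names follow Xen's
struct hsr_dabt; treat this as an assumption about the shape of a fix,
not a tested patch):

    /* On a stage-2 fault taken during a stage-1 table walk (s1ptw set)
     * the walker was only reading the guest's page tables, so treat
     * the fault as a read regardless of what WnR reports. */
    static bool fault_is_read(const struct hsr_dabt dabt)
    {
        if ( dabt.s1ptw )
            return true;        /* walker read, whatever WnR says */

        return !dabt.write;     /* otherwise WnR is accurate */
    }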

Cheers,

[1] https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg00228.html
[2] https://lists.cs.columbia.edu/pipermail/kvmarm/2016-September/021925.html

-- 
Julien Grall


* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-09 19:59 [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current Tamas K Lengyel
  2016-12-09 19:59 ` [PATCH v2 2/2] p2m: split mem_access into separate files Tamas K Lengyel
  2016-12-12  7:42 ` [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current Jan Beulich
@ 2017-01-03 15:29 ` Tamas K Lengyel
  2017-01-30 21:11 ` Stefano Stabellini
  3 siblings, 0 replies; 24+ messages in thread
From: Tamas K Lengyel @ 2017-01-03 15:29 UTC (permalink / raw)
  To: xen-devel, Stefano Stabellini; +Cc: Tamas K Lengyel, Julien Grall

On Fri, Dec 9, 2016 at 12:59 PM, Tamas K Lengyel
<tamas.lengyel@zentific.com> wrote:
> The only caller of this function is get_page_from_gva which already takes
> a vcpu pointer as input. Pass this along to make the function in-line with
> its intended use-case.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>

Patch ping.

> ---
>  xen/arch/arm/p2m.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index cc5634b..837be1d 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1461,7 +1461,8 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>   * we indeed found a conflicting mem_access setting.
>   */
>  static struct page_info*
> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v)
>  {
>      long rc;
>      paddr_t ipa;
> @@ -1470,7 +1471,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>      xenmem_access_t xma;
>      p2m_type_t t;
>      struct page_info *page = NULL;
> -    struct p2m_domain *p2m = &current->domain->arch.p2m;
> +    struct p2m_domain *p2m = &v->domain->arch.p2m;
>
>      rc = gva_to_ipa(gva, &ipa, flag);
>      if ( rc < 0 )
> @@ -1482,7 +1483,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
>       */
> -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
> +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
>      if ( rc < 0 )
>          goto err;
>
> @@ -1546,7 +1547,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>
>      page = mfn_to_page(mfn_x(mfn));
>
> -    if ( unlikely(!get_page(page, current->domain)) )
> +    if ( unlikely(!get_page(page, v->domain)) )
>          page = NULL;
>
>  err:
> @@ -1587,7 +1588,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>
>  err:
>      if ( !page && p2m->mem_access_enabled )
> -        page = p2m_mem_access_check_and_get_page(va, flags);
> +        page = p2m_mem_access_check_and_get_page(va, flags, v);
>
>      p2m_read_unlock(p2m);
>
> --
> 2.10.2
>


* Re: [PATCH v2 2/2] p2m: split mem_access into separate files
  2016-12-09 19:59 ` [PATCH v2 2/2] p2m: split mem_access into separate files Tamas K Lengyel
@ 2017-01-03 15:31   ` Tamas K Lengyel
  2017-01-11 16:33     ` Konrad Rzeszutek Wilk
  2017-01-30 21:11   ` Stefano Stabellini
  1 sibling, 1 reply; 24+ messages in thread
From: Tamas K Lengyel @ 2017-01-03 15:31 UTC (permalink / raw)
  To: xen-devel, Stefano Stabellini
  Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

On Fri, Dec 9, 2016 at 12:59 PM, Tamas K Lengyel
<tamas.lengyel@zentific.com> wrote:
> This patch relocates mem_access components that are currently mixed with p2m
> code into separate files. This better aligns the code with similar subsystems,
> such as mem_sharing and mem_paging, which are already in separate files. There
> are no code changes introduced; the patch is mechanical code movement.
>
> On ARM we also relocate the static inline gfn_next_boundary function to p2m.h
> as it is a function the mem_access code needs access to.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>

> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>
> v2: Don't move ARM radix tree functions
>     Include asm/mem_access.h in xen/mem_access.h

Patch ping. I think this only needs an ARM-side ack.

> ---
>  MAINTAINERS                      |   2 +
>  xen/arch/arm/Makefile            |   1 +
>  xen/arch/arm/mem_access.c        | 431 ++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/p2m.c               | 414 +----------------------------------
>  xen/arch/arm/traps.c             |   1 +
>  xen/arch/x86/mm/Makefile         |   1 +
>  xen/arch/x86/mm/mem_access.c     | 462 +++++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/mm/p2m.c            | 421 -----------------------------------
>  xen/arch/x86/vm_event.c          |   3 +-
>  xen/common/mem_access.c          |   2 +-
>  xen/include/asm-arm/mem_access.h |  53 +++++
>  xen/include/asm-arm/p2m.h        |  31 ++-
>  xen/include/asm-x86/mem_access.h |  61 ++++++
>  xen/include/asm-x86/p2m.h        |  24 +-
>  xen/include/xen/mem_access.h     |  67 +++++-
>  xen/include/xen/p2m-common.h     |  52 -----
>  16 files changed, 1089 insertions(+), 937 deletions(-)
>  create mode 100644 xen/arch/arm/mem_access.c
>  create mode 100644 xen/arch/x86/mm/mem_access.c
>  create mode 100644 xen/include/asm-arm/mem_access.h
>  create mode 100644 xen/include/asm-x86/mem_access.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index f0d0202..fb26be3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -402,6 +402,8 @@ S:  Supported
>  F:     tools/tests/xen-access
>  F:     xen/arch/*/monitor.c
>  F:     xen/arch/*/vm_event.c
> +F:     xen/arch/arm/mem_access.c
> +F:     xen/arch/x86/mm/mem_access.c
>  F:     xen/arch/x86/hvm/monitor.c
>  F:     xen/common/mem_access.c
>  F:     xen/common/monitor.c
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index da39d39..b095e8a 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -24,6 +24,7 @@ obj-y += io.o
>  obj-y += irq.o
>  obj-y += kernel.o
>  obj-$(CONFIG_LIVEPATCH) += livepatch.o
> +obj-y += mem_access.o
>  obj-y += mm.o
>  obj-y += monitor.o
>  obj-y += p2m.o
> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
> new file mode 100644
> index 0000000..a6e5bcd
> --- /dev/null
> +++ b/xen/arch/arm/mem_access.c
> @@ -0,0 +1,431 @@
> +/*
> + * arch/arm/mem_access.c
> + *
> + * Architecture-specific mem_access handling routines
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public
> + * License v2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public
> + * License along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/config.h>
> +#include <xen/mem_access.h>
> +#include <xen/monitor.h>
> +#include <xen/sched.h>
> +#include <xen/vm_event.h>
> +#include <public/vm_event.h>
> +#include <asm/event.h>
> +
> +static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> +                                xenmem_access_t *access)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    void *i;
> +    unsigned int index;
> +
> +    static const xenmem_access_t memaccess[] = {
> +#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> +            ACCESS(n),
> +            ACCESS(r),
> +            ACCESS(w),
> +            ACCESS(rw),
> +            ACCESS(x),
> +            ACCESS(rx),
> +            ACCESS(wx),
> +            ACCESS(rwx),
> +            ACCESS(rx2rw),
> +            ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    ASSERT(p2m_is_locked(p2m));
> +
> +    /* If no setting was ever set, just return rwx. */
> +    if ( !p2m->mem_access_enabled )
> +    {
> +        *access = XENMEM_access_rwx;
> +        return 0;
> +    }
> +
> +    /* If request to get default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        *access = memaccess[p2m->default_access];
> +        return 0;
> +    }
> +
> +    i = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
> +
> +    if ( !i )
> +    {
> +        /*
> +         * No setting was found in the Radix tree. Check if the
> +         * entry exists in the page-tables.
> +         */
> +        mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
> +
> +        if ( mfn_eq(mfn, INVALID_MFN) )
> +            return -ESRCH;
> +
> +        /* If entry exists then its rwx. */
> +        *access = XENMEM_access_rwx;
> +    }
> +    else
> +    {
> +        /* Setting was found in the Radix tree. */
> +        index = radix_tree_ptr_to_int(i);
> +        if ( index >= ARRAY_SIZE(memaccess) )
> +            return -ERANGE;
> +
> +        *access = memaccess[index];
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * If mem_access is in use it might have been the reason why get_page_from_gva
> + * failed to fetch the page, as it uses the MMU for the permission checking.
> + * Only in these cases we do a software-based type check and fetch the page if
> + * we indeed found a conflicting mem_access setting.
> + */
> +struct page_info*
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v)
> +{
> +    long rc;
> +    paddr_t ipa;
> +    gfn_t gfn;
> +    mfn_t mfn;
> +    xenmem_access_t xma;
> +    p2m_type_t t;
> +    struct page_info *page = NULL;
> +    struct p2m_domain *p2m = &v->domain->arch.p2m;
> +
> +    rc = gva_to_ipa(gva, &ipa, flag);
> +    if ( rc < 0 )
> +        goto err;
> +
> +    gfn = _gfn(paddr_to_pfn(ipa));
> +
> +    /*
> +     * We do this first as this is faster in the default case when no
> +     * permission is set on the page.
> +     */
> +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
> +    if ( rc < 0 )
> +        goto err;
> +
> +    /* Let's check if mem_access limited the access. */
> +    switch ( xma )
> +    {
> +    default:
> +    case XENMEM_access_rwx:
> +    case XENMEM_access_rw:
> +        /*
> +         * If mem_access contains no rw perm restrictions at all then the original
> +         * fault was correct.
> +         */
> +        goto err;
> +    case XENMEM_access_n2rwx:
> +    case XENMEM_access_n:
> +    case XENMEM_access_x:
> +        /*
> +         * If no r/w is permitted by mem_access, this was a fault caused by mem_access.
> +         */
> +        break;
> +    case XENMEM_access_wx:
> +    case XENMEM_access_w:
> +        /*
> +         * If this was a read then it was because of mem_access, but if it was
> +         * a write then the original get_page_from_gva fault was correct.
> +         */
> +        if ( flag == GV2M_READ )
> +            break;
> +        else
> +            goto err;
> +    case XENMEM_access_rx2rw:
> +    case XENMEM_access_rx:
> +    case XENMEM_access_r:
> +        /*
> +         * If this was a write then it was because of mem_access, but if it was
> +         * a read then the original get_page_from_gva fault was correct.
> +         */
> +        if ( flag == GV2M_WRITE )
> +            break;
> +        else
> +            goto err;
> +    }
> +
> +    /*
> +     * We had a mem_access permission limiting the access, but the page type
> +     * could also be limiting, so we need to check that as well.
> +     */
> +    mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +        goto err;
> +
> +    if ( !mfn_valid(mfn_x(mfn)) )
> +        goto err;
> +
> +    /*
> +     * Base type doesn't allow r/w
> +     */
> +    if ( t != p2m_ram_rw )
> +        goto err;
> +
> +    page = mfn_to_page(mfn_x(mfn));
> +
> +    if ( unlikely(!get_page(page, v->domain)) )
> +        page = NULL;
> +
> +err:
> +    return page;
> +}
> +
> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
> +{
> +    int rc;
> +    bool_t violation;
> +    xenmem_access_t xma;
> +    vm_event_request_t *req;
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
> +
> +    /* Mem_access is not in use. */
> +    if ( !p2m->mem_access_enabled )
> +        return true;
> +
> +    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
> +    if ( rc )
> +        return true;
> +
> +    /* Now check for mem_access violation. */
> +    switch ( xma )
> +    {
> +    case XENMEM_access_rwx:
> +        violation = false;
> +        break;
> +    case XENMEM_access_rw:
> +        violation = npfec.insn_fetch;
> +        break;
> +    case XENMEM_access_wx:
> +        violation = npfec.read_access;
> +        break;
> +    case XENMEM_access_rx:
> +    case XENMEM_access_rx2rw:
> +        violation = npfec.write_access;
> +        break;
> +    case XENMEM_access_x:
> +        violation = npfec.read_access || npfec.write_access;
> +        break;
> +    case XENMEM_access_w:
> +        violation = npfec.read_access || npfec.insn_fetch;
> +        break;
> +    case XENMEM_access_r:
> +        violation = npfec.write_access || npfec.insn_fetch;
> +        break;
> +    default:
> +    case XENMEM_access_n:
> +    case XENMEM_access_n2rwx:
> +        violation = true;
> +        break;
> +    }
> +
> +    if ( !violation )
> +        return true;
> +
> +    /* First, handle rx2rw and n2rwx conversion automatically. */
> +    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
> +    {
> +        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> +                                0, ~0, XENMEM_access_rw, 0);
> +        return false;
> +    }
> +    else if ( xma == XENMEM_access_n2rwx )
> +    {
> +        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> +                                0, ~0, XENMEM_access_rwx, 0);
> +    }
> +
> +    /* Otherwise, check if there is a vm_event monitor subscriber */
> +    if ( !vm_event_check_ring(&v->domain->vm_event->monitor) )
> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                                  "no vm_event listener VCPU %d, dom %d\n",
> +                                  v->vcpu_id, v->domain->domain_id);
> +            domain_crash(v->domain);
> +        }
> +        else
> +        {
> +            /* n2rwx was already handled */
> +            if ( xma != XENMEM_access_n2rwx )
> +            {
> +                /* A listener is not required, so clear the access
> +                 * restrictions. */
> +                rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> +                                        0, ~0, XENMEM_access_rwx, 0);
> +            }
> +        }
> +
> +        /* No need to reinject */
> +        return false;
> +    }
> +
> +    req = xzalloc(vm_event_request_t);
> +    if ( req )
> +    {
> +        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> +
> +        /* Send request to mem access subscriber */
> +        req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
> +        req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
> +        if ( npfec.gla_valid )
> +        {
> +            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> +            req->u.mem_access.gla = gla;
> +
> +            if ( npfec.kind == npfec_kind_with_gla )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> +            else if ( npfec.kind == npfec_kind_in_gpt )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> +        }
> +        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> +        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> +        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> +
> +        if ( monitor_traps(v, (xma != XENMEM_access_n2rwx), req) < 0 )
> +            domain_crash(v->domain);
> +
> +        xfree(req);
> +    }
> +
> +    return false;
> +}
> +
> +/*
> + * Set access type for a region of pfns.
> + * If gfn == INVALID_GFN, sets the default access type.
> + */
> +long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access,
> +                        unsigned int altp2m_idx)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_access_t a;
> +    unsigned int order;
> +    long rc = 0;
> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    switch ( access )
> +    {
> +    case 0 ... ARRAY_SIZE(memaccess) - 1:
> +        a = memaccess[access];
> +        break;
> +    case XENMEM_access_default:
> +        a = p2m->default_access;
> +        break;
> +    default:
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Flip mem_access_enabled to true when a permission is set, as to prevent
> +     * allocating or inserting super-pages.
> +     */
> +    p2m->mem_access_enabled = true;
> +
> +    /* If request to set default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        p2m->default_access = a;
> +        return 0;
> +    }
> +
> +    p2m_write_lock(p2m);
> +
> +    for ( gfn = gfn_add(gfn, start); nr > start;
> +          gfn = gfn_next_boundary(gfn, order) )
> +    {
> +        p2m_type_t t;
> +        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
> +
> +
> +        if ( !mfn_eq(mfn, INVALID_MFN) )
> +        {
> +            order = 0;
> +            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, a);
> +            if ( rc )
> +                break;
> +        }
> +
> +        start += gfn_x(gfn_next_boundary(gfn, order)) - gfn_x(gfn);
> +        /* Check for continuation if it is not the last iteration */
> +        if ( nr > start && !(start & mask) && hypercall_preempt_check() )
> +        {
> +            rc = start;
> +            break;
> +        }
> +    }
> +
> +    p2m_write_unlock(p2m);
> +
> +    return rc;
> +}
> +
> +long p2m_set_mem_access_multi(struct domain *d,
> +                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> +                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> +                              uint32_t nr, uint32_t start, uint32_t mask,
> +                              unsigned int altp2m_idx)
> +{
> +    /* Not yet implemented on ARM. */
> +    return -EOPNOTSUPP;
> +}
> +
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn,
> +                       xenmem_access_t *access)
> +{
> +    int ret;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    p2m_read_lock(p2m);
> +    ret = __p2m_get_mem_access(d, gfn, access);
> +    p2m_read_unlock(p2m);
> +
> +    return ret;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 837be1d..4e7ce3d 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -7,6 +7,7 @@
>  #include <xen/vm_event.h>
>  #include <xen/monitor.h>
>  #include <xen/iocap.h>
> +#include <xen/mem_access.h>
>  #include <public/vm_event.h>
>  #include <asm/flushtlb.h>
>  #include <asm/gic.h>
> @@ -58,22 +59,6 @@ static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
>      return (level < 3) && p2m_mapping(pte);
>  }
>
> -/*
> - * Return the start of the next mapping based on the order of the
> - * current one.
> - */
> -static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
> -{
> -    /*
> -     * The order corresponds to the order of the mapping (or invalid
> -     * range) in the page table. So we need to align the GFN before
> -     * incrementing.
> -     */
> -    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
> -
> -    return gfn_add(gfn, 1UL << order);
> -}
> -
>  static void p2m_flush_tlb(struct p2m_domain *p2m);
>
>  /* Unlock the flush and do a P2M TLB flush if necessary */
> @@ -602,73 +587,6 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
>      return 0;
>  }
>
> -static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> -                                xenmem_access_t *access)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    void *i;
> -    unsigned int index;
> -
> -    static const xenmem_access_t memaccess[] = {
> -#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> -            ACCESS(n),
> -            ACCESS(r),
> -            ACCESS(w),
> -            ACCESS(rw),
> -            ACCESS(x),
> -            ACCESS(rx),
> -            ACCESS(wx),
> -            ACCESS(rwx),
> -            ACCESS(rx2rw),
> -            ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    ASSERT(p2m_is_locked(p2m));
> -
> -    /* If no setting was ever set, just return rwx. */
> -    if ( !p2m->mem_access_enabled )
> -    {
> -        *access = XENMEM_access_rwx;
> -        return 0;
> -    }
> -
> -    /* If request to get default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        *access = memaccess[p2m->default_access];
> -        return 0;
> -    }
> -
> -    i = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
> -
> -    if ( !i )
> -    {
> -        /*
> -         * No setting was found in the Radix tree. Check if the
> -         * entry exists in the page-tables.
> -         */
> -        mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
> -
> -        if ( mfn_eq(mfn, INVALID_MFN) )
> -            return -ESRCH;
> -
> -        /* If entry exists then its rwx. */
> -        *access = XENMEM_access_rwx;
> -    }
> -    else
> -    {
> -        /* Setting was found in the Radix tree. */
> -        index = radix_tree_ptr_to_int(i);
> -        if ( index >= ARRAY_SIZE(memaccess) )
> -            return -ERANGE;
> -
> -        *access = memaccess[index];
> -    }
> -
> -    return 0;
> -}
> -
>  static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
>                                      p2m_access_t a)
>  {
> @@ -1454,106 +1372,6 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>      return p2m_lookup(d, gfn, NULL);
>  }
>
> -/*
> - * If mem_access is in use it might have been the reason why get_page_from_gva
> - * failed to fetch the page, as it uses the MMU for the permission checking.
> - * Only in these cases we do a software-based type check and fetch the page if
> - * we indeed found a conflicting mem_access setting.
> - */
> -static struct page_info*
> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> -                                  const struct vcpu *v)
> -{
> -    long rc;
> -    paddr_t ipa;
> -    gfn_t gfn;
> -    mfn_t mfn;
> -    xenmem_access_t xma;
> -    p2m_type_t t;
> -    struct page_info *page = NULL;
> -    struct p2m_domain *p2m = &v->domain->arch.p2m;
> -
> -    rc = gva_to_ipa(gva, &ipa, flag);
> -    if ( rc < 0 )
> -        goto err;
> -
> -    gfn = _gfn(paddr_to_pfn(ipa));
> -
> -    /*
> -     * We do this first as this is faster in the default case when no
> -     * permission is set on the page.
> -     */
> -    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
> -    if ( rc < 0 )
> -        goto err;
> -
> -    /* Let's check if mem_access limited the access. */
> -    switch ( xma )
> -    {
> -    default:
> -    case XENMEM_access_rwx:
> -    case XENMEM_access_rw:
> -        /*
> -         * If mem_access contains no rw perm restrictions at all then the original
> -         * fault was correct.
> -         */
> -        goto err;
> -    case XENMEM_access_n2rwx:
> -    case XENMEM_access_n:
> -    case XENMEM_access_x:
> -        /*
> -         * If no r/w is permitted by mem_access, this was a fault caused by mem_access.
> -         */
> -        break;
> -    case XENMEM_access_wx:
> -    case XENMEM_access_w:
> -        /*
> -         * If this was a read then it was because of mem_access, but if it was
> -         * a write then the original get_page_from_gva fault was correct.
> -         */
> -        if ( flag == GV2M_READ )
> -            break;
> -        else
> -            goto err;
> -    case XENMEM_access_rx2rw:
> -    case XENMEM_access_rx:
> -    case XENMEM_access_r:
> -        /*
> -         * If this was a write then it was because of mem_access, but if it was
> -         * a read then the original get_page_from_gva fault was correct.
> -         */
> -        if ( flag == GV2M_WRITE )
> -            break;
> -        else
> -            goto err;
> -    }
> -
> -    /*
> -     * We had a mem_access permission limiting the access, but the page type
> -     * could also be limiting, so we need to check that as well.
> -     */
> -    mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);
> -    if ( mfn_eq(mfn, INVALID_MFN) )
> -        goto err;
> -
> -    if ( !mfn_valid(mfn_x(mfn)) )
> -        goto err;
> -
> -    /*
> -     * Base type doesn't allow r/w
> -     */
> -    if ( t != p2m_ram_rw )
> -        goto err;
> -
> -    page = mfn_to_page(mfn_x(mfn));
> -
> -    if ( unlikely(!get_page(page, v->domain)) )
> -        page = NULL;
> -
> -err:
> -    return page;
> -}
> -
>  struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>                                      unsigned long flags)
>  {
> @@ -1666,236 +1484,6 @@ void __init setup_virt_paging(void)
>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>  }
>
> -bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
> -{
> -    int rc;
> -    bool_t violation;
> -    xenmem_access_t xma;
> -    vm_event_request_t *req;
> -    struct vcpu *v = current;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
> -
> -    /* Mem_access is not in use. */
> -    if ( !p2m->mem_access_enabled )
> -        return true;
> -
> -    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
> -    if ( rc )
> -        return true;
> -
> -    /* Now check for mem_access violation. */
> -    switch ( xma )
> -    {
> -    case XENMEM_access_rwx:
> -        violation = false;
> -        break;
> -    case XENMEM_access_rw:
> -        violation = npfec.insn_fetch;
> -        break;
> -    case XENMEM_access_wx:
> -        violation = npfec.read_access;
> -        break;
> -    case XENMEM_access_rx:
> -    case XENMEM_access_rx2rw:
> -        violation = npfec.write_access;
> -        break;
> -    case XENMEM_access_x:
> -        violation = npfec.read_access || npfec.write_access;
> -        break;
> -    case XENMEM_access_w:
> -        violation = npfec.read_access || npfec.insn_fetch;
> -        break;
> -    case XENMEM_access_r:
> -        violation = npfec.write_access || npfec.insn_fetch;
> -        break;
> -    default:
> -    case XENMEM_access_n:
> -    case XENMEM_access_n2rwx:
> -        violation = true;
> -        break;
> -    }
> -
> -    if ( !violation )
> -        return true;
> -
> -    /* First, handle rx2rw and n2rwx conversion automatically. */
> -    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
> -    {
> -        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> -                                0, ~0, XENMEM_access_rw, 0);
> -        return false;
> -    }
> -    else if ( xma == XENMEM_access_n2rwx )
> -    {
> -        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> -                                0, ~0, XENMEM_access_rwx, 0);
> -    }
> -
> -    /* Otherwise, check if there is a vm_event monitor subscriber */
> -    if ( !vm_event_check_ring(&v->domain->vm_event->monitor) )
> -    {
> -        /* No listener */
> -        if ( p2m->access_required )
> -        {
> -            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> -                                  "no vm_event listener VCPU %d, dom %d\n",
> -                                  v->vcpu_id, v->domain->domain_id);
> -            domain_crash(v->domain);
> -        }
> -        else
> -        {
> -            /* n2rwx was already handled */
> -            if ( xma != XENMEM_access_n2rwx )
> -            {
> -                /* A listener is not required, so clear the access
> -                 * restrictions. */
> -                rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> -                                        0, ~0, XENMEM_access_rwx, 0);
> -            }
> -        }
> -
> -        /* No need to reinject */
> -        return false;
> -    }
> -
> -    req = xzalloc(vm_event_request_t);
> -    if ( req )
> -    {
> -        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> -
> -        /* Send request to mem access subscriber */
> -        req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
> -        req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
> -        if ( npfec.gla_valid )
> -        {
> -            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> -            req->u.mem_access.gla = gla;
> -
> -            if ( npfec.kind == npfec_kind_with_gla )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> -            else if ( npfec.kind == npfec_kind_in_gpt )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> -        }
> -        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> -        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> -        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> -
> -        if ( monitor_traps(v, (xma != XENMEM_access_n2rwx), req) < 0 )
> -            domain_crash(v->domain);
> -
> -        xfree(req);
> -    }
> -
> -    return false;
> -}
> -
> -/*
> - * Set access type for a region of pfns.
> - * If gfn == INVALID_GFN, sets the default access type.
> - */
> -long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> -                        uint32_t start, uint32_t mask, xenmem_access_t access,
> -                        unsigned int altp2m_idx)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    p2m_access_t a;
> -    unsigned int order;
> -    long rc = 0;
> -
> -    static const p2m_access_t memaccess[] = {
> -#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> -        ACCESS(n),
> -        ACCESS(r),
> -        ACCESS(w),
> -        ACCESS(rw),
> -        ACCESS(x),
> -        ACCESS(rx),
> -        ACCESS(wx),
> -        ACCESS(rwx),
> -        ACCESS(rx2rw),
> -        ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    switch ( access )
> -    {
> -    case 0 ... ARRAY_SIZE(memaccess) - 1:
> -        a = memaccess[access];
> -        break;
> -    case XENMEM_access_default:
> -        a = p2m->default_access;
> -        break;
> -    default:
> -        return -EINVAL;
> -    }
> -
> -    /*
> -     * Flip mem_access_enabled to true when a permission is set, as to prevent
> -     * allocating or inserting super-pages.
> -     */
> -    p2m->mem_access_enabled = true;
> -
> -    /* If request to set default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        p2m->default_access = a;
> -        return 0;
> -    }
> -
> -    p2m_write_lock(p2m);
> -
> -    for ( gfn = gfn_add(gfn, start); nr > start;
> -          gfn = gfn_next_boundary(gfn, order) )
> -    {
> -        p2m_type_t t;
> -        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
> -
> -
> -        if ( !mfn_eq(mfn, INVALID_MFN) )
> -        {
> -            order = 0;
> -            rc = __p2m_set_entry(p2m, gfn, 0, mfn, t, a);
> -            if ( rc )
> -                break;
> -        }
> -
> -        start += gfn_x(gfn_next_boundary(gfn, order)) - gfn_x(gfn);
> -        /* Check for continuation if it is not the last iteration */
> -        if ( nr > start && !(start & mask) && hypercall_preempt_check() )
> -        {
> -            rc = start;
> -            break;
> -        }
> -    }
> -
> -    p2m_write_unlock(p2m);
> -
> -    return rc;
> -}
> -
> -long p2m_set_mem_access_multi(struct domain *d,
> -                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> -                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> -                              uint32_t nr, uint32_t start, uint32_t mask,
> -                              unsigned int altp2m_idx)
> -{
> -    /* Not yet implemented on ARM. */
> -    return -EOPNOTSUPP;
> -}
> -
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn,
> -                       xenmem_access_t *access)
> -{
> -    int ret;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> -    p2m_read_lock(p2m);
> -    ret = __p2m_get_mem_access(d, gfn, access);
> -    p2m_read_unlock(p2m);
> -
> -    return ret;
> -}
> -
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 8ff73fe..f2ea083 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -32,6 +32,7 @@
>  #include <xen/domain_page.h>
>  #include <xen/perfc.h>
>  #include <xen/virtual_region.h>
> +#include <xen/mem_access.h>
>  #include <public/sched.h>
>  #include <public/xen.h>
>  #include <asm/debugger.h>
> diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
> index 9804c3a..e977dd8 100644
> --- a/xen/arch/x86/mm/Makefile
> +++ b/xen/arch/x86/mm/Makefile
> @@ -9,6 +9,7 @@ obj-y += guest_walk_3.o
>  obj-y += guest_walk_4.o
>  obj-y += mem_paging.o
>  obj-y += mem_sharing.o
> +obj-y += mem_access.o
>
>  guest_walk_%.o: guest_walk.c Makefile
>         $(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
> diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
> new file mode 100644
> index 0000000..34a994d
> --- /dev/null
> +++ b/xen/arch/x86/mm/mem_access.c
> @@ -0,0 +1,462 @@
> +/******************************************************************************
> + * arch/x86/mm/mem_access.c
> + *
> + * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
> + * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
> + * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
> + * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
> + * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/guest_access.h> /* copy_from_guest() */
> +#include <xen/mem_access.h>
> +#include <xen/vm_event.h>
> +#include <xen/event.h>
> +#include <public/vm_event.h>
> +#include <asm/p2m.h>
> +#include <asm/altp2m.h>
> +#include <asm/vm_event.h>
> +
> +#include "mm-locks.h"
> +
> +bool p2m_mem_access_emulate_check(struct vcpu *v,
> +                                  const vm_event_response_t *rsp)
> +{
> +    xenmem_access_t access;
> +    bool violation = 1;
> +    const struct vm_event_mem_access *data = &rsp->u.mem_access;
> +
> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    {
> +        switch ( access )
> +        {
> +        case XENMEM_access_n:
> +        case XENMEM_access_n2rwx:
> +        default:
> +            violation = data->flags & MEM_ACCESS_RWX;
> +            break;
> +
> +        case XENMEM_access_r:
> +            violation = data->flags & MEM_ACCESS_WX;
> +            break;
> +
> +        case XENMEM_access_w:
> +            violation = data->flags & MEM_ACCESS_RX;
> +            break;
> +
> +        case XENMEM_access_x:
> +            violation = data->flags & MEM_ACCESS_RW;
> +            break;
> +
> +        case XENMEM_access_rx:
> +        case XENMEM_access_rx2rw:
> +            violation = data->flags & MEM_ACCESS_W;
> +            break;
> +
> +        case XENMEM_access_wx:
> +            violation = data->flags & MEM_ACCESS_R;
> +            break;
> +
> +        case XENMEM_access_rw:
> +            violation = data->flags & MEM_ACCESS_X;
> +            break;
> +
> +        case XENMEM_access_rwx:
> +            violation = 0;
> +            break;
> +        }
> +    }
> +
> +    return violation;
> +}
> +
> +bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> +                            struct npfec npfec,
> +                            vm_event_request_t **req_ptr)
> +{
> +    struct vcpu *v = current;
> +    unsigned long gfn = gpa >> PAGE_SHIFT;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = NULL;
> +    mfn_t mfn;
> +    p2m_type_t p2mt;
> +    p2m_access_t p2ma;
> +    vm_event_request_t *req;
> +    int rc;
> +
> +    if ( altp2m_active(d) )
> +        p2m = p2m_get_altp2m(v);
> +    if ( !p2m )
> +        p2m = p2m_get_hostp2m(d);
> +
> +    /* First, handle rx2rw conversion automatically.
> +     * These calls to p2m->set_entry() must succeed: we have the gfn
> +     * locked and just did a successful get_entry(). */
> +    gfn_lock(p2m, gfn, 0);
> +    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> +
> +    if ( npfec.write_access && p2ma == p2m_access_rx2rw )
> +    {
> +        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw, -1);
> +        ASSERT(rc == 0);
> +        gfn_unlock(p2m, gfn, 0);
> +        return 1;
> +    }
> +    else if ( p2ma == p2m_access_n2rwx )
> +    {
> +        ASSERT(npfec.write_access || npfec.read_access || npfec.insn_fetch);
> +        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> +                            p2mt, p2m_access_rwx, -1);
> +        ASSERT(rc == 0);
> +    }
> +    gfn_unlock(p2m, gfn, 0);
> +
> +    /* Otherwise, check if there is a memory event listener, and send the message along */
> +    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr )
> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                                  "no vm_event listener VCPU %d, dom %d\n",
> +                                  v->vcpu_id, d->domain_id);
> +            domain_crash(v->domain);
> +            return 0;
> +        }
> +        else
> +        {
> +            gfn_lock(p2m, gfn, 0);
> +            mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> +            if ( p2ma != p2m_access_n2rwx )
> +            {
> +                /* A listener is not required, so clear the access
> +                 * restrictions.  This set must succeed: we have the
> +                 * gfn locked and just did a successful get_entry(). */
> +                rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> +                                    p2mt, p2m_access_rwx, -1);
> +                ASSERT(rc == 0);
> +            }
> +            gfn_unlock(p2m, gfn, 0);
> +            return 1;
> +        }
> +    }
> +
> +    *req_ptr = NULL;
> +    req = xzalloc(vm_event_request_t);
> +    if ( req )
> +    {
> +        *req_ptr = req;
> +
> +        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> +        req->u.mem_access.gfn = gfn;
> +        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
> +        if ( npfec.gla_valid )
> +        {
> +            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> +            req->u.mem_access.gla = gla;
> +
> +            if ( npfec.kind == npfec_kind_with_gla )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> +            else if ( npfec.kind == npfec_kind_in_gpt )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> +        }
> +        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> +        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> +        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> +    }
> +
> +    /* Return whether vCPU pause is required (aka. sync event) */
> +    return (p2ma != p2m_access_n2rwx);
> +}
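
Note the split of responsibilities here: this function only builds the
request and reports via its return value whether the event is
synchronous. Sending and freeing the request is the caller's job, per
the header comment later in this patch. A rough caller-side sketch (the
real call site is the x86 HVM nested-page-fault path; this fragment is
illustrative only):

    vm_event_request_t *req = NULL;
    bool_t sync = p2m_mem_access_check(gpa, gla, npfec, &req);

    if ( req )
    {
        /* Pause the vCPU only for synchronous events; error
         * handling for a full ring is elided here. */
        monitor_traps(current, sync, req);
        xfree(req);
    }
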
> +
> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
> +                              struct p2m_domain *ap2m, p2m_access_t a,
> +                              gfn_t gfn)
> +{
> +    mfn_t mfn;
> +    p2m_type_t t;
> +    p2m_access_t old_a;
> +    unsigned int page_order;
> +    unsigned long gfn_l = gfn_x(gfn);
> +    int rc;
> +
> +    mfn = ap2m->get_entry(ap2m, gfn_l, &t, &old_a, 0, NULL, NULL);
> +
> +    /* Check host p2m if no valid entry in alternate */
> +    if ( !mfn_valid(mfn_x(mfn)) )
> +    {
> +
> +        mfn = __get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
> +                                    P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
> +
> +        rc = -ESRCH;
> +        if ( !mfn_valid(mfn_x(mfn)) || t != p2m_ram_rw )
> +            return rc;
> +
> +        /* If this is a superpage, copy that first */
> +        if ( page_order != PAGE_ORDER_4K )
> +        {
> +            unsigned long mask = ~((1UL << page_order) - 1);
> +            unsigned long gfn2_l = gfn_l & mask;
> +            mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
> +
> +            rc = ap2m->set_entry(ap2m, gfn2_l, mfn2, page_order, t, old_a, 1);
> +            if ( rc )
> +                return rc;
> +        }
> +    }
> +
> +    return ap2m->set_entry(ap2m, gfn_l, mfn, PAGE_ORDER_4K, t, a,
> +                         (current->domain != d));
> +}
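
The superpage branch mirrors the whole host mapping into the altp2m at
its original order before the 4K override lands, so the rest of the
superpage keeps its old access. A quick standalone check of the mask
arithmetic for a 2M mapping (order 9, illustrative values):

    #include <assert.h>

    int main(void)
    {
        unsigned int page_order = 9;   /* 2M superpage in 4K pages */
        unsigned long gfn_l = 0x12345;
        unsigned long mask = ~((1UL << page_order) - 1);

        /* Both gfn and mfn align down to the 512-page boundary. */
        assert((gfn_l & mask) == 0x12200);
        return 0;
    }
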
> +
> +static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
> +                          struct p2m_domain *ap2m, p2m_access_t a,
> +                          gfn_t gfn)
> +{
> +    int rc = 0;
> +
> +    if ( ap2m )
> +    {
> +        rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn);
> +        /* If the corresponding mfn is invalid we will want to just skip it */
> +        if ( rc == -ESRCH )
> +            rc = 0;
> +    }
> +    else
> +    {
> +        mfn_t mfn;
> +        p2m_access_t _a;
> +        p2m_type_t t;
> +        unsigned long gfn_l = gfn_x(gfn);
> +
> +        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
> +        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
> +    }
> +
> +    return rc;
> +}
> +
> +static bool xenmem_access_to_p2m_access(struct p2m_domain *p2m,
> +                                        xenmem_access_t xaccess,
> +                                        p2m_access_t *paccess)
> +{
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    switch ( xaccess )
> +    {
> +    case 0 ... ARRAY_SIZE(memaccess) - 1:
> +        *paccess = memaccess[xaccess];
> +        break;
> +    case XENMEM_access_default:
> +        *paccess = p2m->default_access;
> +        break;
> +    default:
> +        return false;
> +    }
> +
> +    return true;
> +}
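
The "case 0 ..." form is the GCC/Clang case-range extension, and it is
safe here only because the plain XENMEM_access_* values are dense from
0 upwards. A portable equivalent of the same body, shown purely for
comparison:

    if ( xaccess < ARRAY_SIZE(memaccess) )
        *paccess = memaccess[xaccess];
    else if ( xaccess == XENMEM_access_default )
        *paccess = p2m->default_access;
    else
        return false;

    return true;
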
> +
> +/*
> + * Set access type for a region of gfns.
> + * If gfn == INVALID_GFN, sets the default access type.
> + */
> +long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access,
> +                        unsigned int altp2m_idx)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> +    p2m_access_t a;
> +    unsigned long gfn_l;
> +    long rc = 0;
> +
> +    /* altp2m view 0 is treated as the hostp2m */
> +    if ( altp2m_idx )
> +    {
> +        if ( altp2m_idx >= MAX_ALTP2M ||
> +             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> +            return -EINVAL;
> +
> +        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> +    }
> +
> +    if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> +        return -EINVAL;
> +
> +    /* If request to set default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        p2m->default_access = a;
> +        return 0;
> +    }
> +
> +    p2m_lock(p2m);
> +    if ( ap2m )
> +        p2m_lock(ap2m);
> +
> +    for ( gfn_l = gfn_x(gfn) + start; nr > start; ++gfn_l )
> +    {
> +        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> +
> +        if ( rc )
> +            break;
> +
> +        /* Check for continuation if it's not the last iteration. */
> +        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> +        {
> +            rc = start;
> +            break;
> +        }
> +    }
> +
> +    if ( ap2m )
> +        p2m_unlock(ap2m);
> +    p2m_unlock(p2m);
> +
> +    return rc;
> +}
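
The return convention doubles as the preemption protocol: 0 means the
whole range was processed, a negative value is an errno, and a positive
value is the offset to resume from after a continuation. A hedged
sketch of how the memop layer consumes it (simplified; the real logic
lives in xen/common/mem_access.c):

    long rc = p2m_set_mem_access(d, gfn, nr, start, mask, access, 0);

    if ( rc > 0 )
    {
        /* Partially done: stash rc as the new start offset and
         * re-enter, e.g. via hypercall_create_continuation(). */
    }
    else if ( rc < 0 )
    {
        /* Genuine failure (-EINVAL, -EFAULT, ...). */
    }
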
> +
> +long p2m_set_mem_access_multi(struct domain *d,
> +                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> +                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> +                              uint32_t nr, uint32_t start, uint32_t mask,
> +                              unsigned int altp2m_idx)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> +    long rc = 0;
> +
> +    /* altp2m view 0 is treated as the hostp2m */
> +    if ( altp2m_idx )
> +    {
> +        if ( altp2m_idx >= MAX_ALTP2M ||
> +             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> +            return -EINVAL;
> +
> +        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> +    }
> +
> +    p2m_lock(p2m);
> +    if ( ap2m )
> +        p2m_lock(ap2m);
> +
> +    while ( start < nr )
> +    {
> +        p2m_access_t a;
> +        uint8_t access;
> +        uint64_t gfn_l;
> +
> +        if ( copy_from_guest_offset(&gfn_l, pfn_list, start, 1) ||
> +             copy_from_guest_offset(&access, access_list, start, 1) )
> +        {
> +            rc = -EFAULT;
> +            break;
> +        }
> +
> +        if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +
> +        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> +
> +        if ( rc )
> +            break;
> +
> +        /* Check for continuation if it's not the last iteration. */
> +        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> +        {
> +            rc = start;
> +            break;
> +        }
> +    }
> +
> +    if ( ap2m )
> +        p2m_unlock(ap2m);
> +    p2m_unlock(p2m);
> +
> +    return rc;
> +}
> +
> +/*
> + * Get access type for a gfn.
> + * If gfn == INVALID_GFN, gets the default access type.
> + */
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_type_t t;
> +    p2m_access_t a;
> +    mfn_t mfn;
> +
> +    static const xenmem_access_t memaccess[] = {
> +#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> +            ACCESS(n),
> +            ACCESS(r),
> +            ACCESS(w),
> +            ACCESS(rw),
> +            ACCESS(x),
> +            ACCESS(rx),
> +            ACCESS(wx),
> +            ACCESS(rwx),
> +            ACCESS(rx2rw),
> +            ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    /* If request to get default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        *access = memaccess[p2m->default_access];
> +        return 0;
> +    }
> +
> +    gfn_lock(p2m, gfn, 0);
> +    mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
> +    gfn_unlock(p2m, gfn, 0);
> +
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +        return -ESRCH;
> +
> +    if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
> +        return -ERANGE;
> +
> +    *access =  memaccess[a];
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 6a45185..6299d5a 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1589,433 +1589,12 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>      }
>  }
>
> -bool p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp)
> -{
> -    xenmem_access_t access;
> -    bool violation = 1;
> -    const struct vm_event_mem_access *data = &rsp->u.mem_access;
> -
> -    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> -    {
> -        switch ( access )
> -        {
> -        case XENMEM_access_n:
> -        case XENMEM_access_n2rwx:
> -        default:
> -            violation = data->flags & MEM_ACCESS_RWX;
> -            break;
> -
> -        case XENMEM_access_r:
> -            violation = data->flags & MEM_ACCESS_WX;
> -            break;
> -
> -        case XENMEM_access_w:
> -            violation = data->flags & MEM_ACCESS_RX;
> -            break;
> -
> -        case XENMEM_access_x:
> -            violation = data->flags & MEM_ACCESS_RW;
> -            break;
> -
> -        case XENMEM_access_rx:
> -        case XENMEM_access_rx2rw:
> -            violation = data->flags & MEM_ACCESS_W;
> -            break;
> -
> -        case XENMEM_access_wx:
> -            violation = data->flags & MEM_ACCESS_R;
> -            break;
> -
> -        case XENMEM_access_rw:
> -            violation = data->flags & MEM_ACCESS_X;
> -            break;
> -
> -        case XENMEM_access_rwx:
> -            violation = 0;
> -            break;
> -        }
> -    }
> -
> -    return violation;
> -}
> -
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  {
>      if ( altp2m_active(v->domain) )
>          p2m_switch_vcpu_altp2m_by_id(v, idx);
>  }
>
> -bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> -                            struct npfec npfec,
> -                            vm_event_request_t **req_ptr)
> -{
> -    struct vcpu *v = current;
> -    unsigned long gfn = gpa >> PAGE_SHIFT;
> -    struct domain *d = v->domain;
> -    struct p2m_domain *p2m = NULL;
> -    mfn_t mfn;
> -    p2m_type_t p2mt;
> -    p2m_access_t p2ma;
> -    vm_event_request_t *req;
> -    int rc;
> -
> -    if ( altp2m_active(d) )
> -        p2m = p2m_get_altp2m(v);
> -    if ( !p2m )
> -        p2m = p2m_get_hostp2m(d);
> -
> -    /* First, handle rx2rw conversion automatically.
> -     * These calls to p2m->set_entry() must succeed: we have the gfn
> -     * locked and just did a successful get_entry(). */
> -    gfn_lock(p2m, gfn, 0);
> -    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> -
> -    if ( npfec.write_access && p2ma == p2m_access_rx2rw )
> -    {
> -        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw, -1);
> -        ASSERT(rc == 0);
> -        gfn_unlock(p2m, gfn, 0);
> -        return 1;
> -    }
> -    else if ( p2ma == p2m_access_n2rwx )
> -    {
> -        ASSERT(npfec.write_access || npfec.read_access || npfec.insn_fetch);
> -        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> -                            p2mt, p2m_access_rwx, -1);
> -        ASSERT(rc == 0);
> -    }
> -    gfn_unlock(p2m, gfn, 0);
> -
> -    /* Otherwise, check if there is a memory event listener, and send the message along */
> -    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr )
> -    {
> -        /* No listener */
> -        if ( p2m->access_required )
> -        {
> -            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> -                                  "no vm_event listener VCPU %d, dom %d\n",
> -                                  v->vcpu_id, d->domain_id);
> -            domain_crash(v->domain);
> -            return 0;
> -        }
> -        else
> -        {
> -            gfn_lock(p2m, gfn, 0);
> -            mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> -            if ( p2ma != p2m_access_n2rwx )
> -            {
> -                /* A listener is not required, so clear the access
> -                 * restrictions.  This set must succeed: we have the
> -                 * gfn locked and just did a successful get_entry(). */
> -                rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> -                                    p2mt, p2m_access_rwx, -1);
> -                ASSERT(rc == 0);
> -            }
> -            gfn_unlock(p2m, gfn, 0);
> -            return 1;
> -        }
> -    }
> -
> -    *req_ptr = NULL;
> -    req = xzalloc(vm_event_request_t);
> -    if ( req )
> -    {
> -        *req_ptr = req;
> -
> -        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> -        req->u.mem_access.gfn = gfn;
> -        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
> -        if ( npfec.gla_valid )
> -        {
> -            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> -            req->u.mem_access.gla = gla;
> -
> -            if ( npfec.kind == npfec_kind_with_gla )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> -            else if ( npfec.kind == npfec_kind_in_gpt )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> -        }
> -        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> -        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> -        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> -    }
> -
> -    /* Return whether vCPU pause is required (aka. sync event) */
> -    return (p2ma != p2m_access_n2rwx);
> -}
> -
> -static inline
> -int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
> -                              struct p2m_domain *ap2m, p2m_access_t a,
> -                              gfn_t gfn)
> -{
> -    mfn_t mfn;
> -    p2m_type_t t;
> -    p2m_access_t old_a;
> -    unsigned int page_order;
> -    unsigned long gfn_l = gfn_x(gfn);
> -    int rc;
> -
> -    mfn = ap2m->get_entry(ap2m, gfn_l, &t, &old_a, 0, NULL, NULL);
> -
> -    /* Check host p2m if no valid entry in alternate */
> -    if ( !mfn_valid(mfn) )
> -    {
> -
> -        mfn = __get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
> -                                    P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
> -
> -        rc = -ESRCH;
> -        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
> -            return rc;
> -
> -        /* If this is a superpage, copy that first */
> -        if ( page_order != PAGE_ORDER_4K )
> -        {
> -            unsigned long mask = ~((1UL << page_order) - 1);
> -            unsigned long gfn2_l = gfn_l & mask;
> -            mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
> -
> -            rc = ap2m->set_entry(ap2m, gfn2_l, mfn2, page_order, t, old_a, 1);
> -            if ( rc )
> -                return rc;
> -        }
> -    }
> -
> -    return ap2m->set_entry(ap2m, gfn_l, mfn, PAGE_ORDER_4K, t, a,
> -                         (current->domain != d));
> -}
> -
> -static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
> -                          struct p2m_domain *ap2m, p2m_access_t a,
> -                          gfn_t gfn)
> -{
> -    int rc = 0;
> -
> -    if ( ap2m )
> -    {
> -        rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn);
> -        /* If the corresponding mfn is invalid we will want to just skip it */
> -        if ( rc == -ESRCH )
> -            rc = 0;
> -    }
> -    else
> -    {
> -        mfn_t mfn;
> -        p2m_access_t _a;
> -        p2m_type_t t;
> -        unsigned long gfn_l = gfn_x(gfn);
> -
> -        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
> -        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
> -    }
> -
> -    return rc;
> -}
> -
> -static bool xenmem_access_to_p2m_access(struct p2m_domain *p2m,
> -                                        xenmem_access_t xaccess,
> -                                        p2m_access_t *paccess)
> -{
> -    static const p2m_access_t memaccess[] = {
> -#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> -        ACCESS(n),
> -        ACCESS(r),
> -        ACCESS(w),
> -        ACCESS(rw),
> -        ACCESS(x),
> -        ACCESS(rx),
> -        ACCESS(wx),
> -        ACCESS(rwx),
> -        ACCESS(rx2rw),
> -        ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    switch ( xaccess )
> -    {
> -    case 0 ... ARRAY_SIZE(memaccess) - 1:
> -        *paccess = memaccess[xaccess];
> -        break;
> -    case XENMEM_access_default:
> -        *paccess = p2m->default_access;
> -        break;
> -    default:
> -        return false;
> -    }
> -
> -    return true;
> -}
> -
> -/*
> - * Set access type for a region of gfns.
> - * If gfn == INVALID_GFN, sets the default access type.
> - */
> -long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> -                        uint32_t start, uint32_t mask, xenmem_access_t access,
> -                        unsigned int altp2m_idx)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> -    p2m_access_t a;
> -    unsigned long gfn_l;
> -    long rc = 0;
> -
> -    /* altp2m view 0 is treated as the hostp2m */
> -    if ( altp2m_idx )
> -    {
> -        if ( altp2m_idx >= MAX_ALTP2M ||
> -             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> -            return -EINVAL;
> -
> -        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> -    }
> -
> -    if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> -        return -EINVAL;
> -
> -    /* If request to set default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        p2m->default_access = a;
> -        return 0;
> -    }
> -
> -    p2m_lock(p2m);
> -    if ( ap2m )
> -        p2m_lock(ap2m);
> -
> -    for ( gfn_l = gfn_x(gfn) + start; nr > start; ++gfn_l )
> -    {
> -        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> -
> -        if ( rc )
> -            break;
> -
> -        /* Check for continuation if it's not the last iteration. */
> -        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> -        {
> -            rc = start;
> -            break;
> -        }
> -    }
> -
> -    if ( ap2m )
> -        p2m_unlock(ap2m);
> -    p2m_unlock(p2m);
> -
> -    return rc;
> -}
> -
> -long p2m_set_mem_access_multi(struct domain *d,
> -                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> -                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> -                              uint32_t nr, uint32_t start, uint32_t mask,
> -                              unsigned int altp2m_idx)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> -    long rc = 0;
> -
> -    /* altp2m view 0 is treated as the hostp2m */
> -    if ( altp2m_idx )
> -    {
> -        if ( altp2m_idx >= MAX_ALTP2M ||
> -             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> -            return -EINVAL;
> -
> -        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> -    }
> -
> -    p2m_lock(p2m);
> -    if ( ap2m )
> -        p2m_lock(ap2m);
> -
> -    while ( start < nr )
> -    {
> -        p2m_access_t a;
> -        uint8_t access;
> -        uint64_t gfn_l;
> -
> -        if ( copy_from_guest_offset(&gfn_l, pfn_list, start, 1) ||
> -             copy_from_guest_offset(&access, access_list, start, 1) )
> -        {
> -            rc = -EFAULT;
> -            break;
> -        }
> -
> -        if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> -        {
> -            rc = -EINVAL;
> -            break;
> -        }
> -
> -        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> -
> -        if ( rc )
> -            break;
> -
> -        /* Check for continuation if it's not the last iteration. */
> -        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> -        {
> -            rc = start;
> -            break;
> -        }
> -    }
> -
> -    if ( ap2m )
> -        p2m_unlock(ap2m);
> -    p2m_unlock(p2m);
> -
> -    return rc;
> -}
> -
> -/*
> - * Get access type for a gfn.
> - * If gfn == INVALID_GFN, gets the default access type.
> - */
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    p2m_type_t t;
> -    p2m_access_t a;
> -    mfn_t mfn;
> -
> -    static const xenmem_access_t memaccess[] = {
> -#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> -            ACCESS(n),
> -            ACCESS(r),
> -            ACCESS(w),
> -            ACCESS(rw),
> -            ACCESS(x),
> -            ACCESS(rx),
> -            ACCESS(wx),
> -            ACCESS(rwx),
> -            ACCESS(rx2rw),
> -            ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    /* If request to get default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        *access = memaccess[p2m->default_access];
> -        return 0;
> -    }
> -
> -    gfn_lock(p2m, gfn, 0);
> -    mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
> -    gfn_unlock(p2m, gfn, 0);
> -
> -    if ( mfn_eq(mfn, INVALID_MFN) )
> -        return -ESRCH;
> -
> -    if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
> -        return -ERANGE;
> -
> -    *access =  memaccess[a];
> -    return 0;
> -}
> -
>  static struct p2m_domain *
>  p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
>  {
> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
> index 1e88d67..8d8bc4a 100644
> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -18,7 +18,8 @@
>   * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>
> -#include <asm/p2m.h>
> +#include <xen/sched.h>
> +#include <xen/mem_access.h>
>  #include <asm/vm_event.h>
>
>  /* Implicitly serialized by the domctl lock. */
> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
> index 565a320..19f63bb 100644
> --- a/xen/common/mem_access.c
> +++ b/xen/common/mem_access.c
> @@ -24,8 +24,8 @@
>  #include <xen/guest_access.h>
>  #include <xen/hypercall.h>
>  #include <xen/vm_event.h>
> +#include <xen/mem_access.h>
>  #include <public/memory.h>
> -#include <asm/p2m.h>
>  #include <xsm/xsm.h>
>
>  int mem_access_memop(unsigned long cmd,
> diff --git a/xen/include/asm-arm/mem_access.h b/xen/include/asm-arm/mem_access.h
> new file mode 100644
> index 0000000..3a155f8
> --- /dev/null
> +++ b/xen/include/asm-arm/mem_access.h
> @@ -0,0 +1,53 @@
> +/*
> + * mem_access.h: architecture specific mem_access handling routines
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef _ASM_ARM_MEM_ACCESS_H
> +#define _ASM_ARM_MEM_ACCESS_H
> +
> +static inline
> +bool p2m_mem_access_emulate_check(struct vcpu *v,
> +                                  const vm_event_response_t *rsp)
> +{
> +    /* Not supported on ARM. */
> +    return 0;
> +}
> +
> +/* vm_event and mem_access are supported on any ARM guest */
> +static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> +{
> +    return 1;
> +}
> +
> +/*
> + * Send mem event based on the access. Boolean return value indicates if trap
> + * needs to be injected into guest.
> + */
> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
> +
> +struct page_info*
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v);
> +
> +#endif /* _ASM_ARM_MEM_ACCESS_H */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index fdb6b47..2b22e9a 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -4,6 +4,7 @@
>  #include <xen/mm.h>
>  #include <xen/radix-tree.h>
>  #include <xen/rwlock.h>
> +#include <xen/mem_access.h>
>  #include <public/vm_event.h> /* for vm_event_response_t */
>  #include <public/memory.h>
>  #include <xen/p2m-common.h>
> @@ -139,14 +140,6 @@ typedef enum {
>                               p2m_to_mask(p2m_map_foreign)))
>
>  static inline
> -bool p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp)
> -{
> -    /* Not supported on ARM. */
> -    return 0;
> -}
> -
> -static inline
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  {
>      /* Not supported on ARM. */
> @@ -343,22 +336,26 @@ static inline int get_page_and_type(struct page_info *page,
>  /* get host p2m table */
>  #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>
> -/* vm_event and mem_access are supported on any ARM guest */
> -static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> -{
> -    return 1;
> -}
> -
>  static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
>  {
>      return 1;
>  }
>
>  /*
> - * Send mem event based on the access. Boolean return value indicates if trap
> - * needs to be injected into guest.
> + * Return the start of the next mapping based on the order of the
> + * current one.
>   */
> -bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
> +static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
> +{
> +    /*
> +     * The order corresponds to the order of the mapping (or invalid
> +     * range) in the page table. So we need to align the GFN before
> +     * incrementing.
> +     */
> +    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
> +
> +    return gfn_add(gfn, 1UL << order);
> +}
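
A standalone check of the align-then-step arithmetic above, for a 2M
mapping (order 9) and an arbitrary GFN inside it:

    #include <assert.h>

    int main(void)
    {
        unsigned int order = 9;
        unsigned long gfn = 0x345;
        unsigned long next = (gfn & ~((1UL << order) - 1))
                             + (1UL << order);

        assert(next == 0x400);  /* 0x345 aligns to 0x200, steps past it */
        return 0;
    }
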
>
>  #endif /* _XEN_P2M_H */
>
> diff --git a/xen/include/asm-x86/mem_access.h b/xen/include/asm-x86/mem_access.h
> new file mode 100644
> index 0000000..9f7b409
> --- /dev/null
> +++ b/xen/include/asm-x86/mem_access.h
> @@ -0,0 +1,61 @@
> +/******************************************************************************
> + * include/asm-x86/mem_access.h
> + *
> + * Memory access support.
> + *
> + * Copyright (c) 2011 GridCentric Inc. (Andres Lagar-Cavilla)
> + * Copyright (c) 2007 Advanced Micro Devices (Wei Huang)
> + * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
> + * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
> + * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ASM_X86_MEM_ACCESS_H__
> +#define __ASM_X86_MEM_ACCESS_H__
> +
> +/*
> + * Setup vm_event request based on the access (gla is -1ull if not available).
> + * Handles the rx2rw conversion. Boolean return value indicates if event type
> + * is synchronous (aka. requires vCPU pause). If the req_ptr has been populated,
> + * then the caller should use monitor_traps to send the event on the MONITOR
> + * ring. Once having released get_gfn* locks caller must also xfree the
> + * request.
> + */
> +bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> +                            struct npfec npfec,
> +                            vm_event_request_t **req_ptr);
> +
> +/* Check for emulation and mark vcpu for skipping one instruction
> + * upon rescheduling if required. */
> +bool p2m_mem_access_emulate_check(struct vcpu *v,
> +                                  const vm_event_response_t *rsp);
> +
> +/* Sanity check for mem_access hardware support */
> +static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> +{
> +    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
> +}
> +
> +#endif /*__ASM_X86_MEM_ACCESS_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 7035860..8964e90 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -29,6 +29,7 @@
>  #include <xen/config.h>
>  #include <xen/paging.h>
>  #include <xen/p2m-common.h>
> +#include <xen/mem_access.h>
>  #include <asm/mem_sharing.h>
>  #include <asm/page.h>    /* for pagetable_t */
>
> @@ -663,29 +664,6 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
>  /* Resume normal operation (in case a domain was paused) */
>  void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp);
>
> -/*
> - * Setup vm_event request based on the access (gla is -1ull if not available).
> - * Handles the rx2rw conversion. Boolean return value indicates if event type
> - * is synchronous (aka. requires vCPU pause). If the req_ptr has been populated,
> - * then the caller should use monitor_traps to send the event on the MONITOR
> - * ring. Once having released get_gfn* locks caller must also xfree the
> - * request.
> - */
> -bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> -                            struct npfec npfec,
> -                            vm_event_request_t **req_ptr);
> -
> -/* Check for emulation and mark vcpu for skipping one instruction
> - * upon rescheduling if required. */
> -bool p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp);
> -
> -/* Sanity check for mem_access hardware support */
> -static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> -{
> -    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
> -}
> -
>  /*
>   * Internal functions, only called by other p2m code
>   */
> diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
> index da36e07..5ab34c1 100644
> --- a/xen/include/xen/mem_access.h
> +++ b/xen/include/xen/mem_access.h
> @@ -19,29 +19,78 @@
>   * along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>
> -#ifndef _XEN_ASM_MEM_ACCESS_H
> -#define _XEN_ASM_MEM_ACCESS_H
> +#ifndef _XEN_MEM_ACCESS_H
> +#define _XEN_MEM_ACCESS_H
>
> +#include <xen/types.h>
> +#include <xen/mm.h>
>  #include <public/memory.h>
> -#include <asm/p2m.h>
> +#include <public/vm_event.h>
> +#include <asm/mem_access.h>
>
> -#ifdef CONFIG_HAS_MEM_ACCESS
> +/*
> + * Additional access types, which are used to further restrict
> + * the permissions given by the p2m_type_t memory type.  Violations
> + * caused by p2m_access_t restrictions are sent to the vm_event
> + * interface.
> + *
> + * The access permissions are soft state: when any ambiguous change of page
> + * type or use occurs, or when pages are flushed, swapped, or at any other
> + * convenient time, the access permissions can get reset to the p2m_domain
> + * default.
> + */
> +typedef enum {
> +    /* Code uses bottom three bits with bitmask semantics */
> +    p2m_access_n     = 0, /* No access allowed. */
> +    p2m_access_r     = 1 << 0,
> +    p2m_access_w     = 1 << 1,
> +    p2m_access_x     = 1 << 2,
> +    p2m_access_rw    = p2m_access_r | p2m_access_w,
> +    p2m_access_rx    = p2m_access_r | p2m_access_x,
> +    p2m_access_wx    = p2m_access_w | p2m_access_x,
> +    p2m_access_rwx   = p2m_access_r | p2m_access_w | p2m_access_x,
> +
> +    p2m_access_rx2rw = 8, /* Special: page goes from RX to RW on write */
> +    p2m_access_n2rwx = 9, /* Special: page goes from N to RWX on access, *
> +                           * generates an event but does not pause the
> +                           * vcpu */
> +
> +    /* NOTE: Assumed to be only 4 bits right now on x86. */
> +} p2m_access_t;
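
The "bitmask semantics" note above is load-bearing: the r/w/x types
compose in the low three bits, while rx2rw and n2rwx are ordinary
enumerator values (8 and 9) that must be matched exactly rather than
treated as flag combinations — masking n2rwx (9) would alias
p2m_access_r. A standalone check, with the p2m_access_t definition
above in scope:

    #include <assert.h>

    int main(void)
    {
        assert(p2m_access_rw  == (p2m_access_r | p2m_access_w));
        assert(p2m_access_rwx == (p2m_access_rw | p2m_access_x));

        /* rx2rw (8) has no r/w/x bits set at all. */
        assert((p2m_access_rx2rw & p2m_access_rwx) == 0);
        return 0;
    }
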
> +
> +/*
> + * Set access type for a region of gfns.
> + * If gfn == INVALID_GFN, sets the default access type.
> + */
> +long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access,
> +                        unsigned int altp2m_idx);
>
> +long p2m_set_mem_access_multi(struct domain *d,
> +                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> +                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> +                              uint32_t nr, uint32_t start, uint32_t mask,
> +                              unsigned int altp2m_idx);
> +
> +/*
> + * Get access type for a gfn.
> + * If gfn == INVALID_GFN, gets the default access type.
> + */
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
> +
> +#ifdef CONFIG_HAS_MEM_ACCESS
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
> -
>  #else
> -
>  static inline
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
>  {
>      return -ENOSYS;
>  }
> +#endif /* CONFIG_HAS_MEM_ACCESS */
>
> -#endif /* HAS_MEM_ACCESS */
> -
> -#endif /* _XEN_ASM_MEM_ACCESS_H */
> +#endif /* _XEN_MEM_ACCESS_H */
>
>  /*
>   * Local variables:
> diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
> index 3be1e91..8cd5a6b 100644
> --- a/xen/include/xen/p2m-common.h
> +++ b/xen/include/xen/p2m-common.h
> @@ -1,38 +1,6 @@
>  #ifndef _XEN_P2M_COMMON_H
>  #define _XEN_P2M_COMMON_H
>
> -#include <public/vm_event.h>
> -
> -/*
> - * Additional access types, which are used to further restrict
> - * the permissions given by the p2m_type_t memory type.  Violations
> - * caused by p2m_access_t restrictions are sent to the vm_event
> - * interface.
> - *
> - * The access permissions are soft state: when any ambiguous change of page
> - * type or use occurs, or when pages are flushed, swapped, or at any other
> - * convenient time, the access permissions can get reset to the p2m_domain
> - * default.
> - */
> -typedef enum {
> -    /* Code uses bottom three bits with bitmask semantics */
> -    p2m_access_n     = 0, /* No access allowed. */
> -    p2m_access_r     = 1 << 0,
> -    p2m_access_w     = 1 << 1,
> -    p2m_access_x     = 1 << 2,
> -    p2m_access_rw    = p2m_access_r | p2m_access_w,
> -    p2m_access_rx    = p2m_access_r | p2m_access_x,
> -    p2m_access_wx    = p2m_access_w | p2m_access_x,
> -    p2m_access_rwx   = p2m_access_r | p2m_access_w | p2m_access_x,
> -
> -    p2m_access_rx2rw = 8, /* Special: page goes from RX to RW on write */
> -    p2m_access_n2rwx = 9, /* Special: page goes from N to RWX on access, *
> -                           * generates an event but does not pause the
> -                           * vcpu */
> -
> -    /* NOTE: Assumed to be only 4 bits right now on x86. */
> -} p2m_access_t;
> -
>  /* Map MMIO regions in the p2m: start_gfn and nr describe the range in
>   *  * the guest physical address space to map, starting from the machine
>   *   * frame number mfn. */
> @@ -45,24 +13,4 @@ int unmap_mmio_regions(struct domain *d,
>                         unsigned long nr,
>                         mfn_t mfn);
>
> -/*
> - * Set access type for a region of gfns.
> - * If gfn == INVALID_GFN, sets the default access type.
> - */
> -long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> -                        uint32_t start, uint32_t mask, xenmem_access_t access,
> -                        unsigned int altp2m_idx);
> -
> -long p2m_set_mem_access_multi(struct domain *d,
> -                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> -                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> -                              uint32_t nr, uint32_t start, uint32_t mask,
> -                              unsigned int altp2m_idx);
> -
> -/*
> - * Get access type for a gfn.
> - * If gfn == INVALID_GFN, gets the default access type.
> - */
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
> -
>  #endif /* _XEN_P2M_COMMON_H */
> --
> 2.10.2
>


* Re: [PATCH v2 2/2] p2m: split mem_access into separate files
  2017-01-03 15:31   ` Tamas K Lengyel
@ 2017-01-11 16:33     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 24+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-01-11 16:33 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Andrew Cooper, Julien Grall,
	Jan Beulich, xen-devel

On Tue, Jan 03, 2017 at 08:31:25AM -0700, Tamas K Lengyel wrote:
> On Fri, Dec 9, 2016 at 12:59 PM, Tamas K Lengyel
> <tamas.lengyel@zentific.com> wrote:
> > This patch relocates mem_access components that are currently mixed with p2m
> > code into separate files. This better aligns the code with similar subsystems,
> > such as mem_sharing and mem_paging, which are already in separate files. There
> > are no code-changes introduced, the patch is mechanical code movement.
> >
> > On ARM we also relocate the static inline gfn_next_boundary function to p2m.h
> > as it is a function the mem_access code needs access to.
> >
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> > Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> 
> Acked-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>



* Re: [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current
  2016-12-09 19:59 [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2017-01-03 15:29 ` Tamas K Lengyel
@ 2017-01-30 21:11 ` Stefano Stabellini
  3 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-01-30 21:11 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel, Julien Grall, Stefano Stabellini

On Fri, 9 Dec 2016, Tamas K Lengyel wrote:
> The only caller of this function is get_page_from_gva which already takes
> a vcpu pointer as input. Pass this along to make the function in-line with
> its intended use-case.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index cc5634b..837be1d 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1461,7 +1461,8 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>   * we indeed found a conflicting mem_access setting.
>   */
>  static struct page_info*
> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v)
>  {
>      long rc;
>      paddr_t ipa;
> @@ -1470,7 +1471,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>      xenmem_access_t xma;
>      p2m_type_t t;
>      struct page_info *page = NULL;
> -    struct p2m_domain *p2m = &current->domain->arch.p2m;
> +    struct p2m_domain *p2m = &v->domain->arch.p2m;
>  
>      rc = gva_to_ipa(gva, &ipa, flag);
>      if ( rc < 0 )
> @@ -1482,7 +1483,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
>       */
> -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
> +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
>      if ( rc < 0 )
>          goto err;
>  
> @@ -1546,7 +1547,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>  
>      page = mfn_to_page(mfn_x(mfn));
>  
> -    if ( unlikely(!get_page(page, current->domain)) )
> +    if ( unlikely(!get_page(page, v->domain)) )
>          page = NULL;
>  
>  err:
> @@ -1587,7 +1588,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>  
>  err:
>      if ( !page && p2m->mem_access_enabled )
> -        page = p2m_mem_access_check_and_get_page(va, flags);
> +        page = p2m_mem_access_check_and_get_page(va, flags, v);
>  
>      p2m_read_unlock(p2m);
>  
> -- 
> 2.10.2
> 


* Re: [PATCH v2 2/2] p2m: split mem_access into separate files
  2016-12-09 19:59 ` [PATCH v2 2/2] p2m: split mem_access into separate files Tamas K Lengyel
  2017-01-03 15:31   ` Tamas K Lengyel
@ 2017-01-30 21:11   ` Stefano Stabellini
  1 sibling, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-01-30 21:11 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Andrew Cooper, Julien Grall,
	Jan Beulich, xen-devel

On Fri, 9 Dec 2016, Tamas K Lengyel wrote:
> This patch relocates mem_access components that are currently mixed with p2m
> code into separate files. This better aligns the code with similar subsystems,
> such as mem_sharing and mem_paging, which are already in separate files. There
> are no code-changes introduced, the patch is mechanical code movement.
> 
> On ARM we also relocate the static inline gfn_next_boundary function to p2m.h
> as it is a function the mem_access code needs access to.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

I'll commit both patches shortly.

> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> 
> v2: Don't move ARM radix tree functions
>     Include asm/mem_access.h in xen/mem_access.h
> ---
>  MAINTAINERS                      |   2 +
>  xen/arch/arm/Makefile            |   1 +
>  xen/arch/arm/mem_access.c        | 431 ++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/p2m.c               | 414 +----------------------------------
>  xen/arch/arm/traps.c             |   1 +
>  xen/arch/x86/mm/Makefile         |   1 +
>  xen/arch/x86/mm/mem_access.c     | 462 +++++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/mm/p2m.c            | 421 -----------------------------------
>  xen/arch/x86/vm_event.c          |   3 +-
>  xen/common/mem_access.c          |   2 +-
>  xen/include/asm-arm/mem_access.h |  53 +++++
>  xen/include/asm-arm/p2m.h        |  31 ++-
>  xen/include/asm-x86/mem_access.h |  61 ++++++
>  xen/include/asm-x86/p2m.h        |  24 +-
>  xen/include/xen/mem_access.h     |  67 +++++-
>  xen/include/xen/p2m-common.h     |  52 -----
>  16 files changed, 1089 insertions(+), 937 deletions(-)
>  create mode 100644 xen/arch/arm/mem_access.c
>  create mode 100644 xen/arch/x86/mm/mem_access.c
>  create mode 100644 xen/include/asm-arm/mem_access.h
>  create mode 100644 xen/include/asm-x86/mem_access.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index f0d0202..fb26be3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -402,6 +402,8 @@ S:	Supported
>  F:	tools/tests/xen-access
>  F:	xen/arch/*/monitor.c
>  F:	xen/arch/*/vm_event.c
> +F:	xen/arch/arm/mem_access.c
> +F:	xen/arch/x86/mm/mem_access.c
>  F:	xen/arch/x86/hvm/monitor.c
>  F:	xen/common/mem_access.c
>  F:	xen/common/monitor.c
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index da39d39..b095e8a 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -24,6 +24,7 @@ obj-y += io.o
>  obj-y += irq.o
>  obj-y += kernel.o
>  obj-$(CONFIG_LIVEPATCH) += livepatch.o
> +obj-y += mem_access.o
>  obj-y += mm.o
>  obj-y += monitor.o
>  obj-y += p2m.o
> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
> new file mode 100644
> index 0000000..a6e5bcd
> --- /dev/null
> +++ b/xen/arch/arm/mem_access.c
> @@ -0,0 +1,431 @@
> +/*
> + * arch/arm/mem_access.c
> + *
> + * Architecture-specific mem_access handling routines
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public
> + * License v2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public
> + * License along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/config.h>
> +#include <xen/mem_access.h>
> +#include <xen/monitor.h>
> +#include <xen/sched.h>
> +#include <xen/vm_event.h>
> +#include <public/vm_event.h>
> +#include <asm/event.h>
> +
> +static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> +                                xenmem_access_t *access)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    void *i;
> +    unsigned int index;
> +
> +    static const xenmem_access_t memaccess[] = {
> +#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> +            ACCESS(n),
> +            ACCESS(r),
> +            ACCESS(w),
> +            ACCESS(rw),
> +            ACCESS(x),
> +            ACCESS(rx),
> +            ACCESS(wx),
> +            ACCESS(rwx),
> +            ACCESS(rx2rw),
> +            ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    ASSERT(p2m_is_locked(p2m));
> +
> +    /* If no setting was ever set, just return rwx. */
> +    if ( !p2m->mem_access_enabled )
> +    {
> +        *access = XENMEM_access_rwx;
> +        return 0;
> +    }
> +
> +    /* If request to get default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        *access = memaccess[p2m->default_access];
> +        return 0;
> +    }
> +
> +    i = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
> +
> +    if ( !i )
> +    {
> +        /*
> +         * No setting was found in the Radix tree. Check if the
> +         * entry exists in the page-tables.
> +         */
> +        mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
> +
> +        if ( mfn_eq(mfn, INVALID_MFN) )
> +            return -ESRCH;
> +
> +        /* If entry exists then it's rwx. */
> +        *access = XENMEM_access_rwx;
> +    }
> +    else
> +    {
> +        /* Setting was found in the Radix tree. */
> +        index = radix_tree_ptr_to_int(i);
> +        if ( index >= ARRAY_SIZE(memaccess) )
> +            return -ERANGE;
> +
> +        *access = memaccess[index];
> +    }
> +
> +    return 0;
> +}
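
For context: the lookup above only finds what the set-access path
stored, and per the v2 changelog note the radix-tree helpers themselves
stay in arch/arm/p2m.c. Schematically, the stored form is an integer
index packed into a pointer (illustrative, not the exact call site):

    /* On the set path, roughly: */
    rc = radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn),
                           radix_tree_int_to_ptr(a));

    /* ...which is what radix_tree_ptr_to_int() decodes above. */
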
> +
> +/*
> + * If mem_access is in use it might have been the reason why get_page_from_gva
> + * failed to fetch the page, as it uses the MMU for the permission checking.
> + * Only in these cases we do a software-based type check and fetch the page if
> + * we indeed found a conflicting mem_access setting.
> + */
> +struct page_info*
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v)
> +{
> +    long rc;
> +    paddr_t ipa;
> +    gfn_t gfn;
> +    mfn_t mfn;
> +    xenmem_access_t xma;
> +    p2m_type_t t;
> +    struct page_info *page = NULL;
> +    struct p2m_domain *p2m = &v->domain->arch.p2m;
> +
> +    rc = gva_to_ipa(gva, &ipa, flag);
> +    if ( rc < 0 )
> +        goto err;
> +
> +    gfn = _gfn(paddr_to_pfn(ipa));
> +
> +    /*
> +     * We do this first as this is faster in the default case when no
> +     * permission is set on the page.
> +     */
> +    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
> +    if ( rc < 0 )
> +        goto err;
> +
> +    /* Let's check if mem_access limited the access. */
> +    switch ( xma )
> +    {
> +    default:
> +    case XENMEM_access_rwx:
> +    case XENMEM_access_rw:
> +        /*
> +         * If mem_access contains no rw perm restrictions at all then the original
> +         * fault was correct.
> +         */
> +        goto err;
> +    case XENMEM_access_n2rwx:
> +    case XENMEM_access_n:
> +    case XENMEM_access_x:
> +        /*
> +         * If no r/w is permitted by mem_access, this was a fault caused by mem_access.
> +         */
> +        break;
> +    case XENMEM_access_wx:
> +    case XENMEM_access_w:
> +        /*
> +         * If this was a read then it was because of mem_access, but if it was
> +         * a write then the original get_page_from_gva fault was correct.
> +         */
> +        if ( flag == GV2M_READ )
> +            break;
> +        else
> +            goto err;
> +    case XENMEM_access_rx2rw:
> +    case XENMEM_access_rx:
> +    case XENMEM_access_r:
> +        /*
> +         * If this was a write then it was because of mem_access, but if it was
> +         * a read then the original get_page_from_gva fault was correct.
> +         */
> +        if ( flag == GV2M_WRITE )
> +            break;
> +        else
> +            goto err;
> +    }
> +
> +    /*
> +     * We had a mem_access permission limiting the access, but the page type
> +     * could also be limiting, so we need to check that as well.
> +     */
> +    mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +        goto err;
> +
> +    if ( !mfn_valid(mfn_x(mfn)) )
> +        goto err;
> +
> +    /*
> +     * Base type doesn't allow r/w
> +     */
> +    if ( t != p2m_ram_rw )
> +        goto err;
> +
> +    page = mfn_to_page(mfn_x(mfn));
> +
> +    if ( unlikely(!get_page(page, v->domain)) )
> +        page = NULL;
> +
> +err:
> +    return page;
> +}
> +
> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
> +{
> +    int rc;
> +    bool_t violation;
> +    xenmem_access_t xma;
> +    vm_event_request_t *req;
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
> +
> +    /* Mem_access is not in use. */
> +    if ( !p2m->mem_access_enabled )
> +        return true;
> +
> +    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
> +    if ( rc )
> +        return true;
> +
> +    /* Now check for mem_access violation. */
> +    switch ( xma )
> +    {
> +    case XENMEM_access_rwx:
> +        violation = false;
> +        break;
> +    case XENMEM_access_rw:
> +        violation = npfec.insn_fetch;
> +        break;
> +    case XENMEM_access_wx:
> +        violation = npfec.read_access;
> +        break;
> +    case XENMEM_access_rx:
> +    case XENMEM_access_rx2rw:
> +        violation = npfec.write_access;
> +        break;
> +    case XENMEM_access_x:
> +        violation = npfec.read_access || npfec.write_access;
> +        break;
> +    case XENMEM_access_w:
> +        violation = npfec.read_access || npfec.insn_fetch;
> +        break;
> +    case XENMEM_access_r:
> +        violation = npfec.write_access || npfec.insn_fetch;
> +        break;
> +    default:
> +    case XENMEM_access_n:
> +    case XENMEM_access_n2rwx:
> +        violation = true;
> +        break;
> +    }
> +
> +    if ( !violation )
> +        return true;
> +
> +    /* First, handle rx2rw and n2rwx conversion automatically. */
> +    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
> +    {
> +        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> +                                0, ~0, XENMEM_access_rw, 0);
> +        return false;
> +    }
> +    else if ( xma == XENMEM_access_n2rwx )
> +    {
> +        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> +                                0, ~0, XENMEM_access_rwx, 0);
> +    }
> +
> +    /* Otherwise, check if there is a vm_event monitor subscriber */
> +    if ( !vm_event_check_ring(&v->domain->vm_event->monitor) )
> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                                  "no vm_event listener VCPU %d, dom %d\n",
> +                                  v->vcpu_id, v->domain->domain_id);
> +            domain_crash(v->domain);
> +        }
> +        else
> +        {
> +            /* n2rwx was already handled */
> +            if ( xma != XENMEM_access_n2rwx )
> +            {
> +                /* A listener is not required, so clear the access
> +                 * restrictions. */
> +                rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> +                                        0, ~0, XENMEM_access_rwx, 0);
> +            }
> +        }
> +
> +        /* No need to reinject */
> +        return false;
> +    }
> +
> +    req = xzalloc(vm_event_request_t);
> +    if ( req )
> +    {
> +        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> +
> +        /* Send request to mem access subscriber */
> +        req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
> +        req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
> +        if ( npfec.gla_valid )
> +        {
> +            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> +            req->u.mem_access.gla = gla;
> +
> +            if ( npfec.kind == npfec_kind_with_gla )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> +            else if ( npfec.kind == npfec_kind_in_gpt )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> +        }
> +        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> +        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> +        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> +
> +        if ( monitor_traps(v, (xma != XENMEM_access_n2rwx), req) < 0 )
> +            domain_crash(v->domain);
> +
> +        xfree(req);
> +    }
> +
> +    return false;
> +}
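
Worth noting against the x86 variant earlier in this patch: the ARM
version sends the event itself (monitor_traps above) instead of handing
a request back, and its boolean answers a different question — whether
the abort handler should continue with normal fault handling. A
schematic call site (the real one is in xen/arch/arm/traps.c;
simplified):

    if ( !p2m_mem_access_check(gpa, gva, npfec) )
        return;  /* event sent or access relaxed: nothing to inject */

    /* Otherwise fall through to the regular abort path. */
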
> +
> +/*
> + * Set access type for a region of gfns.
> + * If gfn == INVALID_GFN, sets the default access type.
> + */
> +long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access,
> +                        unsigned int altp2m_idx)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_access_t a;
> +    unsigned int order;
> +    long rc = 0;
> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    switch ( access )
> +    {
> +    case 0 ... ARRAY_SIZE(memaccess) - 1:
> +        a = memaccess[access];
> +        break;
> +    case XENMEM_access_default:
> +        a = p2m->default_access;
> +        break;
> +    default:
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Flip mem_access_enabled to true when a permission is set, so as to
> +     * allocating or inserting super-pages.
> +     */
> +    p2m->mem_access_enabled = true;
> +
> +    /* If request to set default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        p2m->default_access = a;
> +        return 0;
> +    }
> +
> +    p2m_write_lock(p2m);
> +
> +    for ( gfn = gfn_add(gfn, start); nr > start;
> +          gfn = gfn_next_boundary(gfn, order) )
> +    {
> +        p2m_type_t t;
> +        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
> +
> +        if ( !mfn_eq(mfn, INVALID_MFN) )
> +        {
> +            order = 0;
> +            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, a);
> +            if ( rc )
> +                break;
> +        }
> +
> +        start += gfn_x(gfn_next_boundary(gfn, order)) - gfn_x(gfn);
> +        /* Check for continuation if it is not the last iteration */
> +        if ( nr > start && !(start & mask) && hypercall_preempt_check() )
> +        {
> +            rc = start;
> +            break;
> +        }
> +    }
> +
> +    p2m_write_unlock(p2m);
> +
> +    return rc;
> +}
> +
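
The loop above encodes hypercall preemption by returning the next start index as a positive value, with 0 meaning done and a negative value meaning error. A standalone sketch of that convention, with work() and the preemption check as stand-ins for the per-gfn update and hypercall_preempt_check():

    #include <stdio.h>

    /* Stand-ins for the per-gfn work and the preemption check. */
    static int work(unsigned int i) { (void)i; return 0; }
    static int preempt_pending(void) { return 0; }

    /* 0 = done, >0 = restart index for the continuation, <0 = error. */
    static long process(unsigned int nr, unsigned int start, unsigned int mask)
    {
        for ( ; start < nr; ++start )
        {
            int rc = work(start);

            if ( rc < 0 )
                return rc;

            /* Check for continuation if it's not the last iteration. */
            if ( nr > start + 1 && !((start + 1) & mask) && preempt_pending() )
                return start + 1;
        }

        return 0;
    }

    int main(void)
    {
        printf("%ld\n", process(16, 0, 0xff)); /* prints 0: ran to completion */
        return 0;
    }
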
> +long p2m_set_mem_access_multi(struct domain *d,
> +                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> +                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> +                              uint32_t nr, uint32_t start, uint32_t mask,
> +                              unsigned int altp2m_idx)
> +{
> +    /* Not yet implemented on ARM. */
> +    return -EOPNOTSUPP;
> +}
> +
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn,
> +                       xenmem_access_t *access)
> +{
> +    int ret;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    p2m_read_lock(p2m);
> +    ret = __p2m_get_mem_access(d, gfn, access);
> +    p2m_read_unlock(p2m);
> +
> +    return ret;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 837be1d..4e7ce3d 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -7,6 +7,7 @@
>  #include <xen/vm_event.h>
>  #include <xen/monitor.h>
>  #include <xen/iocap.h>
> +#include <xen/mem_access.h>
>  #include <public/vm_event.h>
>  #include <asm/flushtlb.h>
>  #include <asm/gic.h>
> @@ -58,22 +59,6 @@ static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
>      return (level < 3) && p2m_mapping(pte);
>  }
>  
> -/*
> - * Return the start of the next mapping based on the order of the
> - * current one.
> - */
> -static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
> -{
> -    /*
> -     * The order corresponds to the order of the mapping (or invalid
> -     * range) in the page table. So we need to align the GFN before
> -     * incrementing.
> -     */
> -    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
> -
> -    return gfn_add(gfn, 1UL << order);
> -}
> -
>  static void p2m_flush_tlb(struct p2m_domain *p2m);
>  
>  /* Unlock the flush and do a P2M TLB flush if necessary */
> @@ -602,73 +587,6 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
>      return 0;
>  }
>  
> -static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> -                                xenmem_access_t *access)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    void *i;
> -    unsigned int index;
> -
> -    static const xenmem_access_t memaccess[] = {
> -#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> -            ACCESS(n),
> -            ACCESS(r),
> -            ACCESS(w),
> -            ACCESS(rw),
> -            ACCESS(x),
> -            ACCESS(rx),
> -            ACCESS(wx),
> -            ACCESS(rwx),
> -            ACCESS(rx2rw),
> -            ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    ASSERT(p2m_is_locked(p2m));
> -
> -    /* If no setting was ever set, just return rwx. */
> -    if ( !p2m->mem_access_enabled )
> -    {
> -        *access = XENMEM_access_rwx;
> -        return 0;
> -    }
> -
> -    /* If request to get default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        *access = memaccess[p2m->default_access];
> -        return 0;
> -    }
> -
> -    i = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
> -
> -    if ( !i )
> -    {
> -        /*
> -         * No setting was found in the Radix tree. Check if the
> -         * entry exists in the page-tables.
> -         */
> -        mfn_t mfn = p2m_get_entry(p2m, gfn, NULL, NULL, NULL);
> -
> -        if ( mfn_eq(mfn, INVALID_MFN) )
> -            return -ESRCH;
> -
> -        /* If entry exists then its rwx. */
> -        *access = XENMEM_access_rwx;
> -    }
> -    else
> -    {
> -        /* Setting was found in the Radix tree. */
> -        index = radix_tree_ptr_to_int(i);
> -        if ( index >= ARRAY_SIZE(memaccess) )
> -            return -ERANGE;
> -
> -        *access = memaccess[index];
> -    }
> -
> -    return 0;
> -}
> -
>  static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
>                                      p2m_access_t a)
>  {
> @@ -1454,106 +1372,6 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>      return p2m_lookup(d, gfn, NULL);
>  }
>  
> -/*
> - * If mem_access is in use it might have been the reason why get_page_from_gva
> - * failed to fetch the page, as it uses the MMU for the permission checking.
> - * Only in these cases we do a software-based type check and fetch the page if
> - * we indeed found a conflicting mem_access setting.
> - */
> -static struct page_info*
> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> -                                  const struct vcpu *v)
> -{
> -    long rc;
> -    paddr_t ipa;
> -    gfn_t gfn;
> -    mfn_t mfn;
> -    xenmem_access_t xma;
> -    p2m_type_t t;
> -    struct page_info *page = NULL;
> -    struct p2m_domain *p2m = &v->domain->arch.p2m;
> -
> -    rc = gva_to_ipa(gva, &ipa, flag);
> -    if ( rc < 0 )
> -        goto err;
> -
> -    gfn = _gfn(paddr_to_pfn(ipa));
> -
> -    /*
> -     * We do this first as this is faster in the default case when no
> -     * permission is set on the page.
> -     */
> -    rc = __p2m_get_mem_access(v->domain, gfn, &xma);
> -    if ( rc < 0 )
> -        goto err;
> -
> -    /* Let's check if mem_access limited the access. */
> -    switch ( xma )
> -    {
> -    default:
> -    case XENMEM_access_rwx:
> -    case XENMEM_access_rw:
> -        /*
> -         * If mem_access contains no rw perm restrictions at all then the original
> -         * fault was correct.
> -         */
> -        goto err;
> -    case XENMEM_access_n2rwx:
> -    case XENMEM_access_n:
> -    case XENMEM_access_x:
> -        /*
> -         * If no r/w is permitted by mem_access, this was a fault caused by mem_access.
> -         */
> -        break;
> -    case XENMEM_access_wx:
> -    case XENMEM_access_w:
> -        /*
> -         * If this was a read then it was because of mem_access, but if it was
> -         * a write then the original get_page_from_gva fault was correct.
> -         */
> -        if ( flag == GV2M_READ )
> -            break;
> -        else
> -            goto err;
> -    case XENMEM_access_rx2rw:
> -    case XENMEM_access_rx:
> -    case XENMEM_access_r:
> -        /*
> -         * If this was a write then it was because of mem_access, but if it was
> -         * a read then the original get_page_from_gva fault was correct.
> -         */
> -        if ( flag == GV2M_WRITE )
> -            break;
> -        else
> -            goto err;
> -    }
> -
> -    /*
> -     * We had a mem_access permission limiting the access, but the page type
> -     * could also be limiting, so we need to check that as well.
> -     */
> -    mfn = p2m_get_entry(p2m, gfn, &t, NULL, NULL);
> -    if ( mfn_eq(mfn, INVALID_MFN) )
> -        goto err;
> -
> -    if ( !mfn_valid(mfn_x(mfn)) )
> -        goto err;
> -
> -    /*
> -     * Base type doesn't allow r/w
> -     */
> -    if ( t != p2m_ram_rw )
> -        goto err;
> -
> -    page = mfn_to_page(mfn_x(mfn));
> -
> -    if ( unlikely(!get_page(page, v->domain)) )
> -        page = NULL;
> -
> -err:
> -    return page;
> -}
> -
>  struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>                                      unsigned long flags)
>  {
> @@ -1666,236 +1484,6 @@ void __init setup_virt_paging(void)
>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>  }
>  
> -bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
> -{
> -    int rc;
> -    bool_t violation;
> -    xenmem_access_t xma;
> -    vm_event_request_t *req;
> -    struct vcpu *v = current;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
> -
> -    /* Mem_access is not in use. */
> -    if ( !p2m->mem_access_enabled )
> -        return true;
> -
> -    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
> -    if ( rc )
> -        return true;
> -
> -    /* Now check for mem_access violation. */
> -    switch ( xma )
> -    {
> -    case XENMEM_access_rwx:
> -        violation = false;
> -        break;
> -    case XENMEM_access_rw:
> -        violation = npfec.insn_fetch;
> -        break;
> -    case XENMEM_access_wx:
> -        violation = npfec.read_access;
> -        break;
> -    case XENMEM_access_rx:
> -    case XENMEM_access_rx2rw:
> -        violation = npfec.write_access;
> -        break;
> -    case XENMEM_access_x:
> -        violation = npfec.read_access || npfec.write_access;
> -        break;
> -    case XENMEM_access_w:
> -        violation = npfec.read_access || npfec.insn_fetch;
> -        break;
> -    case XENMEM_access_r:
> -        violation = npfec.write_access || npfec.insn_fetch;
> -        break;
> -    default:
> -    case XENMEM_access_n:
> -    case XENMEM_access_n2rwx:
> -        violation = true;
> -        break;
> -    }
> -
> -    if ( !violation )
> -        return true;
> -
> -    /* First, handle rx2rw and n2rwx conversion automatically. */
> -    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
> -    {
> -        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> -                                0, ~0, XENMEM_access_rw, 0);
> -        return false;
> -    }
> -    else if ( xma == XENMEM_access_n2rwx )
> -    {
> -        rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> -                                0, ~0, XENMEM_access_rwx, 0);
> -    }
> -
> -    /* Otherwise, check if there is a vm_event monitor subscriber */
> -    if ( !vm_event_check_ring(&v->domain->vm_event->monitor) )
> -    {
> -        /* No listener */
> -        if ( p2m->access_required )
> -        {
> -            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> -                                  "no vm_event listener VCPU %d, dom %d\n",
> -                                  v->vcpu_id, v->domain->domain_id);
> -            domain_crash(v->domain);
> -        }
> -        else
> -        {
> -            /* n2rwx was already handled */
> -            if ( xma != XENMEM_access_n2rwx )
> -            {
> -                /* A listener is not required, so clear the access
> -                 * restrictions. */
> -                rc = p2m_set_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 1,
> -                                        0, ~0, XENMEM_access_rwx, 0);
> -            }
> -        }
> -
> -        /* No need to reinject */
> -        return false;
> -    }
> -
> -    req = xzalloc(vm_event_request_t);
> -    if ( req )
> -    {
> -        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> -
> -        /* Send request to mem access subscriber */
> -        req->u.mem_access.gfn = gpa >> PAGE_SHIFT;
> -        req->u.mem_access.offset =  gpa & ((1 << PAGE_SHIFT) - 1);
> -        if ( npfec.gla_valid )
> -        {
> -            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> -            req->u.mem_access.gla = gla;
> -
> -            if ( npfec.kind == npfec_kind_with_gla )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> -            else if ( npfec.kind == npfec_kind_in_gpt )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> -        }
> -        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> -        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> -        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> -
> -        if ( monitor_traps(v, (xma != XENMEM_access_n2rwx), req) < 0 )
> -            domain_crash(v->domain);
> -
> -        xfree(req);
> -    }
> -
> -    return false;
> -}
> -
> -/*
> - * Set access type for a region of pfns.
> - * If gfn == INVALID_GFN, sets the default access type.
> - */
> -long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> -                        uint32_t start, uint32_t mask, xenmem_access_t access,
> -                        unsigned int altp2m_idx)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    p2m_access_t a;
> -    unsigned int order;
> -    long rc = 0;
> -
> -    static const p2m_access_t memaccess[] = {
> -#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> -        ACCESS(n),
> -        ACCESS(r),
> -        ACCESS(w),
> -        ACCESS(rw),
> -        ACCESS(x),
> -        ACCESS(rx),
> -        ACCESS(wx),
> -        ACCESS(rwx),
> -        ACCESS(rx2rw),
> -        ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    switch ( access )
> -    {
> -    case 0 ... ARRAY_SIZE(memaccess) - 1:
> -        a = memaccess[access];
> -        break;
> -    case XENMEM_access_default:
> -        a = p2m->default_access;
> -        break;
> -    default:
> -        return -EINVAL;
> -    }
> -
> -    /*
> -     * Flip mem_access_enabled to true when a permission is set, as to prevent
> -     * allocating or inserting super-pages.
> -     */
> -    p2m->mem_access_enabled = true;
> -
> -    /* If request to set default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        p2m->default_access = a;
> -        return 0;
> -    }
> -
> -    p2m_write_lock(p2m);
> -
> -    for ( gfn = gfn_add(gfn, start); nr > start;
> -          gfn = gfn_next_boundary(gfn, order) )
> -    {
> -        p2m_type_t t;
> -        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
> -
> -
> -        if ( !mfn_eq(mfn, INVALID_MFN) )
> -        {
> -            order = 0;
> -            rc = __p2m_set_entry(p2m, gfn, 0, mfn, t, a);
> -            if ( rc )
> -                break;
> -        }
> -
> -        start += gfn_x(gfn_next_boundary(gfn, order)) - gfn_x(gfn);
> -        /* Check for continuation if it is not the last iteration */
> -        if ( nr > start && !(start & mask) && hypercall_preempt_check() )
> -        {
> -            rc = start;
> -            break;
> -        }
> -    }
> -
> -    p2m_write_unlock(p2m);
> -
> -    return rc;
> -}
> -
> -long p2m_set_mem_access_multi(struct domain *d,
> -                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> -                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> -                              uint32_t nr, uint32_t start, uint32_t mask,
> -                              unsigned int altp2m_idx)
> -{
> -    /* Not yet implemented on ARM. */
> -    return -EOPNOTSUPP;
> -}
> -
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn,
> -                       xenmem_access_t *access)
> -{
> -    int ret;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> -    p2m_read_lock(p2m);
> -    ret = __p2m_get_mem_access(d, gfn, access);
> -    p2m_read_unlock(p2m);
> -
> -    return ret;
> -}
> -
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 8ff73fe..f2ea083 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -32,6 +32,7 @@
>  #include <xen/domain_page.h>
>  #include <xen/perfc.h>
>  #include <xen/virtual_region.h>
> +#include <xen/mem_access.h>
>  #include <public/sched.h>
>  #include <public/xen.h>
>  #include <asm/debugger.h>
> diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
> index 9804c3a..e977dd8 100644
> --- a/xen/arch/x86/mm/Makefile
> +++ b/xen/arch/x86/mm/Makefile
> @@ -9,6 +9,7 @@ obj-y += guest_walk_3.o
>  obj-y += guest_walk_4.o
>  obj-y += mem_paging.o
>  obj-y += mem_sharing.o
> +obj-y += mem_access.o
>  
>  guest_walk_%.o: guest_walk.c Makefile
>  	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
> diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
> new file mode 100644
> index 0000000..34a994d
> --- /dev/null
> +++ b/xen/arch/x86/mm/mem_access.c
> @@ -0,0 +1,462 @@
> +/******************************************************************************
> + * arch/x86/mm/mem_access.c
> + *
> + * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
> + * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
> + * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
> + * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
> + * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/guest_access.h> /* copy_from_guest() */
> +#include <xen/mem_access.h>
> +#include <xen/vm_event.h>
> +#include <xen/event.h>
> +#include <public/vm_event.h>
> +#include <asm/p2m.h>
> +#include <asm/altp2m.h>
> +#include <asm/vm_event.h>
> +
> +#include "mm-locks.h"
> +
> +bool p2m_mem_access_emulate_check(struct vcpu *v,
> +                                  const vm_event_response_t *rsp)
> +{
> +    xenmem_access_t access;
> +    bool violation = 1;
> +    const struct vm_event_mem_access *data = &rsp->u.mem_access;
> +
> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    {
> +        switch ( access )
> +        {
> +        case XENMEM_access_n:
> +        case XENMEM_access_n2rwx:
> +        default:
> +            violation = data->flags & MEM_ACCESS_RWX;
> +            break;
> +
> +        case XENMEM_access_r:
> +            violation = data->flags & MEM_ACCESS_WX;
> +            break;
> +
> +        case XENMEM_access_w:
> +            violation = data->flags & MEM_ACCESS_RX;
> +            break;
> +
> +        case XENMEM_access_x:
> +            violation = data->flags & MEM_ACCESS_RW;
> +            break;
> +
> +        case XENMEM_access_rx:
> +        case XENMEM_access_rx2rw:
> +            violation = data->flags & MEM_ACCESS_W;
> +            break;
> +
> +        case XENMEM_access_wx:
> +            violation = data->flags & MEM_ACCESS_R;
> +            break;
> +
> +        case XENMEM_access_rw:
> +            violation = data->flags & MEM_ACCESS_X;
> +            break;
> +
> +        case XENMEM_access_rwx:
> +            violation = 0;
> +            break;
> +        }
> +    }
> +
> +    return violation;
> +}
> +
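
The switch above reduces to one rule: the response flags mark a violation iff they include a right the current setting does not grant, with rx2rw treated as rx and n2rwx as n. A small model of that rule, with stand-in flag values:

    #include <stdio.h>
    #include <stdint.h>

    #define MEM_ACCESS_R   (1 << 0)
    #define MEM_ACCESS_W   (1 << 1)
    #define MEM_ACCESS_X   (1 << 2)
    #define MEM_ACCESS_RWX (MEM_ACCESS_R | MEM_ACCESS_W | MEM_ACCESS_X)

    /* Violation iff the fault used a right the setting does not grant. */
    static int is_violation(uint32_t fault_flags, uint32_t granted)
    {
        return (fault_flags & MEM_ACCESS_RWX & ~granted) != 0;
    }

    int main(void)
    {
        /* XENMEM_access_rx grants R|X, so a write fault is a violation... */
        printf("%d\n", is_violation(MEM_ACCESS_W, MEM_ACCESS_R | MEM_ACCESS_X));
        /* ...and a read fault is not. */
        printf("%d\n", is_violation(MEM_ACCESS_R, MEM_ACCESS_R | MEM_ACCESS_X));
        return 0;
    }
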
> +bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> +                            struct npfec npfec,
> +                            vm_event_request_t **req_ptr)
> +{
> +    struct vcpu *v = current;
> +    unsigned long gfn = gpa >> PAGE_SHIFT;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = NULL;
> +    mfn_t mfn;
> +    p2m_type_t p2mt;
> +    p2m_access_t p2ma;
> +    vm_event_request_t *req;
> +    int rc;
> +
> +    if ( altp2m_active(d) )
> +        p2m = p2m_get_altp2m(v);
> +    if ( !p2m )
> +        p2m = p2m_get_hostp2m(d);
> +
> +    /* First, handle rx2rw conversion automatically.
> +     * These calls to p2m->set_entry() must succeed: we have the gfn
> +     * locked and just did a successful get_entry(). */
> +    gfn_lock(p2m, gfn, 0);
> +    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> +
> +    if ( npfec.write_access && p2ma == p2m_access_rx2rw )
> +    {
> +        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw, -1);
> +        ASSERT(rc == 0);
> +        gfn_unlock(p2m, gfn, 0);
> +        return 1;
> +    }
> +    else if ( p2ma == p2m_access_n2rwx )
> +    {
> +        ASSERT(npfec.write_access || npfec.read_access || npfec.insn_fetch);
> +        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> +                            p2mt, p2m_access_rwx, -1);
> +        ASSERT(rc == 0);
> +    }
> +    gfn_unlock(p2m, gfn, 0);
> +
> +    /* Otherwise, check if there is a memory event listener, and send the message along */
> +    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr )
> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                                  "no vm_event listener VCPU %d, dom %d\n",
> +                                  v->vcpu_id, d->domain_id);
> +            domain_crash(v->domain);
> +            return 0;
> +        }
> +        else
> +        {
> +            gfn_lock(p2m, gfn, 0);
> +            mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> +            if ( p2ma != p2m_access_n2rwx )
> +            {
> +                /* A listener is not required, so clear the access
> +                 * restrictions.  This set must succeed: we have the
> +                 * gfn locked and just did a successful get_entry(). */
> +                rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> +                                    p2mt, p2m_access_rwx, -1);
> +                ASSERT(rc == 0);
> +            }
> +            gfn_unlock(p2m, gfn, 0);
> +            return 1;
> +        }
> +    }
> +
> +    *req_ptr = NULL;
> +    req = xzalloc(vm_event_request_t);
> +    if ( req )
> +    {
> +        *req_ptr = req;
> +
> +        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> +        req->u.mem_access.gfn = gfn;
> +        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
> +        if ( npfec.gla_valid )
> +        {
> +            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> +            req->u.mem_access.gla = gla;
> +
> +            if ( npfec.kind == npfec_kind_with_gla )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> +            else if ( npfec.kind == npfec_kind_in_gpt )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> +        }
> +        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> +        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> +        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> +    }
> +
> +    /* Return whether vCPU pause is required (aka. sync event) */
> +    return (p2ma != p2m_access_n2rwx);
> +}
> +
> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
> +                              struct p2m_domain *ap2m, p2m_access_t a,
> +                              gfn_t gfn)
> +{
> +    mfn_t mfn;
> +    p2m_type_t t;
> +    p2m_access_t old_a;
> +    unsigned int page_order;
> +    unsigned long gfn_l = gfn_x(gfn);
> +    int rc;
> +
> +    mfn = ap2m->get_entry(ap2m, gfn_l, &t, &old_a, 0, NULL, NULL);
> +
> +    /* Check host p2m if no valid entry in alternate */
> +    if ( !mfn_valid(mfn_x(mfn)) )
> +    {
> +        mfn = __get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
> +                                    P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
> +
> +        rc = -ESRCH;
> +        if ( !mfn_valid(mfn_x(mfn)) || t != p2m_ram_rw )
> +            return rc;
> +
> +        /* If this is a superpage, copy that first */
> +        if ( page_order != PAGE_ORDER_4K )
> +        {
> +            unsigned long mask = ~((1UL << page_order) - 1);
> +            unsigned long gfn2_l = gfn_l & mask;
> +            mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
> +
> +            rc = ap2m->set_entry(ap2m, gfn2_l, mfn2, page_order, t, old_a, 1);
> +            if ( rc )
> +                return rc;
> +        }
> +    }
> +
> +    return ap2m->set_entry(ap2m, gfn_l, mfn, PAGE_ORDER_4K, t, a,
> +                         (current->domain != d));
> +}
> +
> +static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
> +                          struct p2m_domain *ap2m, p2m_access_t a,
> +                          gfn_t gfn)
> +{
> +    int rc = 0;
> +
> +    if ( ap2m )
> +    {
> +        rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn);
> +        /* If the corresponding mfn is invalid we will want to just skip it */
> +        if ( rc == -ESRCH )
> +            rc = 0;
> +    }
> +    else
> +    {
> +        mfn_t mfn;
> +        p2m_access_t _a;
> +        p2m_type_t t;
> +        unsigned long gfn_l = gfn_x(gfn);
> +
> +        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
> +        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
> +    }
> +
> +    return rc;
> +}
> +
> +static bool xenmem_access_to_p2m_access(struct p2m_domain *p2m,
> +                                        xenmem_access_t xaccess,
> +                                        p2m_access_t *paccess)
> +{
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    switch ( xaccess )
> +    {
> +    case 0 ... ARRAY_SIZE(memaccess) - 1:
> +        *paccess = memaccess[xaccess];
> +        break;
> +    case XENMEM_access_default:
> +        *paccess = p2m->default_access;
> +        break;
> +    default:
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
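
xenmem_access_to_p2m_access() pairs a designated-initialiser table with a case range so the mapping stays correct even if the source enum is reordered. The pattern in isolation, with toy enums rather than the Xen ones (the case range is a GCC/Clang extension, as used in the code above):

    #include <stdio.h>

    enum xa { XA_n, XA_r, XA_w, XA_rw, XA_default };   /* toy source space */
    enum pa { PA_n, PA_r, PA_w, PA_rw };               /* toy target space */

    static int to_pa(enum xa x, enum pa dflt, enum pa *out)
    {
        /* Designated initialisers tie each slot to its enum value. */
        static const enum pa map[] = {
            [XA_n] = PA_n, [XA_r] = PA_r, [XA_w] = PA_w, [XA_rw] = PA_rw,
        };

        switch ( x )
        {
        case 0 ... (int)(sizeof(map) / sizeof(map[0])) - 1:
            *out = map[x];
            return 1;
        case XA_default:
            *out = dflt;
            return 1;
        default:
            return 0;
        }
    }

    int main(void)
    {
        enum pa a;
        printf("%d\n", to_pa(XA_rw, PA_n, &a) && a == PA_rw); /* prints 1 */
        return 0;
    }
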
> +/*
> + * Set access type for a region of gfns.
> + * If gfn == INVALID_GFN, sets the default access type.
> + */
> +long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access,
> +                        unsigned int altp2m_idx)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> +    p2m_access_t a;
> +    unsigned long gfn_l;
> +    long rc = 0;
> +
> +    /* altp2m view 0 is treated as the hostp2m */
> +    if ( altp2m_idx )
> +    {
> +        if ( altp2m_idx >= MAX_ALTP2M ||
> +             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> +            return -EINVAL;
> +
> +        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> +    }
> +
> +    if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> +        return -EINVAL;
> +
> +    /* If request to set default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        p2m->default_access = a;
> +        return 0;
> +    }
> +
> +    p2m_lock(p2m);
> +    if ( ap2m )
> +        p2m_lock(ap2m);
> +
> +    for ( gfn_l = gfn_x(gfn) + start; nr > start; ++gfn_l )
> +    {
> +        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> +
> +        if ( rc )
> +            break;
> +
> +        /* Check for continuation if it's not the last iteration. */
> +        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> +        {
> +            rc = start;
> +            break;
> +        }
> +    }
> +
> +    if ( ap2m )
> +        p2m_unlock(ap2m);
> +    p2m_unlock(p2m);
> +
> +    return rc;
> +}
> +
> +long p2m_set_mem_access_multi(struct domain *d,
> +                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> +                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> +                              uint32_t nr, uint32_t start, uint32_t mask,
> +                              unsigned int altp2m_idx)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> +    long rc = 0;
> +
> +    /* altp2m view 0 is treated as the hostp2m */
> +    if ( altp2m_idx )
> +    {
> +        if ( altp2m_idx >= MAX_ALTP2M ||
> +             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> +            return -EINVAL;
> +
> +        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> +    }
> +
> +    p2m_lock(p2m);
> +    if ( ap2m )
> +        p2m_lock(ap2m);
> +
> +    while ( start < nr )
> +    {
> +        p2m_access_t a;
> +        uint8_t access;
> +        uint64_t gfn_l;
> +
> +        if ( copy_from_guest_offset(&gfn_l, pfn_list, start, 1) ||
> +             copy_from_guest_offset(&access, access_list, start, 1) )
> +        {
> +            rc = -EFAULT;
> +            break;
> +        }
> +
> +        if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +
> +        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> +
> +        if ( rc )
> +            break;
> +
> +        /* Check for continuation if it's not the last iteration. */
> +        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> +        {
> +            rc = start;
> +            break;
> +        }
> +    }
> +
> +    if ( ap2m )
> +        p2m_unlock(ap2m);
> +    p2m_unlock(p2m);
> +
> +    return rc;
> +}
> +
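
p2m_set_mem_access_multi() consumes two parallel guest arrays, validating each element as it goes. A userspace analogue of that copy-and-validate loop, with memcpy standing in for copy_from_guest_offset() (names here are illustrative, not the Xen API):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* memcpy stands in for copy_from_guest_offset(); 0 means success. */
    static int copy_elem(void *dst, const void *base, uint32_t idx, size_t sz)
    {
        memcpy(dst, (const char *)base + (size_t)idx * sz, sz);
        return 0;
    }

    static long set_multi(const uint64_t *gfns, const uint8_t *accesses,
                          uint32_t nr)
    {
        for ( uint32_t i = 0; i < nr; ++i )
        {
            uint64_t gfn;
            uint8_t access;

            /* Fetch the two parallel arrays one element at a time... */
            if ( copy_elem(&gfn, gfns, i, sizeof(gfn)) ||
                 copy_elem(&access, accesses, i, sizeof(access)) )
                return -1; /* ...failing the whole call on a bad copy. */

            printf("gfn %llu -> access %u\n", (unsigned long long)gfn, access);
        }
        return 0;
    }

    int main(void)
    {
        const uint64_t g[] = { 0x1000, 0x1001 };
        const uint8_t a[] = { 4, 7 };
        return (int)set_multi(g, a, 2);
    }
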
> +/*
> + * Get access type for a gfn.
> + * If gfn == INVALID_GFN, gets the default access type.
> + */
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_type_t t;
> +    p2m_access_t a;
> +    mfn_t mfn;
> +
> +    static const xenmem_access_t memaccess[] = {
> +#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> +            ACCESS(n),
> +            ACCESS(r),
> +            ACCESS(w),
> +            ACCESS(rw),
> +            ACCESS(x),
> +            ACCESS(rx),
> +            ACCESS(wx),
> +            ACCESS(rwx),
> +            ACCESS(rx2rw),
> +            ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    /* If request to get default access. */
> +    if ( gfn_eq(gfn, INVALID_GFN) )
> +    {
> +        *access = memaccess[p2m->default_access];
> +        return 0;
> +    }
> +
> +    gfn_lock(p2m, gfn, 0);
> +    mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
> +    gfn_unlock(p2m, gfn, 0);
> +
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +        return -ESRCH;
> +
> +    if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
> +        return -ERANGE;
> +
> +    *access = memaccess[a];
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 6a45185..6299d5a 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1589,433 +1589,12 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>      }
>  }
>  
> -bool p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp)
> -{
> -    xenmem_access_t access;
> -    bool violation = 1;
> -    const struct vm_event_mem_access *data = &rsp->u.mem_access;
> -
> -    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> -    {
> -        switch ( access )
> -        {
> -        case XENMEM_access_n:
> -        case XENMEM_access_n2rwx:
> -        default:
> -            violation = data->flags & MEM_ACCESS_RWX;
> -            break;
> -
> -        case XENMEM_access_r:
> -            violation = data->flags & MEM_ACCESS_WX;
> -            break;
> -
> -        case XENMEM_access_w:
> -            violation = data->flags & MEM_ACCESS_RX;
> -            break;
> -
> -        case XENMEM_access_x:
> -            violation = data->flags & MEM_ACCESS_RW;
> -            break;
> -
> -        case XENMEM_access_rx:
> -        case XENMEM_access_rx2rw:
> -            violation = data->flags & MEM_ACCESS_W;
> -            break;
> -
> -        case XENMEM_access_wx:
> -            violation = data->flags & MEM_ACCESS_R;
> -            break;
> -
> -        case XENMEM_access_rw:
> -            violation = data->flags & MEM_ACCESS_X;
> -            break;
> -
> -        case XENMEM_access_rwx:
> -            violation = 0;
> -            break;
> -        }
> -    }
> -
> -    return violation;
> -}
> -
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  {
>      if ( altp2m_active(v->domain) )
>          p2m_switch_vcpu_altp2m_by_id(v, idx);
>  }
>  
> -bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> -                            struct npfec npfec,
> -                            vm_event_request_t **req_ptr)
> -{
> -    struct vcpu *v = current;
> -    unsigned long gfn = gpa >> PAGE_SHIFT;
> -    struct domain *d = v->domain;    
> -    struct p2m_domain *p2m = NULL;
> -    mfn_t mfn;
> -    p2m_type_t p2mt;
> -    p2m_access_t p2ma;
> -    vm_event_request_t *req;
> -    int rc;
> -
> -    if ( altp2m_active(d) )
> -        p2m = p2m_get_altp2m(v);
> -    if ( !p2m )
> -        p2m = p2m_get_hostp2m(d);
> -
> -    /* First, handle rx2rw conversion automatically.
> -     * These calls to p2m->set_entry() must succeed: we have the gfn
> -     * locked and just did a successful get_entry(). */
> -    gfn_lock(p2m, gfn, 0);
> -    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> -
> -    if ( npfec.write_access && p2ma == p2m_access_rx2rw ) 
> -    {
> -        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw, -1);
> -        ASSERT(rc == 0);
> -        gfn_unlock(p2m, gfn, 0);
> -        return 1;
> -    }
> -    else if ( p2ma == p2m_access_n2rwx )
> -    {
> -        ASSERT(npfec.write_access || npfec.read_access || npfec.insn_fetch);
> -        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> -                            p2mt, p2m_access_rwx, -1);
> -        ASSERT(rc == 0);
> -    }
> -    gfn_unlock(p2m, gfn, 0);
> -
> -    /* Otherwise, check if there is a memory event listener, and send the message along */
> -    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr ) 
> -    {
> -        /* No listener */
> -        if ( p2m->access_required ) 
> -        {
> -            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> -                                  "no vm_event listener VCPU %d, dom %d\n",
> -                                  v->vcpu_id, d->domain_id);
> -            domain_crash(v->domain);
> -            return 0;
> -        }
> -        else
> -        {
> -            gfn_lock(p2m, gfn, 0);
> -            mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> -            if ( p2ma != p2m_access_n2rwx )
> -            {
> -                /* A listener is not required, so clear the access
> -                 * restrictions.  This set must succeed: we have the
> -                 * gfn locked and just did a successful get_entry(). */
> -                rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
> -                                    p2mt, p2m_access_rwx, -1);
> -                ASSERT(rc == 0);
> -            }
> -            gfn_unlock(p2m, gfn, 0);
> -            return 1;
> -        }
> -    }
> -
> -    *req_ptr = NULL;
> -    req = xzalloc(vm_event_request_t);
> -    if ( req )
> -    {
> -        *req_ptr = req;
> -
> -        req->reason = VM_EVENT_REASON_MEM_ACCESS;
> -        req->u.mem_access.gfn = gfn;
> -        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
> -        if ( npfec.gla_valid )
> -        {
> -            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> -            req->u.mem_access.gla = gla;
> -
> -            if ( npfec.kind == npfec_kind_with_gla )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> -            else if ( npfec.kind == npfec_kind_in_gpt )
> -                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> -        }
> -        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> -        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> -        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
> -    }
> -
> -    /* Return whether vCPU pause is required (aka. sync event) */
> -    return (p2ma != p2m_access_n2rwx);
> -}
> -
> -static inline
> -int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
> -                              struct p2m_domain *ap2m, p2m_access_t a,
> -                              gfn_t gfn)
> -{
> -    mfn_t mfn;
> -    p2m_type_t t;
> -    p2m_access_t old_a;
> -    unsigned int page_order;
> -    unsigned long gfn_l = gfn_x(gfn);
> -    int rc;
> -
> -    mfn = ap2m->get_entry(ap2m, gfn_l, &t, &old_a, 0, NULL, NULL);
> -
> -    /* Check host p2m if no valid entry in alternate */
> -    if ( !mfn_valid(mfn) )
> -    {
> -
> -        mfn = __get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
> -                                    P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
> -
> -        rc = -ESRCH;
> -        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
> -            return rc;
> -
> -        /* If this is a superpage, copy that first */
> -        if ( page_order != PAGE_ORDER_4K )
> -        {
> -            unsigned long mask = ~((1UL << page_order) - 1);
> -            unsigned long gfn2_l = gfn_l & mask;
> -            mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
> -
> -            rc = ap2m->set_entry(ap2m, gfn2_l, mfn2, page_order, t, old_a, 1);
> -            if ( rc )
> -                return rc;
> -        }
> -    }
> -
> -    return ap2m->set_entry(ap2m, gfn_l, mfn, PAGE_ORDER_4K, t, a,
> -                         (current->domain != d));
> -}
> -
> -static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
> -                          struct p2m_domain *ap2m, p2m_access_t a,
> -                          gfn_t gfn)
> -{
> -    int rc = 0;
> -
> -    if ( ap2m )
> -    {
> -        rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn);
> -        /* If the corresponding mfn is invalid we will want to just skip it */
> -        if ( rc == -ESRCH )
> -            rc = 0;
> -    }
> -    else
> -    {
> -        mfn_t mfn;
> -        p2m_access_t _a;
> -        p2m_type_t t;
> -        unsigned long gfn_l = gfn_x(gfn);
> -
> -        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
> -        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
> -    }
> -
> -    return rc;
> -}
> -
> -static bool xenmem_access_to_p2m_access(struct p2m_domain *p2m,
> -                                        xenmem_access_t xaccess,
> -                                        p2m_access_t *paccess)
> -{
> -    static const p2m_access_t memaccess[] = {
> -#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> -        ACCESS(n),
> -        ACCESS(r),
> -        ACCESS(w),
> -        ACCESS(rw),
> -        ACCESS(x),
> -        ACCESS(rx),
> -        ACCESS(wx),
> -        ACCESS(rwx),
> -        ACCESS(rx2rw),
> -        ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    switch ( xaccess )
> -    {
> -    case 0 ... ARRAY_SIZE(memaccess) - 1:
> -        *paccess = memaccess[xaccess];
> -        break;
> -    case XENMEM_access_default:
> -        *paccess = p2m->default_access;
> -        break;
> -    default:
> -        return false;
> -    }
> -
> -    return true;
> -}
> -
> -/*
> - * Set access type for a region of gfns.
> - * If gfn == INVALID_GFN, sets the default access type.
> - */
> -long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> -                        uint32_t start, uint32_t mask, xenmem_access_t access,
> -                        unsigned int altp2m_idx)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> -    p2m_access_t a;
> -    unsigned long gfn_l;
> -    long rc = 0;
> -
> -    /* altp2m view 0 is treated as the hostp2m */
> -    if ( altp2m_idx )
> -    {
> -        if ( altp2m_idx >= MAX_ALTP2M ||
> -             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> -            return -EINVAL;
> -
> -        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> -    }
> -
> -    if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> -        return -EINVAL;
> -
> -    /* If request to set default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        p2m->default_access = a;
> -        return 0;
> -    }
> -
> -    p2m_lock(p2m);
> -    if ( ap2m )
> -        p2m_lock(ap2m);
> -
> -    for ( gfn_l = gfn_x(gfn) + start; nr > start; ++gfn_l )
> -    {
> -        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> -
> -        if ( rc )
> -            break;
> -
> -        /* Check for continuation if it's not the last iteration. */
> -        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> -        {
> -            rc = start;
> -            break;
> -        }
> -    }
> -
> -    if ( ap2m )
> -        p2m_unlock(ap2m);
> -    p2m_unlock(p2m);
> -
> -    return rc;
> -}
> -
> -long p2m_set_mem_access_multi(struct domain *d,
> -                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> -                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> -                              uint32_t nr, uint32_t start, uint32_t mask,
> -                              unsigned int altp2m_idx)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d), *ap2m = NULL;
> -    long rc = 0;
> -
> -    /* altp2m view 0 is treated as the hostp2m */
> -    if ( altp2m_idx )
> -    {
> -        if ( altp2m_idx >= MAX_ALTP2M ||
> -             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> -            return -EINVAL;
> -
> -        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> -    }
> -
> -    p2m_lock(p2m);
> -    if ( ap2m )
> -        p2m_lock(ap2m);
> -
> -    while ( start < nr )
> -    {
> -        p2m_access_t a;
> -        uint8_t access;
> -        uint64_t gfn_l;
> -
> -        if ( copy_from_guest_offset(&gfn_l, pfn_list, start, 1) ||
> -             copy_from_guest_offset(&access, access_list, start, 1) )
> -        {
> -            rc = -EFAULT;
> -            break;
> -        }
> -
> -        if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
> -        {
> -            rc = -EINVAL;
> -            break;
> -        }
> -
> -        rc = set_mem_access(d, p2m, ap2m, a, _gfn(gfn_l));
> -
> -        if ( rc )
> -            break;
> -
> -        /* Check for continuation if it's not the last iteration. */
> -        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
> -        {
> -            rc = start;
> -            break;
> -        }
> -    }
> -
> -    if ( ap2m )
> -        p2m_unlock(ap2m);
> -    p2m_unlock(p2m);
> -
> -    return rc;
> -}
> -
> -/*
> - * Get access type for a gfn.
> - * If gfn == INVALID_GFN, gets the default access type.
> - */
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> -{
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    p2m_type_t t;
> -    p2m_access_t a;
> -    mfn_t mfn;
> -
> -    static const xenmem_access_t memaccess[] = {
> -#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> -            ACCESS(n),
> -            ACCESS(r),
> -            ACCESS(w),
> -            ACCESS(rw),
> -            ACCESS(x),
> -            ACCESS(rx),
> -            ACCESS(wx),
> -            ACCESS(rwx),
> -            ACCESS(rx2rw),
> -            ACCESS(n2rwx),
> -#undef ACCESS
> -    };
> -
> -    /* If request to get default access. */
> -    if ( gfn_eq(gfn, INVALID_GFN) )
> -    {
> -        *access = memaccess[p2m->default_access];
> -        return 0;
> -    }
> -
> -    gfn_lock(p2m, gfn, 0);
> -    mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
> -    gfn_unlock(p2m, gfn, 0);
> -
> -    if ( mfn_eq(mfn, INVALID_MFN) )
> -        return -ESRCH;
> -    
> -    if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
> -        return -ERANGE;
> -
> -    *access =  memaccess[a];
> -    return 0;
> -}
> -
>  static struct p2m_domain *
>  p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
>  {
> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
> index 1e88d67..8d8bc4a 100644
> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -18,7 +18,8 @@
>   * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <asm/p2m.h>
> +#include <xen/sched.h>
> +#include <xen/mem_access.h>
>  #include <asm/vm_event.h>
>  
>  /* Implicitly serialized by the domctl lock. */
> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
> index 565a320..19f63bb 100644
> --- a/xen/common/mem_access.c
> +++ b/xen/common/mem_access.c
> @@ -24,8 +24,8 @@
>  #include <xen/guest_access.h>
>  #include <xen/hypercall.h>
>  #include <xen/vm_event.h>
> +#include <xen/mem_access.h>
>  #include <public/memory.h>
> -#include <asm/p2m.h>
>  #include <xsm/xsm.h>
>  
>  int mem_access_memop(unsigned long cmd,
> diff --git a/xen/include/asm-arm/mem_access.h b/xen/include/asm-arm/mem_access.h
> new file mode 100644
> index 0000000..3a155f8
> --- /dev/null
> +++ b/xen/include/asm-arm/mem_access.h
> @@ -0,0 +1,53 @@
> +/*
> + * mem_access.h: architecture specific mem_access handling routines
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef _ASM_ARM_MEM_ACCESS_H
> +#define _ASM_ARM_MEM_ACCESS_H
> +
> +static inline
> +bool p2m_mem_access_emulate_check(struct vcpu *v,
> +                                  const vm_event_response_t *rsp)
> +{
> +    /* Not supported on ARM. */
> +    return 0;
> +}
> +
> +/* vm_event and mem_access are supported on any ARM guest */
> +static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> +{
> +    return 1;
> +}
> +
> +/*
> + * Send a mem event based on the access. Boolean return value indicates if a
> + * trap needs to be injected into the guest.
> + */
> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
> +
> +struct page_info*
> +p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
> +                                  const struct vcpu *v);
> +
> +#endif /* _ASM_ARM_MEM_ACCESS_H */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index fdb6b47..2b22e9a 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -4,6 +4,7 @@
>  #include <xen/mm.h>
>  #include <xen/radix-tree.h>
>  #include <xen/rwlock.h>
> +#include <xen/mem_access.h>
>  #include <public/vm_event.h> /* for vm_event_response_t */
>  #include <public/memory.h>
>  #include <xen/p2m-common.h>
> @@ -139,14 +140,6 @@ typedef enum {
>                               p2m_to_mask(p2m_map_foreign)))
>  
>  static inline
> -bool p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp)
> -{
> -    /* Not supported on ARM. */
> -    return 0;
> -}
> -
> -static inline
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  {
>      /* Not supported on ARM. */
> @@ -343,22 +336,26 @@ static inline int get_page_and_type(struct page_info *page,
>  /* get host p2m table */
>  #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>  
> -/* vm_event and mem_access are supported on any ARM guest */
> -static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> -{
> -    return 1;
> -}
> -
>  static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
>  {
>      return 1;
>  }
>  
>  /*
> - * Send mem event based on the access. Boolean return value indicates if trap
> - * needs to be injected into guest.
> + * Return the start of the next mapping based on the order of the
> + * current one.
>   */
> -bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
> +static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
> +{
> +    /*
> +     * The order corresponds to the order of the mapping (or invalid
> +     * range) in the page table. So we need to align the GFN before
> +     * incrementing.
> +     */
> +    gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
> +
> +    return gfn_add(gfn, 1UL << order);
> +}
>  
>  #endif /* _XEN_P2M_H */
>  
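
Moving gfn_next_boundary() into the header makes it reusable from the new mem_access.c; its arithmetic, stripped of the gfn_t wrappers, is just align-down-then-add, e.g.:

    #include <stdio.h>

    /* Align down to the 2^order boundary, then step one mapping forward. */
    static unsigned long next_boundary(unsigned long gfn, unsigned int order)
    {
        gfn &= ~((1UL << order) - 1);
        return gfn + (1UL << order);
    }

    int main(void)
    {
        /* A gfn inside an order-9 (2MB) mapping jumps to the next 2MB
         * frame: 0x345 -> 0x400. */
        printf("%#lx\n", next_boundary(0x345UL, 9));
        return 0;
    }
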
> diff --git a/xen/include/asm-x86/mem_access.h b/xen/include/asm-x86/mem_access.h
> new file mode 100644
> index 0000000..9f7b409
> --- /dev/null
> +++ b/xen/include/asm-x86/mem_access.h
> @@ -0,0 +1,61 @@
> +/******************************************************************************
> + * include/asm-x86/mem_access.h
> + *
> + * Memory access support.
> + *
> + * Copyright (c) 2011 GridCentric Inc. (Andres Lagar-Cavilla)
> + * Copyright (c) 2007 Advanced Micro Devices (Wei Huang)
> + * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
> + * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
> + * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ASM_X86_MEM_ACCESS_H__
> +#define __ASM_X86_MEM_ACCESS_H__
> +
> +/*
> + * Setup vm_event request based on the access (gla is -1ull if not available).
> + * Handles the rx2rw conversion. Boolean return value indicates if the event
> + * type is synchronous (i.e. requires vCPU pause). If the req_ptr has been
> + * populated, the caller should use monitor_traps to send the event on the
> + * MONITOR ring. Once the get_gfn* locks are released, the caller must also
> + * xfree the request.
> +bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> +                            struct npfec npfec,
> +                            vm_event_request_t **req_ptr);
> +
> +/* Check for emulation and mark vcpu for skipping one instruction
> + * upon rescheduling if required. */
> +bool p2m_mem_access_emulate_check(struct vcpu *v,
> +                                  const vm_event_response_t *rsp);
> +
> +/* Sanity check for mem_access hardware support */
> +static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> +{
> +    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
> +}
> +
> +#endif /*__ASM_X86_MEM_ACCESS_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 7035860..8964e90 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -29,6 +29,7 @@
>  #include <xen/config.h>
>  #include <xen/paging.h>
>  #include <xen/p2m-common.h>
> +#include <xen/mem_access.h>
>  #include <asm/mem_sharing.h>
>  #include <asm/page.h>    /* for pagetable_t */
>  
> @@ -663,29 +664,6 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
>  /* Resume normal operation (in case a domain was paused) */
>  void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp);
>  
> -/*
> - * Setup vm_event request based on the access (gla is -1ull if not available).
> - * Handles the rw2rx conversion. Boolean return value indicates if event type
> - * is syncronous (aka. requires vCPU pause). If the req_ptr has been populated,
> - * then the caller should use monitor_traps to send the event on the MONITOR
> - * ring. Once having released get_gfn* locks caller must also xfree the
> - * request.
> - */
> -bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> -                            struct npfec npfec,
> -                            vm_event_request_t **req_ptr);
> -
> -/* Check for emulation and mark vcpu for skipping one instruction
> - * upon rescheduling if required. */
> -bool p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp);
> -
> -/* Sanity check for mem_access hardware support */
> -static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> -{
> -    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
> -}
> -
>  /* 
>   * Internal functions, only called by other p2m code
>   */
> diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
> index da36e07..5ab34c1 100644
> --- a/xen/include/xen/mem_access.h
> +++ b/xen/include/xen/mem_access.h
> @@ -19,29 +19,78 @@
>   * along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#ifndef _XEN_ASM_MEM_ACCESS_H
> -#define _XEN_ASM_MEM_ACCESS_H
> +#ifndef _XEN_MEM_ACCESS_H
> +#define _XEN_MEM_ACCESS_H
>  
> +#include <xen/types.h>
> +#include <xen/mm.h>
>  #include <public/memory.h>
> -#include <asm/p2m.h>
> +#include <public/vm_event.h>
> +#include <asm/mem_access.h>
>  
> -#ifdef CONFIG_HAS_MEM_ACCESS
> +/*
> + * Additional access types, which are used to further restrict
> + * the permissions given by the p2m_type_t memory type.  Violations
> + * caused by p2m_access_t restrictions are sent to the vm_event
> + * interface.
> + *
> + * The access permissions are soft state: when any ambiguous change of page
> + * type or use occurs, or when pages are flushed, swapped, or at any other
> + * convenient time, the access permissions can get reset to the p2m_domain
> + * default.
> + */
> +typedef enum {
> +    /* Code uses bottom three bits with bitmask semantics */
> +    p2m_access_n     = 0, /* No access allowed. */
> +    p2m_access_r     = 1 << 0,
> +    p2m_access_w     = 1 << 1,
> +    p2m_access_x     = 1 << 2,
> +    p2m_access_rw    = p2m_access_r | p2m_access_w,
> +    p2m_access_rx    = p2m_access_r | p2m_access_x,
> +    p2m_access_wx    = p2m_access_w | p2m_access_x,
> +    p2m_access_rwx   = p2m_access_r | p2m_access_w | p2m_access_x,
> +
> +    p2m_access_rx2rw = 8, /* Special: page goes from RX to RW on write */
> +    p2m_access_n2rwx = 9, /* Special: page goes from N to RWX on access,
> +                           * generates an event but does not pause the
> +                           * vcpu */
> +
> +    /* NOTE: Assumed to be only 4 bits right now on x86. */
> +} p2m_access_t;
> +
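
Since only the bottom three bits carry bitmask semantics, the r/w/x rights can be combined and tested with plain bit operations, while rx2rw and n2rwx sit outside that space and must be compared as whole values. A toy illustration mirroring the enum above:

    #include <stdio.h>

    /* Toy mirror of the enum above: bottom three bits are a bitmask. */
    enum access { A_n = 0, A_r = 1 << 0, A_w = 1 << 1, A_x = 1 << 2,
                  A_rx2rw = 8, A_n2rwx = 9 };

    int main(void)
    {
        int a = A_r | A_x;

        /* Rights combine and test with bit operations... */
        printf("readable: %d writable: %d\n", !!(a & A_r), !!(a & A_w));

        /* ...but the conversion types live outside the bitmask space
         * and must be compared as whole values, never masked. */
        printf("is rx2rw: %d\n", a == A_rx2rw);
        return 0;
    }
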
> +/*
> + * Set access type for a region of gfns.
> + * If gfn == INVALID_GFN, sets the default access type.
> + */
> +long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access,
> +                        unsigned int altp2m_idx);
>  
> +long p2m_set_mem_access_multi(struct domain *d,
> +                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> +                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> +                              uint32_t nr, uint32_t start, uint32_t mask,
> +                              unsigned int altp2m_idx);
> +
> +/*
> + * Get access type for a gfn.
> + * If gfn == INVALID_GFN, gets the default access type.
> + */
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
> +
> +#ifdef CONFIG_HAS_MEM_ACCESS
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
> -
>  #else
> -
>  static inline
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
>  {
>      return -ENOSYS;
>  }
> +#endif /* CONFIG_HAS_MEM_ACCESS */
>  
> -#endif /* HAS_MEM_ACCESS */
> -
> -#endif /* _XEN_ASM_MEM_ACCESS_H */
> +#endif /* _XEN_MEM_ACCESS_H */
>  
>  /*
>   * Local variables:
> diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
> index 3be1e91..8cd5a6b 100644
> --- a/xen/include/xen/p2m-common.h
> +++ b/xen/include/xen/p2m-common.h
> @@ -1,38 +1,6 @@
>  #ifndef _XEN_P2M_COMMON_H
>  #define _XEN_P2M_COMMON_H
>  
> -#include <public/vm_event.h>
> -
> -/*
> - * Additional access types, which are used to further restrict
> - * the permissions given my the p2m_type_t memory type.  Violations
> - * caused by p2m_access_t restrictions are sent to the vm_event
> - * interface.
> - *
> - * The access permissions are soft state: when any ambiguous change of page
> - * type or use occurs, or when pages are flushed, swapped, or at any other
> - * convenient type, the access permissions can get reset to the p2m_domain
> - * default.
> - */
> -typedef enum {
> -    /* Code uses bottom three bits with bitmask semantics */
> -    p2m_access_n     = 0, /* No access allowed. */
> -    p2m_access_r     = 1 << 0,
> -    p2m_access_w     = 1 << 1,
> -    p2m_access_x     = 1 << 2,
> -    p2m_access_rw    = p2m_access_r | p2m_access_w,
> -    p2m_access_rx    = p2m_access_r | p2m_access_x,
> -    p2m_access_wx    = p2m_access_w | p2m_access_x,
> -    p2m_access_rwx   = p2m_access_r | p2m_access_w | p2m_access_x,
> -
> -    p2m_access_rx2rw = 8, /* Special: page goes from RX to RW on write */
> -    p2m_access_n2rwx = 9, /* Special: page goes from N to RWX on access, *
> -                           * generates an event but does not pause the
> -                           * vcpu */
> -
> -    /* NOTE: Assumed to be only 4 bits right now on x86. */
> -} p2m_access_t;
> -
>  /* Map MMIO regions in the p2m: start_gfn and nr describe the range in
>   * the guest physical address space to map, starting from the machine
>   * frame number mfn. */
> @@ -45,24 +13,4 @@ int unmap_mmio_regions(struct domain *d,
>                         unsigned long nr,
>                         mfn_t mfn);
>  
> -/*
> - * Set access type for a region of gfns.
> - * If gfn == INVALID_GFN, sets the default access type.
> - */
> -long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> -                        uint32_t start, uint32_t mask, xenmem_access_t access,
> -                        unsigned int altp2m_idx);
> -
> -long p2m_set_mem_access_multi(struct domain *d,
> -                              const XEN_GUEST_HANDLE(const_uint64) pfn_list,
> -                              const XEN_GUEST_HANDLE(const_uint8) access_list,
> -                              uint32_t nr, uint32_t start, uint32_t mask,
> -                              unsigned int altp2m_idx);
> -
> -/*
> - * Get access type for a gfn.
> - * If gfn == INVALID_GFN, gets the default access type.
> - */
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
> -
>  #endif /* _XEN_P2M_COMMON_H */
> -- 
> 2.10.2
> 
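A quick illustration of the bitmask semantics the relocated p2m_access_t
carries: the r/w/x values occupy the bottom three bits and combine with
plain bit operations, while the two conversion policies (rx2rw = 8,
n2rwx = 9) are discrete values that must be matched by equality, not
masked. The sketch below is standalone and not part of this series; the
enum body is copied from the hunk above, and the allows_write() helper is
purely illustrative, not a Xen function.

    #include <stdbool.h>
    #include <stdio.h>

    /* Copied from the definition relocated to xen/mem_access.h above. */
    typedef enum {
        p2m_access_n     = 0,
        p2m_access_r     = 1 << 0,
        p2m_access_w     = 1 << 1,
        p2m_access_x     = 1 << 2,
        p2m_access_rw    = p2m_access_r | p2m_access_w,
        p2m_access_rx    = p2m_access_r | p2m_access_x,
        p2m_access_wx    = p2m_access_w | p2m_access_x,
        p2m_access_rwx   = p2m_access_r | p2m_access_w | p2m_access_x,
        p2m_access_rx2rw = 8,
        p2m_access_n2rwx = 9,
    } p2m_access_t;

    /* Illustrative helper: does access type 'a' permit a guest write?
     * The special policies are filtered out by equality first, since
     * they are policy tags rather than bitmasks. */
    static bool allows_write(p2m_access_t a)
    {
        if ( a == p2m_access_rx2rw || a == p2m_access_n2rwx )
            return false; /* the write traps and is converted/reported */
        return a & p2m_access_w;
    }

    int main(void)
    {
        printf("rwx:   %d\n", allows_write(p2m_access_rwx));   /* 1 */
        printf("rx:    %d\n", allows_write(p2m_access_rx));    /* 0 */
        printf("rx2rw: %d\n", allows_write(p2m_access_rx2rw)); /* 0 */
        return 0;
    }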

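For callers, the relocated prototypes are used exactly as before; only
their home changed. Below is a hedged sketch of hypervisor-side usage,
not code from this series: demo_restrict_gfn() is a hypothetical helper,
and it assumes the usual hypervisor environment (struct domain, gfn_t,
ASSERT(), and the XENMEM_access_* values from public/memory.h), so it
only builds inside the Xen tree.

    #include <xen/lib.h>          /* ASSERT() */
    #include <xen/sched.h>        /* struct domain */
    #include <xen/mem_access.h>   /* prototypes relocated by this patch */

    /* Restrict a single gfn to read-only in the host p2m (altp2m_idx 0),
     * then read the setting back.  nr = 1 covers one page; start/mask
     * are continuation bookkeeping for large ranges and stay 0 here. */
    static int demo_restrict_gfn(struct domain *d, gfn_t gfn)
    {
        xenmem_access_t a;
        long rc = p2m_set_mem_access(d, gfn, 1, 0, 0, XENMEM_access_r, 0);

        if ( rc )
            return rc;

        rc = p2m_get_mem_access(d, gfn, &a);
        if ( rc )
            return rc;

        ASSERT(a == XENMEM_access_r);
        return 0;
    }

Note that on builds without CONFIG_HAS_MEM_ACCESS only the
mem_access_memop() stub (returning -ENOSYS) differs; the p2m_*
declarations above it stay visible either way.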

Thread overview: 24+ messages
2016-12-09 19:59 [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current Tamas K Lengyel
2016-12-09 19:59 ` [PATCH v2 2/2] p2m: split mem_access into separate files Tamas K Lengyel
2017-01-03 15:31   ` Tamas K Lengyel
2017-01-11 16:33     ` Konrad Rzeszutek Wilk
2017-01-30 21:11   ` Stefano Stabellini
2016-12-12  7:42 ` [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current Jan Beulich
2016-12-12  7:47   ` Tamas K Lengyel
2016-12-12 11:46     ` Julien Grall
2016-12-12 18:42       ` Tamas K Lengyel
2016-12-12 19:11         ` Julien Grall
2016-12-12 19:41           ` Tamas K Lengyel
2016-12-12 21:28             ` Julien Grall
2016-12-12 23:47               ` Tamas K Lengyel
2016-12-13 12:50                 ` Julien Grall
2016-12-13 18:03                   ` Tamas K Lengyel
2016-12-13 18:39                   ` Tamas K Lengyel
2016-12-13 18:28                 ` Andrew Cooper
2016-12-13 18:41                   ` Julien Grall
2016-12-13 19:39                     ` Andrew Cooper
2016-12-14 12:05                       ` Julien Grall
2016-12-15  4:54                         ` George Dunlap
2016-12-15 16:16                           ` Julien Grall
2017-01-03 15:29 ` Tamas K Lengyel
2017-01-30 21:11 ` Stefano Stabellini
