* [PATCH v3 00/15] Mem_event and mem_access for ARM
@ 2014-09-01 14:21 Tamas K Lengyel
From: Tamas K Lengyel @ 2014-09-01 14:21 UTC
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

The ARM virtualization extensions provide 2-stage paging, a mechanism similar
to Intel's EPT, which can be used to trace the memory accesses performed by
guest systems. This series moves the mem_access and mem_event codebase
into Xen common, performs some code cleanup and an architecture-specific
division of components, then sets up the necessary infrastructure in the ARM
code to deliver events on R/W/X traps. Finally, it turns on the compilation
of mem_access and mem_event on ARM and makes the necessary changes on the
tools side.
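
For reviewers new to the interface: mem_event uses a shared ring
(public/io/ring.h) between Xen and a helper application. As a rough,
illustrative sketch (modeled on tools/tests/xen-access; event-channel
setup and error handling omitted), the helper side of the ring looks
like this:

    #include <string.h>
    #include <xenctrl.h>
    #include <xen/mem_event.h>

    /* Bind the helper's back ring to the already-mapped shared page. */
    static void init_ring(mem_event_back_ring_t *back_ring, void *ring_page)
    {
        SHARED_RING_INIT((mem_event_sring_t *)ring_page);
        BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, 4096);
    }

    /* Pull one request that Xen placed on the ring. */
    static void get_request(mem_event_back_ring_t *back_ring,
                            mem_event_request_t *req)
    {
        RING_IDX req_cons = back_ring->req_cons;

        memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req));
        back_ring->req_cons = ++req_cons;
        back_ring->sring->req_event = req_cons + 1;
    }

    /* Queue a response; Xen is then kicked via the event channel. */
    static void put_response(mem_event_back_ring_t *back_ring,
                             mem_event_response_t *rsp)
    {
        RING_IDX rsp_prod = back_ring->rsp_prod_pvt;

        memcpy(RING_GET_RESPONSE(back_ring, rsp_prod), rsp, sizeof(*rsp));
        back_ring->rsp_prod_pvt = ++rsp_prod;
        RING_PUSH_RESPONSES(back_ring);
    }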

This version of the series has been fully tested and is functional on an
Arndale board.

This patch series is also available at:
https://github.com/tklengyel/xen/tree/arm_memaccess3

Tamas K Lengyel (15):
  xen: Relocate mem_access and mem_event into common.
  xen: Relocate struct npfec definition into common
  xen: Relocate mem_event_op domctl and access_op memop into common.
  xen/mem_event: Clean out superfluous white-spaces
  xen/mem_event: Relax error condition on debug builds
  xen/mem_event: Abstract architecture specific sanity checks
  xen/mem_access: Abstract architecture specific sanity check
  xen/arm: p2m type definitions and changes
  xen/arm: Add set access required domctl
  xen/arm: Data abort exception (R/W) mem_events.
  xen/arm: Instruction prefetch abort (X) mem_event handling
  xen/arm: Shatter large pages when using mem_access
  xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  tools/libxc: Allocate magic page for mem access on ARM
  tools/tests: Enable xen-access on ARM

 MAINTAINERS                         |   6 +
 tools/libxc/xc_dom_arm.c            |   6 +-
 tools/tests/xen-access/Makefile     |   4 +-
 tools/tests/xen-access/xen-access.c |  53 ++-
 xen/Rules.mk                        |   1 +
 xen/arch/arm/Rules.mk               |   1 +
 xen/arch/arm/domctl.c               |  13 +
 xen/arch/arm/p2m.c                  | 494 +++++++++++++++++++++---
 xen/arch/arm/traps.c                |  74 +++-
 xen/arch/x86/Rules.mk               |   1 +
 xen/arch/x86/domctl.c               |  10 +-
 xen/arch/x86/hvm/hvm.c              |  61 +--
 xen/arch/x86/mm/Makefile            |   2 -
 xen/arch/x86/mm/hap/nested_ept.c    |   2 +-
 xen/arch/x86/mm/hap/nested_hap.c    |   2 +-
 xen/arch/x86/mm/mem_access.c        | 133 -------
 xen/arch/x86/mm/mem_event.c         | 705 -----------------------------------
 xen/arch/x86/mm/mem_paging.c        |   2 +-
 xen/arch/x86/mm/mem_sharing.c       |   2 +-
 xen/arch/x86/mm/p2m-pod.c           |   2 +-
 xen/arch/x86/mm/p2m-pt.c            |   2 +-
 xen/arch/x86/mm/p2m.c               |   2 +-
 xen/arch/x86/x86_64/compat/mm.c     |   8 +-
 xen/arch/x86/x86_64/mm.c            |   8 +-
 xen/common/Makefile                 |   2 +
 xen/common/domain.c                 |   1 +
 xen/common/domctl.c                 |   9 +
 xen/common/mem_access.c             | 135 +++++++
 xen/common/mem_event.c              | 725 ++++++++++++++++++++++++++++++++++++
 xen/common/memory.c                 |  68 ++++
 xen/include/asm-arm/mm.h            |   1 -
 xen/include/asm-arm/p2m.h           | 109 +++++-
 xen/include/asm-arm/processor.h     |  70 +++-
 xen/include/asm-x86/config.h        |   3 +
 xen/include/asm-x86/hvm/hvm.h       |   7 +-
 xen/include/asm-x86/mem_access.h    |  39 --
 xen/include/asm-x86/mem_event.h     |  82 ----
 xen/include/asm-x86/mm.h            |  23 --
 xen/include/asm-x86/p2m.h           |  22 ++
 xen/include/xen/mem_access.h        |  58 +++
 xen/include/xen/mem_event.h         | 141 +++++++
 xen/include/xen/mm.h                |  27 ++
 xen/include/xsm/dummy.h             |  26 +-
 xen/include/xsm/xsm.h               |  29 +-
 xen/xsm/dummy.c                     |   7 +-
 xen/xsm/flask/hooks.c               |  33 +-
 46 files changed, 2007 insertions(+), 1204 deletions(-)
 delete mode 100644 xen/arch/x86/mm/mem_access.c
 delete mode 100644 xen/arch/x86/mm/mem_event.c
 create mode 100644 xen/common/mem_access.c
 create mode 100644 xen/common/mem_event.c
 delete mode 100644 xen/include/asm-x86/mem_access.h
 delete mode 100644 xen/include/asm-x86/mem_event.h
 create mode 100644 xen/include/xen/mem_access.h
 create mode 100644 xen/include/xen/mem_event.h

-- 
2.1.0.rc1


* [PATCH v3 01/15] xen: Relocate mem_access and mem_event into common.
From: Tamas K Lengyel @ 2014-09-01 14:21 UTC
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

In preparation for adding ARM LPAE mem_event support, relocate mem_access,
mem_event and auxiliary functions into common Xen code.
This patch makes no functional changes on the x86 side; for ARM, the
mem_event and mem_access functions are defined as placeholder stubs and
are enabled later in the series.
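
Purely as an illustration (not the literal patch contents): until the real
ARM implementation lands later in the series, the ARM stubs only need to
satisfy the prototypes the common code calls, along these lines:

    /* Hypothetical placeholder stubs for asm-arm; the common code
     * compiles against these until the actual implementation arrives. */
    static inline long p2m_set_mem_access(struct domain *d, unsigned long pfn,
                                          uint32_t nr, uint32_t start,
                                          uint32_t mask, xenmem_access_t access)
    {
        return -ENOSYS;
    }

    static inline int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
                                         xenmem_access_t *access)
    {
        return -ENOSYS;
    }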

Edits that are only header path adjustments:
   xen/arch/x86/domctl.c
   xen/arch/x86/mm/hap/nested_ept.c
   xen/arch/x86/mm/hap/nested_hap.c
   xen/arch/x86/mm/mem_paging.c
   xen/arch/x86/mm/mem_sharing.c
   xen/arch/x86/mm/p2m-pod.c
   xen/arch/x86/mm/p2m-pt.c
   xen/arch/x86/mm/p2m.c
   xen/arch/x86/x86_64/compat/mm.c
   xen/arch/x86/x86_64/mm.c

Makefile adjustments for new/removed code:
   xen/common/Makefile
   xen/arch/x86/mm/Makefile

Relocated prepare_ring_for_helper and destroy_ring_for_helper functions
(caller contract sketched below):
   xen/include/xen/mm.h
   xen/common/memory.c
   xen/include/asm-x86/hvm/hvm.h
   xen/arch/x86/hvm/hvm.c
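
The two form a prepare/teardown pair; the caller contract, as seen at the
hvm_map_ioreq_page() and mem_event_enable() call sites in this patch, is
roughly:

    struct page_info *page;
    void *va;

    if ( prepare_ring_for_helper(d, gmfn, &page, &va) == 0 )
    {
        /* ... use the globally-mapped ring page at va ... */
        destroy_ring_for_helper(&va, page); /* unmaps, drops page refs */
    }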

Code movement of mem_event and mem_access:
    xen/arch/x86/mm/mem_access.c -> xen/common/mem_access.c
    xen/arch/x86/mm/mem_event.c -> xen/common/mem_event.c
    xen/include/asm-x86/mem_access.h -> xen/include/xen/mem_access.h
    xen/include/asm-x86/mem_event.h -> xen/include/xen/mem_event.h

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Tim Deegan <tim@xen.org>
---
v3: Replace asm/domain.h with xen/sched.h in mem_event.c to better
    accommodate the new code location.
    Replace #ifdef CONFIG_X86 wrappers with HAS_MEM_ACCESS flags.
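    For reference, the intended pattern (sketch only; the fallback stub
    is illustrative, not part of this patch): an architecture opts in
    with HAS_MEM_ACCESS := y in its Rules.mk, xen/Rules.mk turns that
    into -DHAS_MEM_ACCESS, and common headers can then guard on it
    instead of on CONFIG_X86:

        #ifdef HAS_MEM_ACCESS
        int mem_access_memop(unsigned long cmd,
                             XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
        #else
        /* Hypothetical fallback for architectures without mem_access. */
        static inline int
        mem_access_memop(unsigned long cmd,
                         XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
        {
            return -ENOSYS;
        }
        #endif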

v2: Update MAINTAINERS.
    More descriptive commit message to aid in the review process.
---
 MAINTAINERS                      |   6 +
 xen/Rules.mk                     |   1 +
 xen/arch/x86/Rules.mk            |   1 +
 xen/arch/x86/domctl.c            |   2 +-
 xen/arch/x86/hvm/hvm.c           |  61 +---
 xen/arch/x86/mm/Makefile         |   2 -
 xen/arch/x86/mm/hap/nested_ept.c |   2 +-
 xen/arch/x86/mm/hap/nested_hap.c |   2 +-
 xen/arch/x86/mm/mem_access.c     | 133 --------
 xen/arch/x86/mm/mem_event.c      | 705 ---------------------------------------
 xen/arch/x86/mm/mem_paging.c     |   2 +-
 xen/arch/x86/mm/mem_sharing.c    |   2 +-
 xen/arch/x86/mm/p2m-pod.c        |   2 +-
 xen/arch/x86/mm/p2m-pt.c         |   2 +-
 xen/arch/x86/mm/p2m.c            |   2 +-
 xen/arch/x86/x86_64/compat/mm.c  |   4 +-
 xen/arch/x86/x86_64/mm.c         |   4 +-
 xen/common/Makefile              |   2 +
 xen/common/domain.c              |   1 +
 xen/common/mem_access.c          | 133 ++++++++
 xen/common/mem_event.c           | 705 +++++++++++++++++++++++++++++++++++++++
 xen/common/memory.c              |  63 ++++
 xen/include/asm-arm/mm.h         |   1 -
 xen/include/asm-x86/hvm/hvm.h    |   6 -
 xen/include/asm-x86/mem_access.h |  39 ---
 xen/include/asm-x86/mem_event.h  |  82 -----
 xen/include/asm-x86/mm.h         |   2 -
 xen/include/xen/mem_access.h     |  58 ++++
 xen/include/xen/mem_event.h      | 141 ++++++++
 xen/include/xen/mm.h             |   6 +
 30 files changed, 1131 insertions(+), 1041 deletions(-)
 delete mode 100644 xen/arch/x86/mm/mem_access.c
 delete mode 100644 xen/arch/x86/mm/mem_event.c
 create mode 100644 xen/common/mem_access.c
 create mode 100644 xen/common/mem_event.c
 delete mode 100644 xen/include/asm-x86/mem_access.h
 delete mode 100644 xen/include/asm-x86/mem_event.h
 create mode 100644 xen/include/xen/mem_access.h
 create mode 100644 xen/include/xen/mem_event.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 266e47b..f659180 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -337,6 +337,12 @@ F:	xen/arch/x86/mm/mem_sharing.c
 F:	xen/arch/x86/mm/mem_paging.c
 F:	tools/memshr
 
+MEMORY EVENT AND ACCESS
+M:	Tim Deegan <tim@xen.org>
+S:	Supported
+F:	xen/common/mem_event.c
+F:	xen/common/mem_access.c
+
 XENTRACE
 M:	George Dunlap <george.dunlap@eu.citrix.com>
 S:	Supported
diff --git a/xen/Rules.mk b/xen/Rules.mk
index b49f3c8..dc15b09 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -57,6 +57,7 @@ CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
 CFLAGS-$(HAS_GDBSX)     += -DHAS_GDBSX
 CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
 CFLAGS-$(HAS_DEVICE_TREE) += -DHAS_DEVICE_TREE
+CFLAGS-$(HAS_MEM_ACCESS)  += -DHAS_MEM_ACCESS
 CFLAGS-$(HAS_PCI)       += -DHAS_PCI
 CFLAGS-$(HAS_IOPORTS)   += -DHAS_IOPORTS
 CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 576985e..bd4e342 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -12,6 +12,7 @@ HAS_NS16550 := y
 HAS_EHCI := y
 HAS_KEXEC := y
 HAS_GDBSX := y
+HAS_MEM_ACCESS := y
 xenoprof := y
 
 #
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index d1517c4..3aeb79d 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -30,7 +30,7 @@
 #include <xen/hypercall.h> /* for arch_do_domctl */
 #include <xsm/xsm.h>
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 83e6fae..3569481 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -63,8 +63,8 @@
 #include <public/hvm/ioreq.h>
 #include <public/version.h>
 #include <public/memory.h>
-#include <asm/mem_event.h>
-#include <asm/mem_access.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
 #include <public/mem_event.h>
 #include <xen/rangeset.h>
 #include <public/arch-x86/cpuid.h>
@@ -484,19 +484,6 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
     clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
 }
 
-void destroy_ring_for_helper(
-    void **_va, struct page_info *page)
-{
-    void *va = *_va;
-
-    if ( va != NULL )
-    {
-        unmap_domain_page_global(va);
-        put_page_and_type(page);
-        *_va = NULL;
-    }
-}
-
 static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -504,50 +491,6 @@ static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
     destroy_ring_for_helper(&iorp->va, iorp->page);
 }
 
-int prepare_ring_for_helper(
-    struct domain *d, unsigned long gmfn, struct page_info **_page,
-    void **_va)
-{
-    struct page_info *page;
-    p2m_type_t p2mt;
-    void *va;
-
-    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
-    if ( p2m_is_paging(p2mt) )
-    {
-        if ( page )
-            put_page(page);
-        p2m_mem_paging_populate(d, gmfn);
-        return -ENOENT;
-    }
-    if ( p2m_is_shared(p2mt) )
-    {
-        if ( page )
-            put_page(page);
-        return -ENOENT;
-    }
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    va = __map_domain_page_global(page);
-    if ( va == NULL )
-    {
-        put_page_and_type(page);
-        return -ENOMEM;
-    }
-
-    *_va = va;
-    *_page = page;
-
-    return 0;
-}
-
 static int hvm_map_ioreq_page(
     struct hvm_ioreq_server *s, bool_t buf, unsigned long gmfn)
 {
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 73dcdf4..ed4b1f8 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -6,10 +6,8 @@ obj-y += p2m.o p2m-pt.o p2m-ept.o p2m-pod.o
 obj-y += guest_walk_2.o
 obj-y += guest_walk_3.o
 obj-$(x86_64) += guest_walk_4.o
-obj-$(x86_64) += mem_event.o
 obj-$(x86_64) += mem_paging.o
 obj-$(x86_64) += mem_sharing.o
-obj-$(x86_64) += mem_access.o
 
 guest_walk_%.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 0d044bc..704bb66 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 137a87c..f6becd4 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -23,7 +23,7 @@
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
deleted file mode 100644
index e8465a5..0000000
--- a/xen/arch/x86/mm/mem_access.c
+++ /dev/null
@@ -1,133 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_access.c
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#include <xen/sched.h>
-#include <xen/guest_access.h>
-#include <xen/hypercall.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <xsm/xsm.h>
-
-
-int mem_access_memop(unsigned long cmd,
-                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
-{
-    long rc;
-    xen_mem_access_op_t mao;
-    struct domain *d;
-
-    if ( copy_from_guest(&mao, arg, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
-    if ( rc )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
-    if ( rc )
-        goto out;
-
-    rc = -ENODEV;
-    if ( unlikely(!d->mem_event->access.ring_page) )
-        goto out;
-
-    switch ( mao.op )
-    {
-    case XENMEM_access_op_resume:
-        p2m_mem_access_resume(d);
-        rc = 0;
-        break;
-
-    case XENMEM_access_op_set_access:
-    {
-        unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
-
-        rc = -EINVAL;
-        if ( (mao.pfn != ~0ull) &&
-             (mao.nr < start_iter ||
-              ((mao.pfn + mao.nr - 1) < mao.pfn) ||
-              ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
-            break;
-
-        rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
-                                MEMOP_CMD_MASK, mao.access);
-        if ( rc > 0 )
-        {
-            ASSERT(!(rc & MEMOP_CMD_MASK));
-            rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
-                                               XENMEM_access_op | rc, arg);
-        }
-        break;
-    }
-
-    case XENMEM_access_op_get_access:
-    {
-        xenmem_access_t access;
-
-        rc = -EINVAL;
-        if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
-            break;
-
-        rc = p2m_get_mem_access(d, mao.pfn, &access);
-        if ( rc != 0 )
-            break;
-
-        mao.access = access;
-        rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
-
-        break;
-    }
-
-    default:
-        rc = -ENOSYS;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
-{
-    int rc = mem_event_claim_slot(d, &d->mem_event->access);
-    if ( rc < 0 )
-        return rc;
-
-    mem_event_put_request(d, &d->mem_event->access, req);
-
-    return 0;
-} 
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
deleted file mode 100644
index ba7e71e..0000000
--- a/xen/arch/x86/mm/mem_event.c
+++ /dev/null
@@ -1,705 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_event.c
- *
- * Memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#include <asm/domain.h>
-#include <xen/event.h>
-#include <xen/wait.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <asm/mem_paging.h>
-#include <asm/mem_access.h>
-#include <asm/mem_sharing.h>
-#include <xsm/xsm.h>
-
-/* for public/io/ring.h macros */
-#define xen_mb()   mb()
-#define xen_rmb()  rmb()
-#define xen_wmb()  wmb()
-
-#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
-#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
-#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
-
-static int mem_event_enable(
-    struct domain *d,
-    xen_domctl_mem_event_op_t *mec,
-    struct mem_event_domain *med,
-    int pause_flag,
-    int param,
-    xen_event_channel_notification_t notification_fn)
-{
-    int rc;
-    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
-
-    /* Only one helper at a time. If the helper crashed,
-     * the ring is in an undefined state and so is the guest.
-     */
-    if ( med->ring_page )
-        return -EBUSY;
-
-    /* The parameter defaults to zero, and it should be 
-     * set to something */
-    if ( ring_gfn == 0 )
-        return -ENOSYS;
-
-    mem_event_ring_lock_init(med);
-    mem_event_ring_lock(med);
-
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
-                                    &med->ring_page);
-    if ( rc < 0 )
-        goto err;
-
-    /* Set the number of currently blocked vCPUs to 0. */
-    med->blocked = 0;
-
-    /* Allocate event channel */
-    rc = alloc_unbound_xen_event_channel(d->vcpu[0],
-                                         current->domain->domain_id,
-                                         notification_fn);
-    if ( rc < 0 )
-        goto err;
-
-    med->xen_port = mec->port = rc;
-
-    /* Prepare ring buffer */
-    FRONT_RING_INIT(&med->front_ring,
-                    (mem_event_sring_t *)med->ring_page,
-                    PAGE_SIZE);
-
-    /* Save the pause flag for this particular ring. */
-    med->pause_flag = pause_flag;
-
-    /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&med->wq);
-
-    mem_event_ring_unlock(med);
-    return 0;
-
- err:
-    destroy_ring_for_helper(&med->ring_page, 
-                            med->ring_pg_struct);
-    mem_event_ring_unlock(med);
-
-    return rc;
-}
-
-static unsigned int mem_event_ring_available(struct mem_event_domain *med)
-{
-    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
-    avail_req -= med->target_producers;
-    avail_req -= med->foreign_producers;
-
-    BUG_ON(avail_req < 0);
-
-    return avail_req;
-}
-
-/*
- * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
- * ring. These vCPUs were paused on their way out after placing an event,
- * but need to be resumed where the ring is capable of processing at least
- * one event from them.
- */
-static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
-{
-    struct vcpu *v;
-    int online = d->max_vcpus;
-    unsigned int avail_req = mem_event_ring_available(med);
-
-    if ( avail_req == 0 || med->blocked == 0 )
-        return;
-
-    /*
-     * We ensure that we only have vCPUs online if there are enough free slots
-     * for their memory events to be processed.  This will ensure that no
-     * memory events are lost (due to the fact that certain types of events
-     * cannot be replayed, we need to ensure that there is space in the ring
-     * for when they are hit).
-     * See comment below in mem_event_put_request().
-     */
-    for_each_vcpu ( d, v )
-        if ( test_bit(med->pause_flag, &v->pause_flags) )
-            online--;
-
-    ASSERT(online == (d->max_vcpus - med->blocked));
-
-    /* We remember which vcpu last woke up to avoid scanning always linearly
-     * from zero and starving higher-numbered vcpus under high load */
-    if ( d->vcpu )
-    {
-        int i, j, k;
-
-        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
-        {
-            k = i % d->max_vcpus;
-            v = d->vcpu[k];
-            if ( !v )
-                continue;
-
-            if ( !(med->blocked) || online >= avail_req )
-               break;
-
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                online++;
-                med->blocked--;
-                med->last_vcpu_wake_up = k;
-            }
-        }
-    }
-}
-
-/*
- * In the event that a vCPU attempted to place an event in the ring and
- * was unable to do so, it is queued on a wait queue.  These are woken as
- * needed, and take precedence over the blocked vCPUs.
- */
-static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
-{
-    unsigned int avail_req = mem_event_ring_available(med);
-
-    if ( avail_req > 0 )
-        wake_up_nr(&med->wq, avail_req);
-}
-
-/*
- * mem_event_wake() will wakeup all vcpus waiting for the ring to
- * become available.  If we have queued vCPUs, they get top priority. We
- * are guaranteed that they will go through code paths that will eventually
- * call mem_event_wake() again, ensuring that any blocked vCPUs will get
- * unpaused once all the queued vCPUs have made it through.
- */
-void mem_event_wake(struct domain *d, struct mem_event_domain *med)
-{
-    if (!list_empty(&med->wq.list))
-        mem_event_wake_queued(d, med);
-    else
-        mem_event_wake_blocked(d, med);
-}
-
-static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
-{
-    if ( med->ring_page )
-    {
-        struct vcpu *v;
-
-        mem_event_ring_lock(med);
-
-        if ( !list_empty(&med->wq.list) )
-        {
-            mem_event_ring_unlock(med);
-            return -EBUSY;
-        }
-
-        /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d->vcpu[0], med->xen_port);
-
-        /* Unblock all vCPUs */
-        for_each_vcpu ( d, v )
-        {
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                med->blocked--;
-            }
-        }
-
-        destroy_ring_for_helper(&med->ring_page, 
-                                med->ring_pg_struct);
-        mem_event_ring_unlock(med);
-    }
-
-    return 0;
-}
-
-static inline void mem_event_release_slot(struct domain *d,
-                                          struct mem_event_domain *med)
-{
-    /* Update the accounting */
-    if ( current->domain == d )
-        med->target_producers--;
-    else
-        med->foreign_producers--;
-
-    /* Kick any waiters */
-    mem_event_wake(d, med);
-}
-
-/*
- * mem_event_mark_and_pause() tags vcpu and put it to sleep.
- * The vcpu will resume execution in mem_event_wake_waiters().
- */
-void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
-{
-    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
-    {
-        vcpu_pause_nosync(v);
-        med->blocked++;
-    }
-}
-
-/*
- * This must be preceded by a call to claim_slot(), and is guaranteed to
- * succeed.  As a side-effect however, the vCPU may be paused if the ring is
- * overly full and its continued execution would cause stalling and excessive
- * waiting.  The vCPU will be automatically unpaused when the ring clears.
- */
-void mem_event_put_request(struct domain *d,
-                           struct mem_event_domain *med,
-                           mem_event_request_t *req)
-{
-    mem_event_front_ring_t *front_ring;
-    int free_req;
-    unsigned int avail_req;
-    RING_IDX req_prod;
-
-    if ( current->domain != d )
-    {
-        req->flags |= MEM_EVENT_FLAG_FOREIGN;
-        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
-    }
-
-    mem_event_ring_lock(med);
-
-    /* Due to the reservations, this step must succeed. */
-    front_ring = &med->front_ring;
-    free_req = RING_FREE_REQUESTS(front_ring);
-    ASSERT(free_req > 0);
-
-    /* Copy request */
-    req_prod = front_ring->req_prod_pvt;
-    memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
-    req_prod++;
-
-    /* Update ring */
-    front_ring->req_prod_pvt = req_prod;
-    RING_PUSH_REQUESTS(front_ring);
-
-    /* We've actually *used* our reservation, so release the slot. */
-    mem_event_release_slot(d, med);
-
-    /* Give this vCPU a black eye if necessary, on the way out.
-     * See the comments above wake_blocked() for more information
-     * on how this mechanism works to avoid waiting. */
-    avail_req = mem_event_ring_available(med);
-    if( current->domain == d && avail_req < d->max_vcpus )
-        mem_event_mark_and_pause(current, med);
-
-    mem_event_ring_unlock(med);
-
-    notify_via_xen_event_channel(d, med->xen_port);
-}
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
-{
-    mem_event_front_ring_t *front_ring;
-    RING_IDX rsp_cons;
-
-    mem_event_ring_lock(med);
-
-    front_ring = &med->front_ring;
-    rsp_cons = front_ring->rsp_cons;
-
-    if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
-    {
-        mem_event_ring_unlock(med);
-        return 0;
-    }
-
-    /* Copy response */
-    memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
-    rsp_cons++;
-
-    /* Update ring */
-    front_ring->rsp_cons = rsp_cons;
-    front_ring->sring->rsp_event = rsp_cons + 1;
-
-    /* Kick any waiters -- since we've just consumed an event,
-     * there may be additional space available in the ring. */
-    mem_event_wake(d, med);
-
-    mem_event_ring_unlock(med);
-
-    return 1;
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
-{
-    mem_event_ring_lock(med);
-    mem_event_release_slot(d, med);
-    mem_event_ring_unlock(med);
-}
-
-static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
-{
-    unsigned int avail_req;
-
-    if ( !med->ring_page )
-        return -ENOSYS;
-
-    mem_event_ring_lock(med);
-
-    avail_req = mem_event_ring_available(med);
-    if ( avail_req == 0 )
-    {
-        mem_event_ring_unlock(med);
-        return -EBUSY;
-    }
-
-    if ( !foreign )
-        med->target_producers++;
-    else
-        med->foreign_producers++;
-
-    mem_event_ring_unlock(med);
-
-    return 0;
-}
-
-/* Simple try_grab wrapper for use in the wait_event() macro. */
-static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
-{
-    *rc = mem_event_grab_slot(med, 0);
-    return *rc;
-}
-
-/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
-static int mem_event_wait_slot(struct mem_event_domain *med)
-{
-    int rc = -EBUSY;
-    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
-    return rc;
-}
-
-bool_t mem_event_check_ring(struct mem_event_domain *med)
-{
-    return (med->ring_page != NULL);
-}
-
-/*
- * Determines whether or not the current vCPU belongs to the target domain,
- * and calls the appropriate wait function.  If it is a guest vCPU, then we
- * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
- * this function will always return 0 for a guest.  For a non-guest, we check
- * for space and return -EBUSY if the ring is not available.
- *
- * Return codes: -ENOSYS: the ring is not yet configured
- *               -EBUSY: the ring is busy
- *               0: a spot has been reserved
- *
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep)
-{
-    if ( (current->domain == d) && allow_sleep )
-        return mem_event_wait_slot(med);
-    else
-        return mem_event_grab_slot(med, (current->domain != d));
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_paging_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
-        p2m_mem_paging_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_access_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
-        p2m_mem_access_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_sharing_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
-        mem_sharing_sharing_resume(v->domain);
-}
-
-int do_mem_event_op(int op, uint32_t domain, void *arg)
-{
-    int ret;
-    struct domain *d;
-
-    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
-    if ( ret )
-        return ret;
-
-    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
-    if ( ret )
-        goto out;
-
-    switch (op)
-    {
-        case XENMEM_paging_op:
-            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
-            break;
-        case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
-            break;
-        default:
-            ret = -ENOSYS;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return ret;
-}
-
-/* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d)
-{
-    if ( d->mem_event->paging.ring_page ) {
-        /* Destroying the wait queue head means waking up all
-         * queued vcpus. This will drain the list, allowing
-         * the disable routine to complete. It will also drop
-         * all domain refs the wait-queued vcpus are holding.
-         * Finally, because this code path involves previously
-         * pausing the domain (domain_kill), unpausing the 
-         * vcpus causes no harm. */
-        destroy_waitqueue_head(&d->mem_event->paging.wq);
-        (void)mem_event_disable(d, &d->mem_event->paging);
-    }
-    if ( d->mem_event->access.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->access.wq);
-        (void)mem_event_disable(d, &d->mem_event->access);
-    }
-    if ( d->mem_event->share.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->share.wq);
-        (void)mem_event_disable(d, &d->mem_event->share);
-    }
-}
-
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
-    int rc;
-
-    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
-    if ( rc )
-        return rc;
-
-    if ( unlikely(d == current->domain) )
-    {
-        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
-        return -EINVAL;
-    }
-
-    if ( unlikely(d->is_dying) )
-    {
-        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
-                 d->domain_id);
-        return 0;
-    }
-
-    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
-    {
-        gdprintk(XENLOG_INFO,
-                 "Memory event op on a domain (%u) with no vcpus\n",
-                 d->domain_id);
-        return -EINVAL;
-    }
-
-    rc = -ENOSYS;
-
-    switch ( mec->mode )
-    {
-    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
-    {
-        struct mem_event_domain *med = &d->mem_event->paging;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
-        {
-            struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-            rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
-                break;
-
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* No paging if iommu is used */
-            rc = -EMLINK;
-            if ( unlikely(need_iommu(d)) )
-                break;
-
-            rc = -EXDEV;
-            /* Disallow paging in a PoD guest */
-            if ( p2m->pod.entry_count )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
-                                    HVM_PARAM_PAGING_RING_PFN,
-                                    mem_paging_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
-    {
-        struct mem_event_domain *med = &d->mem_event->access;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
-        {
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* Currently only EPT is supported */
-            if ( !cpu_has_vmx )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_access, 
-                                    HVM_PARAM_ACCESS_RING_PFN,
-                                    mem_access_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
-    {
-        struct mem_event_domain *med = &d->mem_event->share;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
-        {
-            rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
-                break;
-
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
-                                    HVM_PARAM_SHARING_RING_PFN,
-                                    mem_sharing_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    default:
-        rc = -ENOSYS;
-    }
-
-    return rc;
-}
-
-void mem_event_vcpu_pause(struct vcpu *v)
-{
-    ASSERT(v == current);
-
-    atomic_inc(&v->mem_event_pause_count);
-    vcpu_pause_nosync(v);
-}
-
-void mem_event_vcpu_unpause(struct vcpu *v)
-{
-    int old, new, prev = v->mem_event_pause_count.counter;
-
-    /* All unpause requests as a result of toolstack responses.  Prevent
-     * underflow of the vcpu pause count. */
-    do
-    {
-        old = prev;
-        new = old - 1;
-
-        if ( new < 0 )
-        {
-            printk(XENLOG_G_WARNING
-                   "%pv mem_event: Too many unpause attempts\n", v);
-            return;
-        }
-
-        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
-    } while ( prev != old );
-
-    vcpu_unpause(v);
-}
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 235776d..65f6a3d 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -22,7 +22,7 @@
 
 
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 
 
 int mem_paging_memop(struct domain *d, xen_mem_event_op_t *mec)
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 79188b9..fa845fd 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -30,7 +30,7 @@
 #include <asm/page.h>
 #include <asm/string.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <asm/atomic.h>
 #include <xen/rcupdate.h>
 #include <asm/event.h>
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index bd4c7c8..881259a 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -26,7 +26,7 @@
 #include <asm/p2m.h>
 #include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 085ab6f..46231cf 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -30,7 +30,7 @@
 #include <asm/paging.h>
 #include <asm/p2m.h>
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index c2e89e1..ac30e9c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -30,7 +30,7 @@
 #include <asm/p2m.h>
 #include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <xen/event.h>
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 69c6195..203c6b4 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -2,9 +2,9 @@
 #include <xen/multicall.h>
 #include <compat/memory.h>
 #include <compat/xen.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
+#include <xen/mem_access.h>
 
 int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 4937f9a..1f9702d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -35,9 +35,9 @@
 #include <asm/msr.h>
 #include <asm/setup.h>
 #include <asm/numa.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
+#include <xen/mem_access.h>
 #include <public/memory.h>
 
 /* Parameters for PFN/MADDR compression. */
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 3683ae3..b9f3387 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -51,6 +51,8 @@ obj-y += tmem_xen.o
 obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += lzo.o
+obj-$(HAS_MEM_ACCESS) += mem_access.o
+obj-$(HAS_MEM_ACCESS) += mem_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1952070..6f51311 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -15,6 +15,7 @@
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
+#include <xen/mem_event.h>
 #include <xen/time.h>
 #include <xen/console.h>
 #include <xen/softirq.h>
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
new file mode 100644
index 0000000..07161a2
--- /dev/null
+++ b/xen/common/mem_access.c
@@ -0,0 +1,133 @@
+/******************************************************************************
+ * mem_access.c
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#include <xen/sched.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <asm/p2m.h>
+#include <public/memory.h>
+#include <xen/mem_event.h>
+#include <xsm/xsm.h>
+
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+    long rc;
+    xen_mem_access_op_t mao;
+    struct domain *d;
+
+    if ( copy_from_guest(&mao, arg, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
+    if ( rc )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    if ( rc )
+        goto out;
+
+    rc = -ENODEV;
+    if ( unlikely(!d->mem_event->access.ring_page) )
+        goto out;
+
+    switch ( mao.op )
+    {
+    case XENMEM_access_op_resume:
+        p2m_mem_access_resume(d);
+        rc = 0;
+        break;
+
+    case XENMEM_access_op_set_access:
+    {
+        unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
+
+        rc = -EINVAL;
+        if ( (mao.pfn != ~0ull) &&
+             (mao.nr < start_iter ||
+              ((mao.pfn + mao.nr - 1) < mao.pfn) ||
+              ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
+            break;
+
+        rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
+                                MEMOP_CMD_MASK, mao.access);
+        if ( rc > 0 )
+        {
+            ASSERT(!(rc & MEMOP_CMD_MASK));
+            rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
+                                               XENMEM_access_op | rc, arg);
+        }
+        break;
+    }
+
+    case XENMEM_access_op_get_access:
+    {
+        xenmem_access_t access;
+
+        rc = -EINVAL;
+        if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
+            break;
+
+        rc = p2m_get_mem_access(d, mao.pfn, &access);
+        if ( rc != 0 )
+            break;
+
+        mao.access = access;
+        rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
+
+        break;
+    }
+
+    default:
+        rc = -ENOSYS;
+        break;
+    }
+
+ out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+    int rc = mem_event_claim_slot(d, &d->mem_event->access);
+    if ( rc < 0 )
+        return rc;
+
+    mem_event_put_request(d, &d->mem_event->access, req);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
new file mode 100644
index 0000000..b4a23fd
--- /dev/null
+++ b/xen/common/mem_event.c
@@ -0,0 +1,705 @@
+/******************************************************************************
+ * mem_event.c
+ *
+ * Memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#include <xen/sched.h>
+#include <xen/event.h>
+#include <xen/wait.h>
+#include <asm/p2m.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <asm/mem_paging.h>
+#include <asm/mem_sharing.h>
+#include <xsm/xsm.h>
+
+/* for public/io/ring.h macros */
+#define xen_mb()   mb()
+#define xen_rmb()  rmb()
+#define xen_wmb()  wmb()
+
+#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
+#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
+#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
+
+static int mem_event_enable(
+    struct domain *d,
+    xen_domctl_mem_event_op_t *mec,
+    struct mem_event_domain *med,
+    int pause_flag,
+    int param,
+    xen_event_channel_notification_t notification_fn)
+{
+    int rc;
+    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
+
+    /* Only one helper at a time. If the helper crashed,
+     * the ring is in an undefined state and so is the guest.
+     */
+    if ( med->ring_page )
+        return -EBUSY;
+
+    /* The parameter defaults to zero, and it should be 
+     * set to something */
+    if ( ring_gfn == 0 )
+        return -ENOSYS;
+
+    mem_event_ring_lock_init(med);
+    mem_event_ring_lock(med);
+
+    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
+                                    &med->ring_page);
+    if ( rc < 0 )
+        goto err;
+
+    /* Set the number of currently blocked vCPUs to 0. */
+    med->blocked = 0;
+
+    /* Allocate event channel */
+    rc = alloc_unbound_xen_event_channel(d->vcpu[0],
+                                         current->domain->domain_id,
+                                         notification_fn);
+    if ( rc < 0 )
+        goto err;
+
+    med->xen_port = mec->port = rc;
+
+    /* Prepare ring buffer */
+    FRONT_RING_INIT(&med->front_ring,
+                    (mem_event_sring_t *)med->ring_page,
+                    PAGE_SIZE);
+
+    /* Save the pause flag for this particular ring. */
+    med->pause_flag = pause_flag;
+
+    /* Initialize the last-chance wait queue. */
+    init_waitqueue_head(&med->wq);
+
+    mem_event_ring_unlock(med);
+    return 0;
+
+ err:
+    destroy_ring_for_helper(&med->ring_page, 
+                            med->ring_pg_struct);
+    mem_event_ring_unlock(med);
+
+    return rc;
+}
+
+static unsigned int mem_event_ring_available(struct mem_event_domain *med)
+{
+    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
+    avail_req -= med->target_producers;
+    avail_req -= med->foreign_producers;
+
+    BUG_ON(avail_req < 0);
+
+    return avail_req;
+}
+
+/*
+ * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
+ * ring. These vCPUs were paused on their way out after placing an event,
+ * but need to be resumed where the ring is capable of processing at least
+ * one event from them.
+ */
+static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
+{
+    struct vcpu *v;
+    int online = d->max_vcpus;
+    unsigned int avail_req = mem_event_ring_available(med);
+
+    if ( avail_req == 0 || med->blocked == 0 )
+        return;
+
+    /*
+     * We ensure that we only have vCPUs online if there are enough free slots
+     * for their memory events to be processed.  This will ensure that no
+     * memory events are lost (due to the fact that certain types of events
+     * cannot be replayed, we need to ensure that there is space in the ring
+     * for when they are hit).
+     * See comment below in mem_event_put_request().
+     */
+    for_each_vcpu ( d, v )
+        if ( test_bit(med->pause_flag, &v->pause_flags) )
+            online--;
+
+    ASSERT(online == (d->max_vcpus - med->blocked));
+
+    /* We remember which vcpu last woke up to avoid scanning always linearly
+     * from zero and starving higher-numbered vcpus under high load */
+    if ( d->vcpu )
+    {
+        int i, j, k;
+
+        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
+        {
+            k = i % d->max_vcpus;
+            v = d->vcpu[k];
+            if ( !v )
+                continue;
+
+            if ( !(med->blocked) || online >= avail_req )
+               break;
+
+            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            {
+                vcpu_unpause(v);
+                online++;
+                med->blocked--;
+                med->last_vcpu_wake_up = k;
+            }
+        }
+    }
+}
+
+/*
+ * In the event that a vCPU attempted to place an event in the ring and
+ * was unable to do so, it is queued on a wait queue.  These are woken as
+ * needed, and take precedence over the blocked vCPUs.
+ */
+static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
+{
+    unsigned int avail_req = mem_event_ring_available(med);
+
+    if ( avail_req > 0 )
+        wake_up_nr(&med->wq, avail_req);
+}
+
+/*
+ * mem_event_wake() will wakeup all vcpus waiting for the ring to
+ * become available.  If we have queued vCPUs, they get top priority. We
+ * are guaranteed that they will go through code paths that will eventually
+ * call mem_event_wake() again, ensuring that any blocked vCPUs will get
+ * unpaused once all the queued vCPUs have made it through.
+ */
+void mem_event_wake(struct domain *d, struct mem_event_domain *med)
+{
+    if (!list_empty(&med->wq.list))
+        mem_event_wake_queued(d, med);
+    else
+        mem_event_wake_blocked(d, med);
+}
+
+static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
+{
+    if ( med->ring_page )
+    {
+        struct vcpu *v;
+
+        mem_event_ring_lock(med);
+
+        if ( !list_empty(&med->wq.list) )
+        {
+            mem_event_ring_unlock(med);
+            return -EBUSY;
+        }
+
+        /* Free domU's event channel and leave the other one unbound */
+        free_xen_event_channel(d->vcpu[0], med->xen_port);
+
+        /* Unblock all vCPUs */
+        for_each_vcpu ( d, v )
+        {
+            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            {
+                vcpu_unpause(v);
+                med->blocked--;
+            }
+        }
+
+        destroy_ring_for_helper(&med->ring_page, 
+                                med->ring_pg_struct);
+        mem_event_ring_unlock(med);
+    }
+
+    return 0;
+}
+
+static inline void mem_event_release_slot(struct domain *d,
+                                          struct mem_event_domain *med)
+{
+    /* Update the accounting */
+    if ( current->domain == d )
+        med->target_producers--;
+    else
+        med->foreign_producers--;
+
+    /* Kick any waiters */
+    mem_event_wake(d, med);
+}
+
+/*
+ * mem_event_mark_and_pause() tags vcpu and put it to sleep.
+ * The vcpu will resume execution in mem_event_wake_waiters().
+ */
+void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
+{
+    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
+    {
+        vcpu_pause_nosync(v);
+        med->blocked++;
+    }
+}
+
+/*
+ * This must be preceded by a call to claim_slot(), and is guaranteed to
+ * succeed.  As a side-effect however, the vCPU may be paused if the ring is
+ * overly full and its continued execution would cause stalling and excessive
+ * waiting.  The vCPU will be automatically unpaused when the ring clears.
+ */
+void mem_event_put_request(struct domain *d,
+                           struct mem_event_domain *med,
+                           mem_event_request_t *req)
+{
+    mem_event_front_ring_t *front_ring;
+    int free_req;
+    unsigned int avail_req;
+    RING_IDX req_prod;
+
+    if ( current->domain != d )
+    {
+        req->flags |= MEM_EVENT_FLAG_FOREIGN;
+        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+    }
+
+    mem_event_ring_lock(med);
+
+    /* Due to the reservations, this step must succeed. */
+    front_ring = &med->front_ring;
+    free_req = RING_FREE_REQUESTS(front_ring);
+    ASSERT(free_req > 0);
+
+    /* Copy request */
+    req_prod = front_ring->req_prod_pvt;
+    memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
+    req_prod++;
+
+    /* Update ring */
+    front_ring->req_prod_pvt = req_prod;
+    RING_PUSH_REQUESTS(front_ring);
+
+    /* We've actually *used* our reservation, so release the slot. */
+    mem_event_release_slot(d, med);
+
+    /* Give this vCPU a black eye if necessary, on the way out.
+     * See the comments above wake_blocked() for more information
+     * on how this mechanism works to avoid waiting. */
+    avail_req = mem_event_ring_available(med);
+    if( current->domain == d && avail_req < d->max_vcpus )
+        mem_event_mark_and_pause(current, med);
+
+    mem_event_ring_unlock(med);
+
+    notify_via_xen_event_channel(d, med->xen_port);
+}
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
+{
+    mem_event_front_ring_t *front_ring;
+    RING_IDX rsp_cons;
+
+    mem_event_ring_lock(med);
+
+    front_ring = &med->front_ring;
+    rsp_cons = front_ring->rsp_cons;
+
+    if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
+    {
+        mem_event_ring_unlock(med);
+        return 0;
+    }
+
+    /* Copy response */
+    memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
+    rsp_cons++;
+
+    /* Update ring */
+    front_ring->rsp_cons = rsp_cons;
+    front_ring->sring->rsp_event = rsp_cons + 1;
+
+    /* Kick any waiters -- since we've just consumed an event,
+     * there may be additional space available in the ring. */
+    mem_event_wake(d, med);
+
+    mem_event_ring_unlock(med);
+
+    return 1;
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{
+    mem_event_ring_lock(med);
+    mem_event_release_slot(d, med);
+    mem_event_ring_unlock(med);
+}
+
+static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
+{
+    unsigned int avail_req;
+
+    if ( !med->ring_page )
+        return -ENOSYS;
+
+    mem_event_ring_lock(med);
+
+    avail_req = mem_event_ring_available(med);
+    if ( avail_req == 0 )
+    {
+        mem_event_ring_unlock(med);
+        return -EBUSY;
+    }
+
+    if ( !foreign )
+        med->target_producers++;
+    else
+        med->foreign_producers++;
+
+    mem_event_ring_unlock(med);
+
+    return 0;
+}
+
+/* Simple try_grab wrapper for use in the wait_event() macro. */
+static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
+{
+    *rc = mem_event_grab_slot(med, 0);
+    return *rc;
+}
+
+/* Loop on mem_event_grab_slot() until the ring is gone or a slot is free. */
+static int mem_event_wait_slot(struct mem_event_domain *med)
+{
+    int rc = -EBUSY;
+    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
+    return rc;
+}
+
+bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+    return (med->ring_page != NULL);
+}
+
+/*
+ * Determines whether or not the current vCPU belongs to the target domain,
+ * and calls the appropriate wait function.  If it is a guest vCPU, then we
+ * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
+ * this function will always return 0 for a guest.  For a non-guest, we check
+ * for space and return -EBUSY if the ring is not available.
+ *
+ * Return codes: -ENOSYS: the ring is not yet configured
+ *               -EBUSY: the ring is busy
+ *               0: a spot has been reserved
+ *
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep)
+{
+    if ( (current->domain == d) && allow_sleep )
+        return mem_event_wait_slot(med);
+    else
+        return mem_event_grab_slot(med, (current->domain != d));
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_paging_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
+        p2m_mem_paging_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_access_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
+        p2m_mem_access_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_sharing_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
+        mem_sharing_sharing_resume(v->domain);
+}
+
+int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+    int ret;
+    struct domain *d;
+
+    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
+    if ( ret )
+        return ret;
+
+    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
+    if ( ret )
+        goto out;
+
+    switch ( op )
+    {
+        case XENMEM_paging_op:
+            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
+            break;
+        case XENMEM_sharing_op:
+            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
+            break;
+        default:
+            ret = -ENOSYS;
+    }
+
+ out:
+    rcu_unlock_domain(d);
+    return ret;
+}
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d)
+{
+    if ( d->mem_event->paging.ring_page ) {
+        /* Destroying the wait queue head means waking up all
+         * queued vcpus. This will drain the list, allowing
+         * the disable routine to complete. It will also drop
+         * all domain refs the wait-queued vcpus are holding.
+         * Finally, because this code path involves previously
+         * pausing the domain (domain_kill), unpausing the 
+         * vcpus causes no harm. */
+        destroy_waitqueue_head(&d->mem_event->paging.wq);
+        (void)mem_event_disable(d, &d->mem_event->paging);
+    }
+    if ( d->mem_event->access.ring_page ) {
+        destroy_waitqueue_head(&d->mem_event->access.wq);
+        (void)mem_event_disable(d, &d->mem_event->access);
+    }
+    if ( d->mem_event->share.ring_page ) {
+        destroy_waitqueue_head(&d->mem_event->share.wq);
+        (void)mem_event_disable(d, &d->mem_event->share);
+    }
+}
+
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    int rc;
+
+    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
+    if ( rc )
+        return rc;
+
+    if ( unlikely(d == current->domain) )
+    {
+        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
+                 d->domain_id);
+        return 0;
+    }
+
+    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
+    {
+        gdprintk(XENLOG_INFO,
+                 "Memory event op on a domain (%u) with no vcpus\n",
+                 d->domain_id);
+        return -EINVAL;
+    }
+
+    rc = -ENOSYS;
+
+    switch ( mec->mode )
+    {
+    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
+    {
+        struct mem_event_domain *med = &d->mem_event->paging;
+        rc = -EINVAL;
+
+        switch ( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
+        {
+            struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+            rc = -EOPNOTSUPP;
+            /* pvh fixme: p2m_is_foreign types need addressing */
+            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+                break;
+
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            /* No paging if iommu is used */
+            rc = -EMLINK;
+            if ( unlikely(need_iommu(d)) )
+                break;
+
+            rc = -EXDEV;
+            /* Disallow paging in a PoD guest */
+            if ( p2m->pod.entry_count )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
+                                    HVM_PARAM_PAGING_RING_PFN,
+                                    mem_paging_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
+    {
+        struct mem_event_domain *med = &d->mem_event->access;
+        rc = -EINVAL;
+
+        switch ( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
+        {
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            /* Currently only EPT is supported */
+            if ( !cpu_has_vmx )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_access, 
+                                    HVM_PARAM_ACCESS_RING_PFN,
+                                    mem_access_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
+    {
+        struct mem_event_domain *med = &d->mem_event->share;
+        rc = -EINVAL;
+
+        switch ( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
+        {
+            rc = -EOPNOTSUPP;
+            /* pvh fixme: p2m_is_foreign types need addressing */
+            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+                break;
+
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
+                                    HVM_PARAM_SHARING_RING_PFN,
+                                    mem_sharing_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    default:
+        rc = -ENOSYS;
+    }
+
+    return rc;
+}
+
+void mem_event_vcpu_pause(struct vcpu *v)
+{
+    ASSERT(v == current);
+
+    atomic_inc(&v->mem_event_pause_count);
+    vcpu_pause_nosync(v);
+}
+
+void mem_event_vcpu_unpause(struct vcpu *v)
+{
+    int old, new, prev = v->mem_event_pause_count.counter;
+
+    /* All unpause requests as a result of toolstack responses.  Prevent
+     * underflow of the vcpu pause count. */
+    do
+    {
+        old = prev;
+        new = old - 1;
+
+        if ( new < 0 )
+        {
+            printk(XENLOG_G_WARNING
+                   "%pv mem_event: Too many unpause attempts\n", v);
+            return;
+        }
+
+        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
+    } while ( prev != old );
+
+    vcpu_unpause(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index c2dd31b..cc8a3d0 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -977,6 +977,69 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return rc;
 }
 
+void destroy_ring_for_helper(
+    void **_va, struct page_info *page)
+{
+    void *va = *_va;
+
+    if ( va != NULL )
+    {
+        unmap_domain_page_global(va);
+        put_page_and_type(page);
+        *_va = NULL;
+    }
+}
+
+int prepare_ring_for_helper(
+    struct domain *d, unsigned long gmfn, struct page_info **_page,
+    void **_va)
+{
+    struct page_info *page;
+    p2m_type_t p2mt;
+    void *va;
+
+    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
+
+#ifdef CONFIG_MEM_PAGING
+    if ( p2m_is_paging(p2mt) )
+    {
+        if ( page )
+            put_page(page);
+        p2m_mem_paging_populate(d, gmfn);
+        return -ENOENT;
+    }
+#endif
+#ifdef CONFIG_MEM_SHARING
+    if ( p2m_is_shared(p2mt) )
+    {
+        if ( page )
+            put_page(page);
+        return -ENOENT;
+    }
+#endif
+
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    va = __map_domain_page_global(page);
+    if ( va == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    *_va = va;
+    *_page = page;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 9fa80a4..7fc3b97 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -301,7 +301,6 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
     })
 
 static inline void put_gfn(struct domain *d, unsigned long gfn) {}
-static inline void mem_event_cleanup(struct domain *d) {}
 static inline int relinquish_shared_pages(struct domain *d)
 {
     return 0;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 1123857..74e66f8 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -226,12 +226,6 @@ int hvm_vcpu_cacheattr_init(struct vcpu *v);
 void hvm_vcpu_cacheattr_destroy(struct vcpu *v);
 void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip);
 
-/* Prepare/destroy a ring for a dom0 helper. Helper with talk
- * with Xen on behalf of this hvm domain. */
-int prepare_ring_for_helper(struct domain *d, unsigned long gmfn, 
-                            struct page_info **_page, void **_va);
-void destroy_ring_for_helper(void **_va, struct page_info *page);
-
 bool_t hvm_send_assist_req(ioreq_t *p);
 void hvm_broadcast_assist_req(ioreq_t *p);
 
diff --git a/xen/include/asm-x86/mem_access.h b/xen/include/asm-x86/mem_access.h
deleted file mode 100644
index 5c7c5fd..0000000
--- a/xen/include/asm-x86/mem_access.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_access.h
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-#ifndef _XEN_ASM_MEM_ACCESS_H
-#define _XEN_ASM_MEM_ACCESS_H
-
-int mem_access_memop(unsigned long cmd,
-                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
-int mem_access_send_req(struct domain *d, mem_event_request_t *req);
-
-#endif /* _XEN_ASM_MEM_ACCESS_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
deleted file mode 100644
index ed4481a..0000000
--- a/xen/include/asm-x86/mem_event.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_event.h
- *
- * Common interface for memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#ifndef __MEM_EVENT_H__
-#define __MEM_EVENT_H__
-
-/* Returns whether a ring has been set up */
-bool_t mem_event_check_ring(struct mem_event_domain *med);
-
-/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
- * available space and the caller is a foreign domain. If the guest itself
- * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
- * that the ring does not lose future events. 
- *
- * However, the allow_sleep flag can be set to false in cases in which it is ok
- * to lose future events, and thus -EBUSY can be returned to guest vcpus
- * (handle with care!). 
- *
- * In general, you must follow a claim_slot() call with either put_request() or
- * cancel_slot(), both of which are guaranteed to
- * succeed. 
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d, 
-                                        struct mem_event_domain *med)
-{
-    return __mem_event_claim_slot(d, med, 1);
-}
-
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
-{
-    return __mem_event_claim_slot(d, med, 0);
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
-
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req);
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp);
-
-int do_mem_event_op(int op, uint32_t domain, void *arg);
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
-
-void mem_event_vcpu_pause(struct vcpu *v);
-void mem_event_vcpu_unpause(struct vcpu *v);
-
-#endif /* __MEM_EVENT_H__ */
-
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7b85865..ebd482d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -611,8 +611,6 @@ unsigned int domain_clamp_alloc_bitsize(struct domain *d, unsigned int bits);
 
 unsigned long domain_get_maximum_gpfn(struct domain *d);
 
-void mem_event_cleanup(struct domain *d);
-
 extern struct domain *dom_xen, *dom_io, *dom_cow;	/* for vmcoreinfo */
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
new file mode 100644
index 0000000..6c5a068
--- /dev/null
+++ b/xen/include/xen/mem_access.h
@@ -0,0 +1,58 @@
+/******************************************************************************
+ * mem_access.h
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef _XEN_ASM_MEM_ACCESS_H
+#define _XEN_ASM_MEM_ACCESS_H
+
+#ifdef HAS_MEM_ACCESS
+
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
+int mem_access_send_req(struct domain *d, mem_event_request_t *req);
+
+#else
+
+static inline
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+    return -ENOSYS;
+}
+
+static inline
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+    return -ENOSYS;
+}
+
+#endif /* HAS_MEM_ACCESS */
+
+#endif /* _XEN_ASM_MEM_ACCESS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
new file mode 100644
index 0000000..5e3963e
--- /dev/null
+++ b/xen/include/xen/mem_event.h
@@ -0,0 +1,141 @@
+/******************************************************************************
+ * mem_event.h
+ *
+ * Common interface for memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#ifndef __MEM_EVENT_H__
+#define __MEM_EVENT_H__
+
+#ifdef HAS_MEM_ACCESS
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d);
+
+/* Returns whether a ring has been set up */
+bool_t mem_event_check_ring(struct mem_event_domain *med);
+
+/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
+ * available space and the caller is a foreign domain. If the guest itself
+ * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
+ * that the ring does not lose future events. 
+ *
+ * However, the allow_sleep flag can be set to false in cases in which it is ok
+ * to lose future events, and thus -EBUSY can be returned to guest vcpus
+ * (handle with care!). 
+ *
+ * In general, you must follow a claim_slot() call with either put_request() or
+ * cancel_slot(), both of which are guaranteed to
+ * succeed. 
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep);
+static inline int mem_event_claim_slot(struct domain *d, 
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 1);
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 0);
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
+
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+                            mem_event_request_t *req);
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+                           mem_event_response_t *rsp);
+
+int do_mem_event_op(int op, uint32_t domain, void *arg);
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+
+void mem_event_vcpu_pause(struct vcpu *v);
+void mem_event_vcpu_unpause(struct vcpu *v);
+
+#else
+
+static inline void mem_event_cleanup(struct domain *d) {}
+
+static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+    return 0;
+}
+
+static inline int mem_event_claim_slot(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return -ENOSYS;
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return -ENOSYS;
+}
+
+static inline
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{}
+
+static inline
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+                            mem_event_request_t *req)
+{}
+
+static inline
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+                           mem_event_response_t *rsp)
+{
+    return -ENOSYS;
+}
+
+static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+    return -ENOSYS;
+}
+
+static inline
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    return -ENOSYS;
+}
+
+static inline void mem_event_vcpu_pause(struct vcpu *v) {}
+static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
+
+#endif /* HAS_MEM_ACCESS */
+
+#endif /* __MEM_EVENT_H__ */
+
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index b183189..7c0efc7 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -371,4 +371,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn);
 /* TRUE if the whole page at @mfn is of the requested RAM type(s) above. */
 int page_is_ram_type(unsigned long mfn, unsigned long mem_type);
 
+/* Prepare/destroy a ring for a dom0 helper. The helper will talk
+ * with Xen on behalf of this domain. */
+int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
+                            struct page_info **_page, void **_va);
+void destroy_ring_for_helper(void **_va, struct page_info *page);
+
 #endif /* __XEN_MM_H__ */
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread
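
For reference, the claim/cancel/put contract documented above in
xen/include/xen/mem_event.h amounts to the following calling pattern.
This is a minimal sketch, not code from the patch, and the bail-out
condition no_longer_needed() is hypothetical:

    /* Sketch: reserving a ring slot before producing a request. */
    static int send_mem_event(struct domain *d, struct mem_event_domain *med,
                              mem_event_request_t *req)
    {
        int rc = mem_event_claim_slot(d, med); /* may sleep for guest vcpus */

        if ( rc )
            return rc;        /* -ENOSYS: no ring; -EBUSY: foreign caller */

        if ( no_longer_needed(req) )           /* hypothetical bail-out */
        {
            mem_event_cancel_slot(d, med);     /* release the reservation */
            return 0;
        }

        mem_event_put_request(d, med, req);    /* guaranteed to succeed */
        return 0;
    }

Every successful claim must be followed by exactly one of
mem_event_put_request() or mem_event_cancel_slot(), both of which are
guaranteed to succeed.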

* [PATCH v3 02/15] xen: Relocate struct npfec definition into common
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
  2014-09-01 14:21 ` [PATCH v3 01/15] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
@ 2014-09-01 14:21 ` Tamas K Lengyel
  2014-09-01 15:44   ` Jan Beulich
  2014-09-01 14:21 ` [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop " Tamas K Lengyel
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:21 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Nested page fault exception code definitions can be reused on ARM as well.
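
As an illustration, an ARM stage-2 data-abort handler can describe a
permission fault in the same terms the x86 handlers already use. The
field values below are hypothetical and only sketch the intended use:

    /* Sketch: describing a stage-2 write fault with struct npfec. */
    const struct npfec npfec = {
        .write_access = 1,                /* the faulting access was a write */
        .gla_valid    = 1,                /* guest VA known from the fault */
        .kind         = npfec_kind_with_gla,
    };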

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 xen/include/asm-x86/hvm/hvm.h |  1 +
 xen/include/asm-x86/mm.h      | 21 ---------------------
 xen/include/xen/mm.h          | 21 +++++++++++++++++++++
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 74e66f8..2701e35 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -27,6 +27,7 @@
 #include <public/domctl.h>
 #include <public/hvm/save.h>
 #include <public/hvm/ioreq.h>
+#include <xen/mm.h>
 #include <asm/mm.h>
 
 /* Interrupt acknowledgement sources. */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index ebd482d..bafd28c 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -551,27 +551,6 @@ void audit_domains(void);
 
 #endif
 
-/*
- * Extra fault info types which are used to further describe
- * the source of an access violation.
- */
-typedef enum {
-    npfec_kind_unknown, /* must be first */
-    npfec_kind_in_gpt,  /* violation in guest page table */
-    npfec_kind_with_gla /* violation with guest linear address */
-} npfec_kind_t;
-
-/*
- * Nested page fault exception codes.
- */
-struct npfec {
-    unsigned int read_access:1;
-    unsigned int write_access:1;
-    unsigned int insn_fetch:1;
-    unsigned int gla_valid:1;
-    unsigned int kind:2;  /* npfec_kind_t */
-};
-
 int new_guest_cr3(unsigned long pfn);
 void make_cr3(struct vcpu *v, unsigned long mfn);
 void update_cr3(struct vcpu *v);
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 7c0efc7..74a65a6 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -88,6 +88,27 @@ int assign_pages(
 /* Dump info to serial console */
 void arch_dump_shared_mem_info(void);
 
+/*
+ * Extra fault info types which are used to further describe
+ * the source of an access violation.
+ */
+typedef enum {
+    npfec_kind_unknown, /* must be first */
+    npfec_kind_in_gpt,  /* violation in guest page table */
+    npfec_kind_with_gla /* violation with guest linear address */
+} npfec_kind_t;
+
+/*
+ * Nested page fault exception codes.
+ */
+struct npfec {
+    unsigned int read_access:1;
+    unsigned int write_access:1;
+    unsigned int insn_fetch:1;
+    unsigned int gla_valid:1;
+    unsigned int kind:2;  /* npfec_kind_t */
+};
+
 /* memflags: */
 #define _MEMF_no_refcount 0
 #define  MEMF_no_refcount (1U<<_MEMF_no_refcount)
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
  2014-09-01 14:21 ` [PATCH v3 01/15] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
  2014-09-01 14:21 ` [PATCH v3 02/15] xen: Relocate struct npfec definition " Tamas K Lengyel
@ 2014-09-01 14:21 ` Tamas K Lengyel
  2014-09-01 15:46   ` Jan Beulich
  2014-09-01 18:11   ` Julien Grall
  2014-09-01 14:21 ` [PATCH v3 04/15] xen/mem_event: Clean out superfluous white-spaces Tamas K Lengyel
                   ` (12 subsequent siblings)
  15 siblings, 2 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:21 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 xen/arch/x86/domctl.c           | 8 --------
 xen/arch/x86/x86_64/compat/mm.c | 4 ----
 xen/arch/x86/x86_64/mm.c        | 4 ----
 xen/common/domctl.c             | 9 +++++++++
 xen/common/memory.c             | 5 +++++
 5 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 3aeb79d..55a9495 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1207,14 +1207,6 @@ long arch_do_domctl(
     }
     break;
 
-    case XEN_DOMCTL_mem_event_op:
-    {
-        ret = mem_event_domctl(d, &domctl->u.mem_event_op,
-                              guest_handle_cast(u_domctl, void));
-        copyback = 1;
-    }
-    break;
-
     case XEN_DOMCTL_mem_sharing_op:
     {
         ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 203c6b4..9c1a36d 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -198,10 +198,6 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-    case XENMEM_access_op:
-        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
-        break;
-
     case XENMEM_sharing_op:
     {
         xen_mem_sharing_op_t mso;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 1f9702d..c8272e9 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1048,10 +1048,6 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-    case XENMEM_access_op:
-        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
-        break;
-
     case XENMEM_sharing_op:
     {
         xen_mem_sharing_op_t mso;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index c326aba..489f84a 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -24,6 +24,7 @@
 #include <xen/bitmap.h>
 #include <xen/paging.h>
 #include <xen/hypercall.h>
+#include <xen/mem_event.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -967,6 +968,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     }
     break;
 
+    case XEN_DOMCTL_mem_event_op:
+    {
+        ret = mem_event_domctl(d, &op->u.mem_event_op,
+                              guest_handle_cast(u_domctl, void));
+        copyback = 1;
+    }
+    break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index cc8a3d0..4e530bf 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -25,6 +25,7 @@
 #include <asm/hardirq.h>
 #include <asm/p2m.h>
 #include <xen/numa.h>
+#include <xen/mem_access.h>
 #include <public/memory.h>
 #include <xsm/xsm.h>
 #include <xen/trace.h>
@@ -969,6 +970,10 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         break;
 
+    case XENMEM_access_op:
+        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
+        break;
+
     default:
         rc = arch_memory_op(cmd, arg);
         break;
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 04/15] xen/mem_event: Clean out superfluous white-spaces
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2014-09-01 14:21 ` [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop " Tamas K Lengyel
@ 2014-09-01 14:21 ` Tamas K Lengyel
  2014-09-01 14:21 ` [PATCH v3 05/15] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:21 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: Clean the mem_event header as well.
---
 xen/common/mem_event.c      | 20 ++++++++++----------
 xen/include/xen/mem_event.h |  8 ++++----
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index b4a23fd..eae4ba8 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -57,7 +57,7 @@ static int mem_event_enable(
     if ( med->ring_page )
         return -EBUSY;
 
-    /* The parameter defaults to zero, and it should be 
+    /* The parameter defaults to zero, and it should be
      * set to something */
     if ( ring_gfn == 0 )
         return -ENOSYS;
@@ -65,7 +65,7 @@ static int mem_event_enable(
     mem_event_ring_lock_init(med);
     mem_event_ring_lock(med);
 
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
+    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
                                     &med->ring_page);
     if ( rc < 0 )
         goto err;
@@ -97,7 +97,7 @@ static int mem_event_enable(
     return 0;
 
  err:
-    destroy_ring_for_helper(&med->ring_page, 
+    destroy_ring_for_helper(&med->ring_page,
                             med->ring_pg_struct);
     mem_event_ring_unlock(med);
 
@@ -226,7 +226,7 @@ static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
             }
         }
 
-        destroy_ring_for_helper(&med->ring_page, 
+        destroy_ring_for_helper(&med->ring_page,
                                 med->ring_pg_struct);
         mem_event_ring_unlock(med);
     }
@@ -479,7 +479,7 @@ void mem_event_cleanup(struct domain *d)
          * the disable routine to complete. It will also drop
          * all domain refs the wait-queued vcpus are holding.
          * Finally, because this code path involves previously
-         * pausing the domain (domain_kill), unpausing the 
+         * pausing the domain (domain_kill), unpausing the
          * vcpus causes no harm. */
         destroy_waitqueue_head(&d->mem_event->paging.wq);
         (void)mem_event_disable(d, &d->mem_event->paging);
@@ -559,7 +559,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( p2m->pod.entry_count )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
                                     HVM_PARAM_PAGING_RING_PFN,
                                     mem_paging_notification);
         }
@@ -579,7 +579,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
-    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
+    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
     {
         struct mem_event_domain *med = &d->mem_event->access;
         rc = -EINVAL;
@@ -597,7 +597,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( !cpu_has_vmx )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_access, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_access,
                                     HVM_PARAM_ACCESS_RING_PFN,
                                     mem_access_notification);
         }
@@ -617,7 +617,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
+    case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
     {
         struct mem_event_domain *med = &d->mem_event->share;
         rc = -EINVAL;
@@ -636,7 +636,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( !hap_enabled(d) )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
                                     HVM_PARAM_SHARING_RING_PFN,
                                     mem_sharing_notification);
         }
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
index 5e3963e..33c9e7c 100644
--- a/xen/include/xen/mem_event.h
+++ b/xen/include/xen/mem_event.h
@@ -35,19 +35,19 @@ bool_t mem_event_check_ring(struct mem_event_domain *med);
 /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
  * available space and the caller is a foreign domain. If the guest itself
  * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
- * that the ring does not lose future events. 
+ * that the ring does not lose future events.
  *
  * However, the allow_sleep flag can be set to false in cases in which it is ok
  * to lose future events, and thus -EBUSY can be returned to guest vcpus
- * (handle with care!). 
+ * (handle with care!).
  *
  * In general, you must follow a claim_slot() call with either put_request() or
  * cancel_slot(), both of which are guaranteed to
- * succeed. 
+ * succeed.
  */
 int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
                             bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d, 
+static inline int mem_event_claim_slot(struct domain *d,
                                         struct mem_event_domain *med)
 {
     return __mem_event_claim_slot(d, med, 1);
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 05/15] xen/mem_event: Relax error condition on debug builds
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2014-09-01 14:21 ` [PATCH v3 04/15] xen/mem_event: Clean out superfluous white-spaces Tamas K Lengyel
@ 2014-09-01 14:21 ` Tamas K Lengyel
  2014-09-01 15:47   ` Jan Beulich
  2014-09-01 14:22 ` [PATCH v3 06/15] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:21 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

A faulty toolstack can brick a debug hypervisor, which is unpleasant
during development and testing.

Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v3: Switch to gdprintk and print the vCPU id as well.
---
 xen/common/mem_event.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index eae4ba8..f0feedd 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -278,7 +278,11 @@ void mem_event_put_request(struct domain *d,
     if ( current->domain != d )
     {
         req->flags |= MEM_EVENT_FLAG_FOREIGN;
-        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+#ifndef NDEBUG
+        if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
+            gdprintk(XENLOG_G_WARNING, "VCPU %"PRIu32" was not paused.\n",
+                     req->vcpu_id);
+#endif
     }
 
     mem_event_ring_lock(med);
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 06/15] xen/mem_event: Abstract architecture specific sanity checks
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2014-09-01 14:21 ` [PATCH v3 05/15] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 14:22 ` [PATCH v3 07/15] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Move the architecture-specific sanity checks into their own function,
which is called when enabling mem_event.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: Move sanity check function into architecture specific p2m.h
---
 xen/common/mem_event.c    | 11 ++++-------
 xen/include/asm-x86/p2m.h | 14 ++++++++++++++
 2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index f0feedd..584de16 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -592,14 +592,11 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         {
         case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
         {
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* Currently only EPT is supported */
-            if ( !cpu_has_vmx )
+            if ( !p2m_mem_event_sanity_check(d) )
+            {
+                rc = -ENODEV;
                 break;
+            }
 
             rc = mem_event_enable(d, mec, med, _VPF_mem_access,
                                     HVM_PARAM_ACCESS_RING_PFN,
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 3975e32..9a1fae3 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -613,6 +613,20 @@ long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
 int p2m_get_mem_access(struct domain *d, unsigned long pfn,
                        xenmem_access_t *access);
 
+/* Sanity check for mem_event hardware support */
+static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
+{
+    /* Only HAP is supported */
+    if ( !hap_enabled(d) )
+        return 0;
+
+    /* Currently only EPT is supported */
+    if ( !cpu_has_vmx )
+        return 0;
+
+    return 1;
+}
+
 /* 
  * Internal functions, only called by other p2m code
  */
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 07/15] xen/mem_access: Abstract architecture specific sanity check
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 06/15] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 15:50   ` Jan Beulich
  2014-09-01 14:22 ` [PATCH v3 08/15] xen/arm: p2m type definitions and changes Tamas K Lengyel
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: Move sanity check function into architecture specific p2m.h
---
 xen/common/mem_access.c   | 6 ++++--
 xen/include/asm-x86/p2m.h | 8 ++++++++
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 07161a2..26cf0a8 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -43,9 +43,11 @@ int mem_access_memop(unsigned long cmd,
     if ( rc )
         return rc;
 
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( !p2m_mem_access_sanity_check(d) )
+    {
+        rc = -EINVAL;
         goto out;
+    }
 
     rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
     if ( rc )
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9a1fae3..5152862 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -627,6 +627,14 @@ static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
     return 1;
 }
 
+/* Sanity check for mem_access hardware support */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    if ( !is_hvm_domain(d) )
+        return 0;
+    return 1;
+}
+
 /* 
  * Internal functions, only called by other p2m code
  */
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 08/15] xen/arm: p2m type definitions and changes
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (6 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 07/15] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 14:22 ` [PATCH v3 09/15] xen/arm: Add set access required domctl Tamas K Lengyel
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Define p2m_access_t on ARM and make the necessary changes so the page
table construction routines pass the default access information. Also,
define the radix tree that will hold the access permission settings, as
the PTEs don't have enough software-programmable bits available.
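
Since the tree maps a pfn to a small enum rather than to a pointer, the
settings can be packed with Xen's radix_tree_int_to_ptr() and
radix_tree_ptr_to_int() helpers. A minimal sketch of that use (the
actual insertion path lands in the later data-abort patch):

    /* Sketch: storing and retrieving a p2m_access_t for one pfn. */
    static int store_access(struct p2m_domain *p2m, unsigned long pfn,
                            p2m_access_t a)
    {
        return radix_tree_insert(&p2m->mem_access_settings, pfn,
                                 radix_tree_int_to_ptr(a));
    }

    static p2m_access_t lookup_access(struct p2m_domain *p2m,
                                      unsigned long pfn)
    {
        void *ptr = radix_tree_lookup(&p2m->mem_access_settings, pfn);

        return ptr ? radix_tree_ptr_to_int(ptr) : p2m->default_access;
    }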

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 xen/arch/arm/p2m.c        | 38 ++++++++++++++++-------
 xen/include/asm-arm/p2m.h | 78 +++++++++++++++++++++++++++++++++++------------
 2 files changed, 85 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 143199b..a6dea5b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -10,6 +10,9 @@
 #include <asm/event.h>
 #include <asm/hardirq.h>
 #include <asm/page.h>
+#include <xen/mem_event.h>
+#include <public/mem_event.h>
+#include <xen/mem_access.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -458,7 +461,8 @@ static int apply_one_level(struct domain *d,
                            paddr_t *maddr,
                            bool_t *flush,
                            int mattr,
-                           p2m_type_t t)
+                           p2m_type_t t,
+                           p2m_access_t a)
 {
     /* Helpers to lookup the properties of each level */
     const paddr_t level_sizes[] =
@@ -690,7 +694,8 @@ static int apply_p2m_changes(struct domain *d,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     p2m_type_t t,
+                     p2m_access_t a)
 {
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -755,7 +760,7 @@ static int apply_p2m_changes(struct domain *d,
                               1, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
@@ -776,7 +781,7 @@ static int apply_p2m_changes(struct domain *d,
                               2, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
@@ -795,7 +800,7 @@ static int apply_p2m_changes(struct domain *d,
                               3, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         /* L3 had better have done something! We cannot descend any further */
         BUG_ON(ret == P2M_ONE_DESCEND);
@@ -837,7 +842,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, p2m_ram_rw);
+                             0, MATTR_MEM, p2m_ram_rw,
+                             d->arch.p2m.default_access);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -849,7 +855,8 @@ int map_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr_mfns),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_mmio_direct);
+                             MATTR_DEV, p2m_mmio_direct,
+                             d->arch.p2m.default_access);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -861,7 +868,8 @@ int guest_physmap_add_entry(struct domain *d,
     return apply_p2m_changes(d, INSERT,
                              pfn_to_paddr(gpfn),
                              pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, t);
+                             pfn_to_paddr(mfn), MATTR_MEM, t,
+                             d->arch.p2m.default_access);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -871,7 +879,8 @@ void guest_physmap_remove_page(struct domain *d,
     apply_p2m_changes(d, REMOVE,
                       pfn_to_paddr(gpfn),
                       pfn_to_paddr(gpfn + (1<<page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid,
+                      d->arch.p2m.default_access);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -974,6 +983,8 @@ void p2m_teardown(struct domain *d)
 
     p2m_free_vmid(d);
 
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
     spin_unlock(&p2m->lock);
 }
 
@@ -999,6 +1010,9 @@ int p2m_init(struct domain *d)
     p2m->max_mapped_gfn = 0;
     p2m->lowest_mapped_gfn = ULONG_MAX;
 
+    p2m->default_access = p2m_access_rwx;
+    radix_tree_init(&p2m->mem_access_settings);
+
 err:
     spin_unlock(&p2m->lock);
 
@@ -1013,7 +1027,8 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, p2m_invalid,
+                              d->arch.p2m.default_access);
 }
 
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
@@ -1027,7 +1042,8 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
                              pfn_to_paddr(INVALID_MFN),
-                             MATTR_MEM, p2m_invalid);
+                             MATTR_MEM, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 06c93a0..afdbf84 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,9 +2,54 @@
 #define _XEN_P2M_H
 
 #include <xen/mm.h>
+#include <xen/radix-tree.h>
+#include <public/memory.h>
+#include <public/mem_event.h>
 
 struct domain;
 
+/* List of possible types for each page in the p2m entry.
+ * The number of available bits per page in the pte for this purpose is 4,
+ * so only 16 types are possible. If we run out of values in the
+ * future, it's possible to use higher values for pseudo-types and not
+ * store them in the p2m entry.
+ */
+typedef enum {
+    p2m_invalid = 0,    /* Nothing mapped here */
+    p2m_ram_rw,         /* Normal read/write guest RAM */
+    p2m_ram_ro,         /* Read-only; writes are silently dropped */
+    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
+    p2m_map_foreign,    /* Ram pages from foreign domain */
+    p2m_grant_map_rw,   /* Read/write grant mapping */
+    p2m_grant_map_ro,   /* Read-only grant mapping */
+    /* The types below are only used to decide the page attribute in the P2M */
+    p2m_iommu_map_rw,   /* Read/write iommu mapping */
+    p2m_iommu_map_ro,   /* Read-only iommu mapping */
+    p2m_max_real_type,  /* Types after this won't be stored in the p2m */
+} p2m_type_t;
+
+/*
+ * Additional access types, which are used to further restrict
+ * the permissions given by the p2m_type_t memory type. Violations
+ * caused by p2m_access_t restrictions are sent to the mem_event
+ * interface.
+ *
+ * The access permissions are soft state: when any ambiguous change of page
+ * type or use occurs, or when pages are flushed, swapped, or at any other
+ * convenient time, the access permissions can get reset to the p2m_domain
+ * default.
+ */
+typedef enum {
+    p2m_access_n    = 0, /* No access permissions allowed */
+    p2m_access_r    = 1,
+    p2m_access_w    = 2,
+    p2m_access_rw   = 3,
+    p2m_access_x    = 4,
+    p2m_access_rx   = 5,
+    p2m_access_wx   = 6,
+    p2m_access_rwx  = 7
+} p2m_access_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
     /* Lock that protects updates to the p2m */
@@ -38,27 +83,20 @@ struct p2m_domain {
          * at each p2m tree level. */
         unsigned long shattered[4];
     } stats;
-};
 
-/* List of possible type for each page in the p2m entry.
- * The number of available bit per page in the pte for this purpose is 4 bits.
- * So it's possible to only have 16 fields. If we run out of value in the
- * future, it's possible to use higher value for pseudo-type and don't store
- * them in the p2m entry.
- */
-typedef enum {
-    p2m_invalid = 0,    /* Nothing mapped here */
-    p2m_ram_rw,         /* Normal read/write guest RAM */
-    p2m_ram_ro,         /* Read-only; writes are silently dropped */
-    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
-    p2m_map_foreign,    /* Ram pages from foreign domain */
-    p2m_grant_map_rw,   /* Read/write grant mapping */
-    p2m_grant_map_ro,   /* Read-only grant mapping */
-    /* The types below are only used to decide the page attribute in the P2M */
-    p2m_iommu_map_rw,   /* Read/write iommu mapping */
-    p2m_iommu_map_ro,   /* Read-only iommu mapping */
-    p2m_max_real_type,  /* Types after this won't be store in the p2m */
-} p2m_type_t;
+    /* Default P2M access type for each page in the domain: new pages,
+     * swapped in pages, cleared pages, and pages that are ambiguously
+     * retyped get this access type. See definition of p2m_access_t. */
+    p2m_access_t default_access;
+
+    /* If true, and an access fault comes in and there is no mem_event listener,
+     * pause domain. Otherwise, remove access restrictions. */
+    bool_t access_required;
+
+    /* Radix tree to store the p2m_access_t settings as the pte's don't have
+     * enough available bits to store this information. */
+    struct radix_tree_root mem_access_settings;
+};
 
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 09/15] xen/arm: Add set access required domctl
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (7 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 08/15] xen/arm: p2m type definitions and changes Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 19:10   ` Julien Grall
  2014-09-01 14:22 ` [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 xen/arch/arm/domctl.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 45974e7..7cf719c 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -31,6 +31,19 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         return p2m_cache_flush(d, s, e);
     }
 
+    case XEN_DOMCTL_set_access_required:
+    {
+        struct p2m_domain* p2m;
+
+        if ( current->domain == d )
+            return -EPERM;
+
+        p2m = p2m_get_hostp2m(d);
+        p2m->access_required = domctl->u.access_required.access_required;
+        return 0;
+    }
+    break;
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread
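
For context, this is the knob a toolstack flips so that access faults with no
listener pause the domain instead of having their restrictions lifted. A
minimal sketch of driving it from libxc (error handling elided; assumes the
existing xc_domain_set_access_required() wrapper, which issues
XEN_DOMCTL_set_access_required):

    #include <xenctrl.h>

    /* Ask Xen to pause the domain on unhandled access faults. */
    static int enable_access_required(xc_interface *xch, domid_t domid)
    {
        return xc_domain_set_access_required(xch, domid, 1 /* required */);
    }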

* [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (8 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 09/15] xen/arm: Add set access required domctl Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 21:07   ` Julien Grall
  2014-09-01 14:22 ` [PATCH v3 11/15] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

This patch adds the ability to store, set, check and deliver LPAE R/W mem_events.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v3: - Add new function for updating the PTE entries, p2m_set_entry.
    - Use the new struct npfec to pass violation information.
    - Implement n2rwx, rx2rw and listener required routines.

v2: - Patch has been split to ease the review process.
    - Add definitions of data abort fault status codes (enum dabt_dfsc)
      and only call p2m_mem_access_check for traps caused by permission violations.
    - Only call p2m_write_pte in p2m_lookup if the PTE permission actually changed.
    - Properly save settings in the Radix tree and pause the VCPU with
      mem_event_vcpu_pause.
---
 xen/arch/arm/p2m.c              | 439 ++++++++++++++++++++++++++++++++++++----
 xen/arch/arm/traps.c            |  31 ++-
 xen/include/asm-arm/p2m.h       |  20 ++
 xen/include/asm-arm/processor.h |  30 +++
 4 files changed, 481 insertions(+), 39 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a6dea5b..e8f5671 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -10,6 +10,7 @@
 #include <asm/event.h>
 #include <asm/hardirq.h>
 #include <asm/page.h>
+#include <xen/radix-tree.h>
 #include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <xen/mem_access.h>
@@ -148,6 +149,74 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr)
     return __map_domain_page(page);
 }
 
+static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
+{
+    /* First apply type permissions */
+    switch (t)
+    {
+    case p2m_ram_rw:
+        e->p2m.xn = 0;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_ram_ro:
+        e->p2m.xn = 0;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_iommu_map_rw:
+    case p2m_map_foreign:
+    case p2m_grant_map_rw:
+    case p2m_mmio_direct:
+        e->p2m.xn = 1;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_iommu_map_ro:
+    case p2m_grant_map_ro:
+    case p2m_invalid:
+        e->p2m.xn = 1;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_max_real_type:
+        BUG();
+        break;
+    }
+
+    /* Then restrict with access permissions */
+    switch(a)
+    {
+    case p2m_access_n:
+        e->p2m.read = e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_r:
+        e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_x:
+        e->p2m.write = 0;
+        e->p2m.read = 0;
+        break;
+    case p2m_access_rx:
+        e->p2m.write = 0;
+        break;
+    case p2m_access_w:
+        e->p2m.read = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_rw:
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_wx:
+        e->p2m.read = 0;
+        break;
+    case p2m_access_rwx:
+        break;
+    }
+}
+
 /*
  * Lookup the MFN corresponding to a domain's PFN.
  *
@@ -228,7 +297,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
 }
 
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
-                               p2m_type_t t)
+                               p2m_type_t t, p2m_access_t a)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     /* sh, xn and write bit will be defined in the following switches
@@ -258,37 +327,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
         break;
     }
 
-    switch (t)
-    {
-    case p2m_ram_rw:
-        e.p2m.xn = 0;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_ram_ro:
-        e.p2m.xn = 0;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_iommu_map_rw:
-    case p2m_map_foreign:
-    case p2m_grant_map_rw:
-    case p2m_mmio_direct:
-        e.p2m.xn = 1;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_iommu_map_ro:
-    case p2m_grant_map_ro:
-    case p2m_invalid:
-        e.p2m.xn = 1;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_max_real_type:
-        BUG();
-        break;
-    }
+    p2m_set_permission(&e, t, a);
 
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
@@ -346,7 +385,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
          for ( i=0 ; i < LPAE_ENTRIES; i++ )
          {
              pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
-                                    MATTR_MEM, t);
+                                    MATTR_MEM, t, p2m->default_access);
 
              /*
               * First and second level super pages set p2m.table = 0, but
@@ -366,7 +405,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
 
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
+    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid, p2m->default_access);
 
     p2m_write_pte(entry, pte, flush_cache);
 
@@ -498,7 +537,7 @@ static int apply_one_level(struct domain *d,
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
+                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
                 p2m_write_pte(entry, pte, flush_cache);
@@ -533,7 +572,7 @@ static int apply_one_level(struct domain *d,
              (level == 3 || !p2m_table(orig_pte)) )
         {
             /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
+            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
@@ -640,6 +679,7 @@ static int apply_one_level(struct domain *d,
 
         memset(&pte, 0x00, sizeof(pte));
         p2m_write_pte(entry, pte, flush_cache);
+        radix_tree_delete(&p2m->mem_access_settings, paddr_to_pfn(*addr));
 
         *addr += level_size;
 
@@ -1080,6 +1120,333 @@ err:
     return page;
 }
 
+static bool_t p2m_set_entry(struct domain *d, paddr_t paddr, p2m_access_t p2ma)
+{
+    bool_t rc = 0;
+    struct p2m_domain *p2m = &d->arch.p2m;
+    lpae_t *pte, *first = NULL, *second = NULL, *third = NULL;
+
+    first = p2m_map_first(p2m, paddr);
+    if ( !first )
+        goto err;
+
+    pte = &first[first_table_offset(paddr)];
+    if ( !p2m_table(*pte) )
+        goto done;
+
+    second = map_domain_page(pte->p2m.base);
+    pte = &second[second_table_offset(paddr)];
+    if ( !p2m_table(*pte) )
+        goto done;
+
+    third = map_domain_page(pte->p2m.base);
+    pte = &third[third_table_offset(paddr)];
+
+    BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
+
+    /* This bit must be one in the level 3 entry */
+    if ( !p2m_table(*pte) )
+        pte->bits = 0;
+
+done:
+    if ( p2m_valid(*pte) )
+    {
+        ASSERT(pte->p2m.type != p2m_invalid);
+
+        p2m_set_permission(pte, pte->p2m.type, p2ma);
+        p2m_write_pte(pte, *pte, 1);
+
+        rc = 1;
+    }
+
+    if (third) unmap_domain_page(third);
+    if (second) unmap_domain_page(second);
+    if (first) unmap_domain_page(first);
+
+err:
+    return rc;
+}
+
+int p2m_mem_access_check(paddr_t gpa, vaddr_t gla, struct npfec npfec)
+{
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+    mem_event_request_t *req = NULL;
+    xenmem_access_t xma;
+    bool_t violation;
+    int rc;
+
+    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
+    if ( rc )
+    {
+        /* No setting was found, reinject */
+        return 1;
+    }
+    else
+    {
+        /* First, handle rx2rw and n2rwx conversion automatically. */
+        if ( npfec.write_access && xma == XENMEM_access_rx2rw )
+        {
+            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                    0, ~0, XENMEM_access_rw);
+            ASSERT(rc == 0);
+            return 0;
+        }
+        else if ( xma == XENMEM_access_n2rwx )
+        {
+            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                    0, ~0, XENMEM_access_rwx);
+            ASSERT(rc == 0);
+        }
+    }
+
+    /* Otherwise, check if there is a memory event listener, and send the message along */
+    if ( !mem_event_check_ring( &v->domain->mem_event->access ) )
+    {
+        /* No listener */
+        if ( p2m->access_required )
+        {
+            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
+                                  "no mem_event listener VCPU %d, dom %d\n",
+                                  v->vcpu_id, v->domain->domain_id);
+            domain_crash(v->domain);
+        }
+        else
+        {
+            /* n2rwx was already handled */
+            if ( xma != XENMEM_access_n2rwx )
+            {
+                /* A listener is not required, so clear the access
+                 * restrictions. */
+                rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                        0, ~0, XENMEM_access_rwx);
+                ASSERT(rc == 0);
+            }
+        }
+
+        /* No need to reinject */
+        return 0;
+    }
+
+    switch ( xma )
+    {
+    default:
+    case XENMEM_access_n:
+        violation = npfec.read_access || npfec.write_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_r:
+        violation = npfec.write_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_w:
+        violation = npfec.read_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_x:
+        violation = npfec.read_access || npfec.write_access;
+        break;
+    case XENMEM_access_rx:
+        violation = npfec.write_access;
+        break;
+    case XENMEM_access_wx:
+        violation = npfec.read_access;
+        break;
+    case XENMEM_access_rw:
+        violation = npfec.insn_fetch;
+        break;
+    case XENMEM_access_rwx:
+        violation = 0;
+        break;
+    }
+
+    if ( !violation )
+        return 1;
+
+    req = xzalloc(mem_event_request_t);
+    if ( req )
+    {
+        req->reason = MEM_EVENT_REASON_VIOLATION;
+        if ( xma != XENMEM_access_n2rwx )
+            req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        req->gfn = gpa >> PAGE_SHIFT;
+        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
+        req->gla = gla;
+        req->gla_valid = npfec.gla_valid;
+        req->access_r = npfec.read_access;
+        req->access_w = npfec.write_access;
+        req->access_x = npfec.insn_fetch;
+        if ( npfec_kind_in_gpt == npfec.kind )
+            req->fault_in_gpt = 1;
+        if ( npfec_kind_with_gla == npfec.kind )
+            req->fault_with_gla = 1;
+        req->vcpu_id = v->vcpu_id;
+
+        mem_access_send_req(v->domain, req);
+        xfree(req);
+    }
+
+    /* Pause the current VCPU */
+    if ( xma != XENMEM_access_n2rwx )
+        mem_event_vcpu_pause(v);
+
+    return 0;
+}
+
+void p2m_mem_access_resume(struct domain *d)
+{
+    mem_event_response_t rsp;
+
+    /* Pull all responses off the ring */
+    while( mem_event_get_response(d, &d->mem_event->access, &rsp) )
+    {
+        struct vcpu *v;
+
+        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+            continue;
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
+        /* Unpause domain */
+        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+            mem_event_vcpu_unpause(v);
+    }
+}
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_access_t a;
+    long rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+#undef ACCESS
+    };
+
+    switch ( access )
+    {
+    case 0 ... ARRAY_SIZE(memaccess) - 1:
+        a = memaccess[access];
+        break;
+    case XENMEM_access_default:
+        a = p2m->default_access;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    /* If request to set default access */
+    if ( pfn == ~0ul )
+    {
+        p2m->default_access = a;
+        return 0;
+    }
+
+    spin_lock(&p2m->lock);
+    for ( pfn += start; nr > start; ++pfn )
+    {
+
+        bool_t pte_update = p2m_set_entry(d, pfn_to_paddr(pfn), a);
+
+        if ( !pte_update )
+            break;
+
+        rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
+                                    radix_tree_int_to_ptr(a));
+
+        switch ( rc )
+        {
+        case 0:
+            /* Nothing to do, setting saved successfully */
+            break;
+        case -EEXIST:
+            /* If a setting existed already, change it to the new one */
+            radix_tree_replace_slot(
+                radix_tree_lookup_slot(
+                    &p2m->mem_access_settings, pfn),
+                radix_tree_int_to_ptr(a));
+            rc = 0;
+            break;
+        default:
+            /* If we fail to save the setting in the Radix tree, we
+             * need to reset the PTE permissions to default. */
+            p2m_set_entry(d, pfn_to_paddr(pfn), p2m->default_access);
+            break;
+        }
+
+        /* Check for continuation if it's not the last iteration. */
+        if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
+        {
+            rc = start;
+            break;
+        }
+    }
+
+    /* Flush the TLB of the domain to ensure consistency */
+    flush_tlb_domain(d);
+
+    spin_unlock(&p2m->lock);
+    return rc;
+}
+
+int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
+                       xenmem_access_t *access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    void *i;
+    int index;
+
+    static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = XENMEM_access_##ac
+            ACCESS(n),
+            ACCESS(r),
+            ACCESS(w),
+            ACCESS(rw),
+            ACCESS(x),
+            ACCESS(rx),
+            ACCESS(wx),
+            ACCESS(rwx),
+#undef ACCESS
+    };
+
+    /* If request to get default access */
+    if ( gpfn == ~0ul )
+    {
+        *access = memaccess[p2m->default_access];
+        return 0;
+    }
+
+    spin_lock(&p2m->lock);
+
+    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
+
+    spin_unlock(&p2m->lock);
+
+    if (!i)
+        return -ESRCH;
+
+    index = radix_tree_ptr_to_int(i);
+
+    if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
+        return -ERANGE;
+
+    *access = memaccess[(unsigned)index];
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 019991f..7eb875a 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1852,13 +1852,38 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     info.gva = READ_SYSREG64(FAR_EL2);
 #endif
 
-    if (dabt.s1ptw)
-        goto bad_data_abort;
-
     rc = gva_to_ipa(info.gva, &info.gpa);
     if ( rc == -EFAULT )
         goto bad_data_abort;
 
+    switch ( dabt.dfsc )
+    {
+    case DABT_DFSC_PERMISSION_1:
+    case DABT_DFSC_PERMISSION_2:
+    case DABT_DFSC_PERMISSION_3:
+    {
+        struct npfec npfec = {
+            .read_access = 1,
+            .write_access = dabt.write,
+            .gla_valid = 1,
+            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+
+    default:
+        break;
+    }
+
+    if ( dabt.s1ptw )
+        goto bad_data_abort;
+
     /* XXX: Decode the instruction if ISS is not valid */
     if ( !dabt.valid )
         goto bad_data_abort;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index afdbf84..2a9e0a3 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -233,6 +233,26 @@ static inline int get_page_and_type(struct page_info *page,
     return rc;
 }
 
+/* get host p2m table */
+#define p2m_get_hostp2m(d) (&((d)->arch.p2m))
+
+/* Send mem event based on the access (gla is -1ull if not available). Boolean
+ * return value indicates if trap needs to be injected into guest. */
+int p2m_mem_access_check(paddr_t gpa, vaddr_t gla, struct npfec npfec);
+
+/* Resumes the running of the VCPU, restarting the last instruction */
+void p2m_mem_access_resume(struct domain *d);
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access);
+
+/* Get access type for a pfn
+ * If pfn == -1ul, gets the default access type */
+int p2m_get_mem_access(struct domain *d, unsigned long pfn,
+                       xenmem_access_t *access);
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 0cc5b6d..b844f1d 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -262,6 +262,36 @@ enum dabt_size {
     DABT_DOUBLE_WORD = 3,
 };
 
+/* Data abort fault status codes (DFSC) */
+enum dabt_dfsc {
+    DABT_DFSC_ADDR_SIZE_0       = 0b000000,
+    DABT_DFSC_ADDR_SIZE_1       = 0b000001,
+    DABT_DFSC_ADDR_SIZE_2       = 0b000010,
+    DABT_DFSC_ADDR_SIZE_3       = 0b000011,
+    DABT_DFSC_TRANSLATION_0     = 0b000100,
+    DABT_DFSC_TRANSLATION_1     = 0b000101,
+    DABT_DFSC_TRANSLATION_2     = 0b000110,
+    DABT_DFSC_TRANSLATION_3     = 0b000111,
+    DABT_DFSC_ACCESS_1          = 0b001001,
+    DABT_DFSC_ACCESS_2          = 0b001010,
+    DABT_DFSC_ACCESS_3          = 0b001011,
+    DABT_DFSC_PERMISSION_1      = 0b001101,
+    DABT_DFSC_PERMISSION_2      = 0b001110,
+    DABT_DFSC_PERMISSION_3      = 0b001111,
+    DABT_DFSC_SYNC_EXT          = 0b010000,
+    DABT_DFSC_SYNC_PARITY       = 0b011000,
+    DABT_DFSC_SYNC_EXT_TTW_0    = 0b010100,
+    DABT_DFSC_SYNC_EXT_TTW_1    = 0b010101,
+    DABT_DFSC_SYNC_EXT_TTW_2    = 0b010110,
+    DABT_DFSC_SYNC_EXT_TTW_3    = 0b010111,
+    DABT_DFSC_SYNC_PARITY_TTW_0 = 0b011100,
+    DABT_DFSC_SYNC_PARITY_TTW_1 = 0b011101,
+    DABT_DFSC_SYNC_PARITY_TTW_2 = 0b011110,
+    DABT_DFSC_SYNC_PARITY_TTW_3 = 0b011111,
+    DABT_DFSC_ALIGNMENT         = 0b100001,
+    DABT_DFSC_TLB_CONFLICT      = 0b110000,
+};
+
 union hsr {
     uint32_t bits;
     struct {
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread
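
As a usage illustration of the two automatic conversion types handled above
(a sketch, not normative): rx2rw grants write on the first write fault
without a listener round-trip, while n2rwx lifts all restrictions but still
queues an event without pausing the vCPU. With the xc_set_mem_access()
wrapper already used elsewhere in this series, a listener could arm a page
as rx2rw like so:

    #include <xenctrl.h>

    /* Mark one pfn rx2rw: the first guest write flips it to rw inside
     * p2m_mem_access_check() without waiting for a listener response. */
    static int arm_rx2rw(xc_interface *xch, domid_t domid, uint64_t pfn)
    {
        return xc_set_mem_access(xch, domid, XENMEM_access_rx2rw,
                                 pfn, 1 /* number of pages */);
    }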

* [PATCH v3 11/15] xen/arm: Instruction prefetch abort (X) mem_event handling
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (9 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 14:22 ` [PATCH v3 12/15] xen/arm: Shatter large pages when using mem_access Tamas K Lengyel
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Add the missing structure definition for iabt and update the trap
handling mechanism so that the exception is only injected if the
mem_access checker decides it should be.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: Add definition for instruction abort fault status codes (enum iabt_ifsc)
       and only call p2m_mem_access_check for traps triggered by permission violations.
---
 xen/arch/arm/traps.c            | 43 ++++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/processor.h | 40 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 81 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 7eb875a..985f5b4 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1828,7 +1828,48 @@ done:
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
-    register_t addr = READ_SYSREG(FAR_EL2);
+    struct hsr_iabt iabt = hsr.iabt;
+    int rc;
+    register_t addr;
+    vaddr_t gva;
+    paddr_t gpa;
+
+#ifdef CONFIG_ARM_32
+    gva = READ_CP32(HIFAR);
+#else
+    gva = READ_SYSREG64(FAR_EL2);
+#endif
+
+    rc = gva_to_ipa(gva, &gpa);
+    if ( rc == -EFAULT )
+        return;
+
+    switch ( iabt.ifsc )
+    {
+    case IABT_IFSC_PERMISSION_1:
+    case IABT_IFSC_PERMISSION_2:
+    case IABT_IFSC_PERMISSION_3:
+    {
+        struct npfec npfec = {
+            .read_access = 1,
+            .insn_fetch = 1,
+            .gla_valid = 1,
+            .kind = iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(gpa, gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+
+    default:
+        break;
+    }
+
+    addr = READ_SYSREG(FAR_EL2);
     inject_iabt_exception(regs, addr, hsr.len);
 }
 
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index b844f1d..044de12 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -292,6 +292,36 @@ enum dabt_dfsc {
     DABT_DFSC_TLB_CONFLICT      = 0b110000,
 };
 
+/* Instruction abort fault status codes (IFSC) */
+enum iabt_ifsc {
+    IABT_IFSC_ADDR_SIZE_0       = 0b000000,
+    IABT_IFSC_ADDR_SIZE_1       = 0b000001,
+    IABT_IFSC_ADDR_SIZE_2       = 0b000010,
+    IABT_IFSC_ADDR_SIZE_3       = 0b000011,
+    IABT_IFSC_TRANSLATION_0     = 0b000100,
+    IABT_IFSC_TRANSLATION_1     = 0b000101,
+    IABT_IFSC_TRANSLATION_2     = 0b000110,
+    IABT_IFSC_TRANSLATION_3     = 0b000111,
+    IABT_IFSC_ACCESS_1          = 0b001001,
+    IABT_IFSC_ACCESS_2          = 0b001010,
+    IABT_IFSC_ACCESS_3          = 0b001011,
+    IABT_IFSC_PERMISSION_1      = 0b001101,
+    IABT_IFSC_PERMISSION_2      = 0b001110,
+    IABT_IFSC_PERMISSION_3      = 0b001111,
+    IABT_IFSC_SYNC_EXT          = 0b010000,
+    IABT_IFSC_SYNC_PARITY       = 0b011000,
+    IABT_IFSC_SYNC_EXT_TTW_0    = 0b010100,
+    IABT_IFSC_SYNC_EXT_TTW_1    = 0b010101,
+    IABT_IFSC_SYNC_EXT_TTW_2    = 0b010110,
+    IABT_IFSC_SYNC_EXT_TTW_3    = 0b010111,
+    IABT_IFSC_SYNC_PARITY_TTW_0 = 0b011100,
+    IABT_IFSC_SYNC_PARITY_TTW_1 = 0b011101,
+    IABT_IFSC_SYNC_PARITY_TTW_2 = 0b011110,
+    IABT_IFSC_SYNC_PARITY_TTW_3 = 0b011111,
+    IABT_IFSC_ALIGNMENT         = 0b100001,
+    IABT_IFSC_TLB_CONFLICT      = 0b110000,
+};
+
 union hsr {
     uint32_t bits;
     struct {
@@ -371,10 +401,18 @@ union hsr {
     } sysreg; /* HSR_EC_SYSREG */
 #endif
 
+    struct hsr_iabt {
+        unsigned long ifsc:6;   /* Instruction fault status code */
+        unsigned long res0:1;
+        unsigned long s1ptw:1;  /* Fault during a stage 1 translation table walk */
+        unsigned long res1:1;
+        unsigned long ea:1;     /* External abort type */
+    } iabt; /* HSR_EC_INSTR_ABORT_* */
+
     struct hsr_dabt {
         unsigned long dfsc:6;  /* Data Fault Status Code */
         unsigned long write:1; /* Write / not Read */
-        unsigned long s1ptw:1; /* */
+        unsigned long s1ptw:1; /* Fault during a stage 1 translation table walk */
         unsigned long cache:1; /* Cache Maintenance */
         unsigned long eat:1;   /* External Abort Type */
 #ifdef CONFIG_ARM_32
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread
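
Both the DFSC and IFSC enums encode permission faults at translation levels
1-3 as 0b001101-0b001111, which is why the switch statements in the trap
handlers list exactly three cases each. The same test can be written as a
range check (illustrative only; the helper name is hypothetical):

    /* True for DABT_DFSC_PERMISSION_1..3 / IABT_IFSC_PERMISSION_1..3. */
    static inline int fsc_is_permission_fault(unsigned int fsc)
    {
        return fsc >= 0x0d && fsc <= 0x0f;  /* 0b001101 .. 0b001111 */
    }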

* [PATCH v3 12/15] xen/arm: Shatter large pages when using mem_access
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (10 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 11/15] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 14:22 ` [PATCH v3 13/15] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

When using mem_access+mem_event, the finest granularity (4KB pages) is
required in the pagetables, therefore we shatter each large page (1GB/2MB)
as we apply the permission changes.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 xen/arch/arm/p2m.c | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e8f5671..0d447e2 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1132,12 +1132,30 @@ static bool_t p2m_set_entry(struct domain *d, paddr_t paddr, p2m_access_t p2ma)
 
     pte = &first[first_table_offset(paddr)];
     if ( !p2m_table(*pte) )
-        goto done;
+    {
+        /* This is a mapping of a 1GB page, shatter it */
+        rc = p2m_create_table(d, pte, FIRST_SHIFT - PAGE_SHIFT, 1);
+        if ( rc < 0 )
+            goto err;
+
+        p2m->stats.shattered[1]++;
+        p2m->stats.mappings[1]--;
+        p2m->stats.mappings[2] += LPAE_ENTRIES;
+    }
 
     second = map_domain_page(pte->p2m.base);
     pte = &second[second_table_offset(paddr)];
     if ( !p2m_table(*pte) )
-        goto done;
+    {
+        /* This is a mapping of 2MB page, shatter it */
+        rc = p2m_create_table(d, pte, SECOND_SHIFT - PAGE_SHIFT, 1);
+        if ( rc < 0 )
+            goto err;
+
+        p2m->stats.shattered[2]++;
+        p2m->stats.mappings[2]--;
+        p2m->stats.mappings[3] += LPAE_ENTRIES;
+    }
 
     third = map_domain_page(pte->p2m.base);
     pte = &third[third_table_offset(paddr)];
@@ -1148,7 +1166,6 @@ static bool_t p2m_set_entry(struct domain *d, paddr_t paddr, p2m_access_t p2ma)
     if ( !p2m_table(*pte) )
         pte->bits = 0;
 
-done:
     if ( p2m_valid(*pte) )
     {
         ASSERT(pte->p2m.type != p2m_invalid);
@@ -1159,11 +1176,11 @@ done:
         rc = 1;
     }
 
+err:
     if (third) unmap_domain_page(third);
     if (second) unmap_domain_page(second);
     if (first) unmap_domain_page(first);
 
-err:
     return rc;
 }
 
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread
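
The stats bookkeeping in this patch follows directly from the LPAE geometry:
every table level holds LPAE_ENTRIES (512) entries, so shattering one mapping
at level N replaces it with 512 mappings at level N+1. A worked example as a
comment sketch:

    /* Shatter one 1GB (level 1) block:
     *   mappings[1] -= 1;  mappings[2] += 512;  shattered[1] += 1;
     * Shatter one resulting 2MB (level 2) block:
     *   mappings[2] -= 1;  mappings[3] += 512;  shattered[2] += 1;
     * Fully shattering a single 1GB block down to 4KB pages therefore
     * produces 512 * 512 = 262144 level-3 entries. */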

* [PATCH v3 13/15] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (11 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 12/15] xen/arm: Shatter large pages when using mem_access Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-03 14:38   ` Daniel De Graaf
  2014-09-01 14:22 ` [PATCH v3 14/15] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

This patch sets up the infrastructure to support mem_access and mem_event
on ARM and turns on compilation. We define the required XSM functions.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v3: Wrap mem_event-related functions in XSM in #ifdef HAS_MEM_ACCESS
       blocks.
    Update the XSM hooks in FLASK to wire them up properly on ARM.

v2: Add CONFIG_MEM_PAGING and CONFIG_MEM_SHARING definitions and
       use them instead of CONFIG_X86.
    Split domctl copy-back and p2m type definitions into separate
       patches and move this patch to the end of the series.
---
 xen/arch/arm/Rules.mk        |  1 +
 xen/common/mem_event.c       | 19 +++++++++++++++++++
 xen/include/asm-arm/p2m.h    | 11 +++++++++++
 xen/include/asm-x86/config.h |  3 +++
 xen/include/xsm/dummy.h      | 26 ++++++++++++++------------
 xen/include/xsm/xsm.h        | 29 +++++++++++++++++------------
 xen/xsm/dummy.c              |  7 +++++--
 xen/xsm/flask/hooks.c        | 33 ++++++++++++++++++++-------------
 8 files changed, 90 insertions(+), 39 deletions(-)

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 8658176..f6781b5 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -10,6 +10,7 @@ HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
 HAS_PASSTHROUGH := y
+HAS_MEM_ACCESS := y
 
 CFLAGS += -I$(BASEDIR)/include
 
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 584de16..d9af98c 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -27,8 +27,15 @@
 #include <asm/p2m.h>
 #include <xen/mem_event.h>
 #include <xen/mem_access.h>
+
+#ifdef CONFIG_MEM_PAGING
 #include <asm/mem_paging.h>
+#endif
+
+#ifdef CONFIG_MEM_SHARING
 #include <asm/mem_sharing.h>
+#endif
+
 #include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
@@ -423,12 +430,14 @@ int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
         return mem_event_grab_slot(med, (current->domain != d));
 }
 
+#ifdef CONFIG_MEM_PAGING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
         p2m_mem_paging_resume(v->domain);
 }
+#endif
 
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_access_notification(struct vcpu *v, unsigned int port)
@@ -437,15 +446,20 @@ static void mem_access_notification(struct vcpu *v, unsigned int port)
         p2m_mem_access_resume(v->domain);
 }
 
+#ifdef CONFIG_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->mem_event->share.ring_page != NULL) )
         mem_sharing_sharing_resume(v->domain);
 }
+#endif
 
 int do_mem_event_op(int op, uint32_t domain, void *arg)
 {
+#if !defined(CONFIG_MEM_PAGING) && !defined(CONFIG_MEM_SHARING)
+    return -ENOSYS;
+#else
     int ret;
     struct domain *d;
 
@@ -472,6 +486,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
  out:
     rcu_unlock_domain(d);
     return ret;
+#endif
 }
 
 /* Clean up on domain destruction */
@@ -532,6 +547,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
     switch ( mec->mode )
     {
+#ifdef CONFIG_MEM_PAGING
     case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
     {
         struct mem_event_domain *med = &d->mem_event->paging;
@@ -582,6 +598,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
     }
     break;
+#endif
 
     case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
     {
@@ -618,6 +635,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
     {
         struct mem_event_domain *med = &d->mem_event->share;
@@ -656,6 +674,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
     }
     break;
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 2a9e0a3..e3056ea 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -236,6 +236,17 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&((d)->arch.p2m))
 
+/* mem_event and mem_access are supported on all ARM */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
+static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
 /* Send mem event based on the access (gla is -1ull if not available). Boolean
  * return value indicates if trap needs to be injected into guest. */
 int p2m_mem_access_check(paddr_t gpa, vaddr_t gla, struct npfec npfec);
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 210ff57..525ac44 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -57,6 +57,9 @@
 #define CONFIG_LATE_HWDOM 1
 #endif
 
+#define CONFIG_MEM_SHARING 1
+#define CONFIG_MEM_PAGING 1
+
 #define HZ 100
 
 #define OPT_CONSOLE_STR "vga"
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index c5aa316..cea2e63 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -507,6 +507,20 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
+static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
+
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
 {
@@ -550,18 +564,6 @@ static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
-{
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index a85045d..6c3032c 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -140,6 +140,11 @@ struct xsm_operations {
     int (*hvm_control) (struct domain *d, unsigned long op);
     int (*hvm_param_nested) (struct domain *d);
 
+#ifdef HAS_MEM_ACCESS
+    int (*mem_event_control) (struct domain *d, int mode, int op);
+    int (*mem_event_op) (struct domain *d, int op);
+#endif
+
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -148,8 +153,6 @@ struct xsm_operations {
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*hvm_ioreq_server) (struct domain *d, int op);
-    int (*mem_event_control) (struct domain *d, int mode, int op);
-    int (*mem_event_op) (struct domain *d, int op);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -534,6 +537,18 @@ static inline int xsm_hvm_param_nested (xsm_default_t def, struct domain *d)
     return xsm_ops->hvm_param_nested(d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+{
+    return xsm_ops->mem_event_control(d, mode, op);
+}
+
+static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+{
+    return xsm_ops->mem_event_op(d, op);
+}
+#endif
+
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
 {
@@ -570,16 +585,6 @@ static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int
     return xsm_ops->hvm_ioreq_server(d, op);
 }
 
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
-{
-    return xsm_ops->mem_event_control(d, mode, op);
-}
-
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
-{
-    return xsm_ops->mem_event_op(d, op);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index c95c803..e9cdc01 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -117,6 +117,11 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
+#ifdef HAS_MEM_ACCESS
+    set_to_dummy_if_null(ops, mem_event_control);
+    set_to_dummy_if_null(ops, mem_event_op);
+#endif
+
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
@@ -125,8 +130,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, hvm_ioreq_server);
-    set_to_dummy_if_null(ops, mem_event_control);
-    set_to_dummy_if_null(ops, mem_event_op);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, platform_op);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f2f59ea..96efd0b 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -571,6 +571,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_irq_permission:
     case XEN_DOMCTL_iomem_permission:
     case XEN_DOMCTL_set_target:
+#ifdef HAS_MEM_ACCESS
+    case XEN_DOMCTL_mem_event_op:
+#endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
     case XEN_DOMCTL_shadow_op:
@@ -579,7 +582,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_unbind_pt_irq:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_ioport_mapping:
-    case XEN_DOMCTL_mem_event_op:
     /* These have individual XSM hooks (drivers/passthrough/iommu.c) */
     case XEN_DOMCTL_get_device_group:
     case XEN_DOMCTL_test_assign_device:
@@ -1181,6 +1183,18 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
+#ifdef HAS_MEM_ACCESS
+static int flask_mem_event_control(struct domain *d, int mode, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_op(struct domain *d, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+#endif /* HAS_MEM_ACCESS */
+
 #ifdef CONFIG_X86
 static int flask_do_mca(void)
 {
@@ -1291,16 +1305,6 @@ static int flask_hvm_ioreq_server(struct domain *d, int op)
     return current_has_perm(d, SECCLASS_HVM, HVM__HVMCTL);
 }
 
-static int flask_mem_event_control(struct domain *d, int mode, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
-static int flask_mem_event_op(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1567,6 +1571,11 @@ static struct xsm_operations flask_ops = {
     .deassign_device = flask_deassign_device,
 #endif
 
+#ifdef HAS_MEM_ACCESS
+    .mem_event_control = flask_mem_event_control,
+    .mem_event_op = flask_mem_event_op,
+#endif
+
 #ifdef CONFIG_X86
     .do_mca = flask_do_mca,
     .shadow_control = flask_shadow_control,
@@ -1575,8 +1584,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
     .hvm_inject_msi = flask_hvm_inject_msi,
     .hvm_ioreq_server = flask_hvm_ioreq_server,
-    .mem_event_control = flask_mem_event_control,
-    .mem_event_op = flask_mem_event_op,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .platform_op = flask_platform_op,
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v3 14/15] tools/libxc: Allocate magic page for mem access on ARM
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (12 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 13/15] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 14:22 ` [PATCH v3 15/15] tools/tests: Enable xen-access " Tamas K Lengyel
  2014-09-01 19:56 ` [PATCH v3 00/15] Mem_event and mem_access for ARM Julien Grall
  15 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
---
 tools/libxc/xc_dom_arm.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index 9b31b1f..13e881e 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -26,9 +26,10 @@
 #include "xg_private.h"
 #include "xc_dom.h"
 
-#define NR_MAGIC_PAGES 2
+#define NR_MAGIC_PAGES 3
 #define CONSOLE_PFN_OFFSET 0
 #define XENSTORE_PFN_OFFSET 1
+#define MEMACCESS_PFN_OFFSET 2
 
 #define LPAE_SHIFT 9
 
@@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
 
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
+    xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
             dom->console_pfn);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
             dom->xenstore_pfn);
+    xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
+            base + MEMACCESS_PFN_OFFSET);
     /* allocated by toolstack */
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
             dom->console_evtchn);
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread
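
A privileged listener locates this ring via the HVM parameter published
above. A minimal sketch of mapping it by hand (error handling elided; in
practice xc_mem_access_enable() takes care of this, so treat the following
as illustrative only):

    #include <sys/mman.h>
    #include <xenctrl.h>

    static void *map_access_ring(xc_interface *xch, domid_t domid)
    {
        uint64_t pfn = 0;

        /* Read back the pfn that alloc_magic_pages() published. */
        if ( xc_hvm_param_get(xch, domid, HVM_PARAM_ACCESS_RING_PFN, &pfn) )
            return NULL;

        return xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                    PROT_READ | PROT_WRITE, pfn);
    }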

* [PATCH v3 15/15] tools/tests: Enable xen-access on ARM
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (13 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 14/15] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
@ 2014-09-01 14:22 ` Tamas K Lengyel
  2014-09-01 21:26   ` Julien Grall
  2014-09-01 19:56 ` [PATCH v3 00/15] Mem_event and mem_access for ARM Julien Grall
  15 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 14:22 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

On ARM the guest memory doesn't start from 0, thus we include the
required headers and define GUEST_RAM_BASE_PFN in both architectures
to be passed to mem_access as the starting pfn.
We also define the ARM-specific test_and_set_bit function.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 tools/tests/xen-access/Makefile     |  4 +--
 tools/tests/xen-access/xen-access.c | 53 +++++++++++++++++++++++++++++--------
 2 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
index 65eef99..698355c 100644
--- a/tools/tests/xen-access/Makefile
+++ b/tools/tests/xen-access/Makefile
@@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_xeninclude)
 
-TARGETS-y := 
-TARGETS-$(CONFIG_X86) += xen-access
-TARGETS := $(TARGETS-y)
+TARGETS := xen-access
 
 .PHONY: all
 all: build
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 090df5f..187c72f 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -41,22 +41,16 @@
 #include <xenctrl.h>
 #include <xen/mem_event.h>
 
-#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
-#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
-#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
-
-/* Spinlock and mem event definitions */
-
-#define SPIN_LOCK_UNLOCKED 0
+#ifdef CONFIG_X86
 
+#define GUEST_RAM_BASE_PFN 0ULL
 #define ADDR (*(volatile long *) addr)
+
 /**
  * test_and_set_bit - Set a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
  */
 static inline int test_and_set_bit(int nr, volatile void *addr)
 {
@@ -69,6 +63,43 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
     return oldbit;
 }
 
+#else /* CONFIG_X86 */
+
+#include <xen/arch-arm.h>
+
+#define PAGE_SHIFT              12
+#define GUEST_RAM_BASE_PFN      (GUEST_RAM_BASE >> PAGE_SHIFT)
+#define BITS_PER_WORD           32
+#define BIT_MASK(nr)            (1UL << ((nr) % BITS_PER_WORD))
+#define BIT_WORD(nr)            ((nr) / BITS_PER_WORD)
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ */
+static inline int test_and_set_bit(int nr, volatile void *addr)
+{
+        unsigned int mask = BIT_MASK(nr);
+        volatile unsigned int *p =
+                ((volatile unsigned int *)addr) + BIT_WORD(nr);
+        unsigned int old = *p;
+
+        *p = old | mask;
+        return (old & mask) != 0;
+}
+
+#endif
+
+#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
+#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
+#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
+
+/* Spinlock and mem event definitions */
+
+#define SPIN_LOCK_UNLOCKED 0
+
 typedef int spinlock_t;
 
 static inline void spin_lock(spinlock_t *lock)
@@ -492,7 +523,7 @@ int main(int argc, char *argv[])
         goto exit;
     }
 
-    rc = xc_set_mem_access(xch, domain_id, default_access, 0,
+    rc = xc_set_mem_access(xch, domain_id, default_access, GUEST_RAM_BASE_PFN,
                            xenaccess->domain_info->max_pages);
     if ( rc < 0 )
     {
@@ -520,7 +551,7 @@ int main(int argc, char *argv[])
 
             /* Unregister for every event */
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
-            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
+            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, GUEST_RAM_BASE_PFN,
                                    xenaccess->domain_info->max_pages);
             rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
 
-- 
2.1.0.rc1

^ permalink raw reply related	[flat|nested] 48+ messages in thread
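
For reference, the arithmetic behind GUEST_RAM_BASE_PFN (the concrete base
value below is an assumption of this sketch; the real one comes from
xen/arch-arm.h):

    #define PAGE_SHIFT          12
    #define GUEST_RAM_BASE      0x80000000UL  /* assumed ARM guest RAM base */
    #define GUEST_RAM_BASE_PFN  (GUEST_RAM_BASE >> PAGE_SHIFT)  /* 0x80000 */

so on ARM the access listener starts watching at pfn 0x80000 rather than
pfn 0 as on x86.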

* Re: [PATCH v3 01/15] xen: Relocate mem_access and mem_event into common.
  2014-09-01 14:21 ` [PATCH v3 01/15] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
@ 2014-09-01 15:06   ` Jan Beulich
  2014-09-01 15:15     ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-09-01 15:06 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
> --- /dev/null
> +++ b/xen/common/mem_access.c
> @@ -0,0 +1,133 @@
> +/******************************************************************************
> + * mem_access.c
> + *
> + * Memory access support.
> + *
> + * Copyright (c) 2011 Virtuata, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + */
> +
> +
> +#include <xen/sched.h>
> +#include <xen/guest_access.h>
> +#include <xen/hypercall.h>
> +#include <asm/p2m.h>
> +#include <public/memory.h>
> +#include <xen/mem_event.h>
> +#include <xsm/xsm.h>

I asked you on the previous round already to get this mixture of
headers from various subdirectories cleaned up: unless anything
really can't be built with properly grouped headers, please
have all xen/, asm/, public/, xsm/, etc. headers each grouped
together.

> +int prepare_ring_for_helper(
> +    struct domain *d, unsigned long gmfn, struct page_info **_page,
> +    void **_va)
> +{
> +    struct page_info *page;
> +    p2m_type_t p2mt;
> +    void *va;
> +
> +    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
> +
> +#ifdef CONFIG_MEM_PAGING

I can't spot where this and CONFIG_MEM_SHARING get defined in
this patch.

Jan

^ permalink raw reply	[flat|nested] 48+ messages in thread
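
The grouping being asked for would look something like this (illustrative
ordering only):

    #include <xen/sched.h>
    #include <xen/guest_access.h>
    #include <xen/hypercall.h>
    #include <xen/mem_event.h>

    #include <asm/p2m.h>

    #include <public/memory.h>

    #include <xsm/xsm.h>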

* Re: [PATCH v3 01/15] xen: Relocate mem_access and mem_event into common.
  2014-09-01 15:06   ` Jan Beulich
@ 2014-09-01 15:15     ` Tamas K Lengyel
  0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 15:15 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel


On Mon, Sep 1, 2014 at 5:06 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
> > --- /dev/null
> > +++ b/xen/common/mem_access.c
> > @@ -0,0 +1,133 @@
> >
> +/******************************************************************************
> > + * mem_access.c
> > + *
> > + * Memory access support.
> > + *
> > + * Copyright (c) 2011 Virtuata, Inc.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program; if not, write to the Free Software
> > + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
> 02111-1307  USA
> > + */
> > +
> > +
> > +#include <xen/sched.h>
> > +#include <xen/guest_access.h>
> > +#include <xen/hypercall.h>
> > +#include <asm/p2m.h>
> > +#include <public/memory.h>
> > +#include <xen/mem_event.h>
> > +#include <xsm/xsm.h>
>
> I asked you on the previous round already to get this mixture of
> headers from various subdirectories cleaned up: Unless anything
> really can't be made built with properly grouped headers, please
> have all xen/, asm/, public/, xsm/, etc headers each grouped
> together.
>
> > +int prepare_ring_for_helper(
> > +    struct domain *d, unsigned long gmfn, struct page_info **_page,
> > +    void **_va)
> > +{
> > +    struct page_info *page;
> > +    p2m_type_t p2mt;
> > +    void *va;
> > +
> > +    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
> > +
> > +#ifdef CONFIG_MEM_PAGING
>
> I can't spot where this and CONFIG_MEM_SHARING get defined in
> this patch.
>
> Jan


Right, they only get defined later in patch [13/15]; I forgot to move them
up front.

Tamas

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 02/15] xen: Relocate struct npfec definition into common
  2014-09-01 14:21 ` [PATCH v3 02/15] xen: Relocate struct npfec definition " Tamas K Lengyel
@ 2014-09-01 15:44   ` Jan Beulich
  0 siblings, 0 replies; 48+ messages in thread
From: Jan Beulich @ 2014-09-01 15:44 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
> Nested page fault exception code definitions can be reused on ARM as well.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>

Acked-by: Jan Beulich <jbeulich@suse.com>

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-01 14:21 ` [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop " Tamas K Lengyel
@ 2014-09-01 15:46   ` Jan Beulich
  2014-09-01 16:25     ` Tamas K Lengyel
  2014-09-01 18:11   ` Julien Grall
  1 sibling, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-09-01 15:46 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
> --- a/xen/arch/x86/x86_64/compat/mm.c
> +++ b/xen/arch/x86/x86_64/compat/mm.c
> @@ -198,10 +198,6 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>      }
>  
> -    case XENMEM_access_op:
> -        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
> -        break;
> -

I don't think you can simply drop this.

> @@ -967,6 +968,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>      }
>      break;
>  
> +    case XEN_DOMCTL_mem_event_op:
> +    {
> +        ret = mem_event_domctl(d, &op->u.mem_event_op,
> +                              guest_handle_cast(u_domctl, void));
> +        copyback = 1;
> +    }
> +    break;

Please drop the unnecessary braces.

Jan

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 05/15] xen/mem_event: Relax error condition on debug builds
  2014-09-01 14:21 ` [PATCH v3 05/15] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
@ 2014-09-01 15:47   ` Jan Beulich
  0 siblings, 0 replies; 48+ messages in thread
From: Jan Beulich @ 2014-09-01 15:47 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -278,7 +278,11 @@ void mem_event_put_request(struct domain *d,
>      if ( current->domain != d )
>      {
>          req->flags |= MEM_EVENT_FLAG_FOREIGN;
> -        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
> +#ifndef NDEBUG
> +        if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
> +            gdprintk(XENLOG_G_WARNING, "VCPU %"PRIu32" was not paused.\n",
> +                     req->vcpu_id);
> +#endif

This still doesn't tell us which domain the problem was with.

Jan
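
A sketch of a warning that also names the domain, keeping the
surrounding #ifndef NDEBUG as-is (the exact wording is illustrative; d
is the target-domain argument of mem_event_put_request):

    if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
        gdprintk(XENLOG_G_WARNING,
                 "Dom %d VCPU %"PRIu32" was not paused.\n",
                 d->domain_id, req->vcpu_id);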


* Re: [PATCH v3 07/15] xen/mem_access: Abstract architecture specific sanity check
  2014-09-01 14:22 ` [PATCH v3 07/15] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
@ 2014-09-01 15:50   ` Jan Beulich
  0 siblings, 0 replies; 48+ messages in thread
From: Jan Beulich @ 2014-09-01 15:50 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, xen-devel,
	stefano.stabellini, andres, dgdegra

>>> On 01.09.14 at 16:22, <tklengyel@sec.in.tum.de> wrote:
> --- a/xen/common/mem_access.c
> +++ b/xen/common/mem_access.c
> @@ -43,9 +43,11 @@ int mem_access_memop(unsigned long cmd,
>      if ( rc )
>          return rc;
>  
> -    rc = -EINVAL;
> -    if ( !is_hvm_domain(d) )
> +    if ( !p2m_mem_access_sanity_check(d) )
> +    {
> +        rc = -EINVAL;
>          goto out;
> +    }

Please keep the original code structure and only replace the function
name.

Jan
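
That is, keeping the structure means only the condition changes
(sketch):

    rc = -EINVAL;
    if ( !p2m_mem_access_sanity_check(d) )
        goto out;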


* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-01 15:46   ` Jan Beulich
@ 2014-09-01 16:25     ` Tamas K Lengyel
  2014-09-02  6:30       ` Jan Beulich
  0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 16:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel


On Mon, Sep 1, 2014 at 5:46 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
> > --- a/xen/arch/x86/x86_64/compat/mm.c
> > +++ b/xen/arch/x86/x86_64/compat/mm.c
> > @@ -198,10 +198,6 @@ int compat_arch_memory_op(unsigned long cmd,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >          break;
> >      }
> >
> > -    case XENMEM_access_op:
> > -        rc = mem_access_memop(cmd, guest_handle_cast(arg,
> xen_mem_access_op_t));
> > -        break;
> > -
>
> I don't think you can simply drop this.
>

OK.


>
> > @@ -967,6 +968,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t)
> u_domctl)
> >      }
> >      break;
> >
> > +    case XEN_DOMCTL_mem_event_op:
> > +    {
> > +        ret = mem_event_domctl(d, &op->u.mem_event_op,
> > +                              guest_handle_cast(u_domctl, void));
> > +        copyback = 1;
> > +    }
> > +    break;
>
> Please drop the unnecessary braces.
>
> Jan


All other cases have braces around them here, even when not required, so
this just follows the established style.

Tamas


* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-01 14:21 ` [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop " Tamas K Lengyel
  2014-09-01 15:46   ` Jan Beulich
@ 2014-09-01 18:11   ` Julien Grall
  2014-09-01 20:51     ` Tamas K Lengyel
  1 sibling, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-09-01 18:11 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, dgdegra

Hello Tamas,

On 01/09/14 10:21, Tamas K Lengyel wrote:
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index cc8a3d0..4e530bf 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -25,6 +25,7 @@
>   #include <asm/hardirq.h>
>   #include <asm/p2m.h>
>   #include <xen/numa.h>
> +#include <xen/mem_access.h>
>   #include <public/memory.h>
>   #include <xsm/xsm.h>
>   #include <xen/trace.h>
> @@ -969,6 +970,10 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>   
>           break;
>   
> +    case XENMEM_access_op:
> +        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
> +        break;
> +

Every patch should be able to compile without requiring a future patch.
This is very useful when we need to bisect the tree.

Actually I got the following error:

In file included from /home/julieng/works/xen/xen/include/asm/system.h:6:0,
                 from /home/julieng/works/xen/xen/include/xen/list.h:11,
                 from /home/julieng/works/xen/xen/include/xen/mm.h:32,
                 from memory.c:13:
/home/julieng/works/xen/xen/include/public/arch-arm.h:196:41: error: unknown type name ‘__guest_handle_xen_mem_access_op_t’
 #define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
                                         ^
/home/julieng/works/xen/xen/include/xen/mem_access.h:36:22: note: in expansion of macro ‘XEN_GUEST_HANDLE_PARAM’
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
                      ^
memory.c: In function ‘do_memory_op’:
memory.c:974:9: error: implicit declaration of function ‘mem_access_memop’ [-Werror=implicit-function-declaration]
         rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
         ^
memory.c:974:9: error: nested extern declaration of ‘mem_access_memop’ [-Werror=nested-externs]

Regards,

-- 
Julien Grall


* Re: [PATCH v3 09/15] xen/arm: Add set access required domctl
  2014-09-01 14:22 ` [PATCH v3 09/15] xen/arm: Add set access required domctl Tamas K Lengyel
@ 2014-09-01 19:10   ` Julien Grall
  2014-09-02  7:48     ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-09-01 19:10 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, dgdegra

Hello Tamas,

On 01/09/14 10:22, Tamas K Lengyel wrote:
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> ---
>   xen/arch/arm/domctl.c | 13 +++++++++++++
>   1 file changed, 13 insertions(+)
>
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 45974e7..7cf719c 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -31,6 +31,19 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>           return p2m_cache_flush(d, s, e);
>       }
>
> +    case XEN_DOMCTL_set_access_required:
> +    {
> +        struct p2m_domain* p2m;
> +
> +        if ( current->domain == d )
> +            return -EPERM;
> +
> +        p2m = p2m_get_hostp2m(d);

p2m_get_hostp2m is only defined in the next patch (#9), therefore this 
patch won't compile.

Regards,

-- 
Julien Grall


* Re: [PATCH v3 00/15] Mem_event and mem_access for ARM
  2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (14 preceding siblings ...)
  2014-09-01 14:22 ` [PATCH v3 15/15] tools/tests: Enable xen-access " Tamas K Lengyel
@ 2014-09-01 19:56 ` Julien Grall
  2014-09-02  9:47   ` Tamas K Lengyel
  15 siblings, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-09-01 19:56 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, dgdegra

Hello Tamas

On 01/09/14 10:21, Tamas K Lengyel wrote:
> The ARM virtualization extension provides 2-stage paging, a similar mechanisms
> to Intel's EPT, which can be used to trace the memory accesses performed by
> the guest systems. This series moves the mem_access and mem_event codebase
> into Xen common, performs some code cleanup and architecture specific division
> of components, then sets up the necessary infrastructure in the ARM code
> to deliver the event on R/W/X traps. Finally, we turn on the compilation of
> mem_access and mem_event on ARM and perform the necessary changes to the
> tools side.
> 
> This version of the series has been fully tested and is functional on an
> Arndale board.

domain_get_maximum_gpfn, which is used in common code, is defined to just return -ENOSYS on ARM.

I've sent a patch a month ago on the mailing list about it (see patch below).

Ian: can you reconsider applying the patch? (I will also reply to the thread.) FYI,
I have a patch to fix xc_dom_gnttab_hvm_seed in libxc. I will try to send it next week.

Regards,

====================================================================

commit 7aa592b7a6f357b0003cd523e446d9d91dc96730
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Jun 30 17:21:13 2014 +0100

    xen/arm: Implement domain_get_maximum_gpfn
    
    The function domain_get_maximum_gpfn is returning the maximum gpfn ever
    mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..e4a1e5e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -954,7 +954,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    return d->arch.p2m.max_mapped_gfn;
 }
 
 void share_xen_page_with_guest(struct page_info *page,



-- 
Julien Grall


* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-01 18:11   ` Julien Grall
@ 2014-09-01 20:51     ` Tamas K Lengyel
  2014-09-02  6:53       ` Jan Beulich
  0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-01 20:51 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 1, 2014 at 8:11 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
> On 01/09/14 10:21, Tamas K Lengyel wrote:
> > diff --git a/xen/common/memory.c b/xen/common/memory.c
> > index cc8a3d0..4e530bf 100644
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -25,6 +25,7 @@
> >   #include <asm/hardirq.h>
> >   #include <asm/p2m.h>
> >   #include <xen/numa.h>
> > +#include <xen/mem_access.h>
> >   #include <public/memory.h>
>

@Julien: the problem causing your compile issue is the inclusion order of
the headers: public/memory.h needs to be included before
xen/mem_access.h. I'll fix it in the next iteration.


> >   #include <xsm/xsm.h>
> >   #include <xen/trace.h>
> > @@ -969,6 +970,10 @@ long do_memory_op(unsigned long cmd,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >
> >           break;
> >
> > +    case XENMEM_access_op:
> > +        rc = mem_access_memop(cmd, guest_handle_cast(arg,
> xen_mem_access_op_t));
> > +        break;
> > +
>
> Every patch should be able to compile without requiring a future patch.
> This is very useful when we need to bisect the tree.
>
> Actually I got the following error:
>
> In file included from /home/julieng/works/xen/xen/include/asm/system.h:6:0,
>                  from /home/julieng/works/xen/xen/include/xen/list.h:11,
>                  from /home/julieng/works/xen/xen/include/xen/mm.h:32,
>                  from memory.c:13:
> /home/julieng/works/xen/xen/include/public/arch-arm.h:196:41: error:
> unknown type name '__guest_handle_xen_mem_access_op_t'
>  #define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
>                                          ^
> /home/julieng/works/xen/xen/include/xen/mem_access.h:36:22: note: in
> expansion of macro 'XEN_GUEST_HANDLE_PARAM'
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
>                       ^
> memory.c: In function 'do_memory_op':
> memory.c:974:9: error: implicit declaration of function 'mem_access_memop'
> [-Werror=implicit-function-declaration]
>          rc = mem_access_memop(cmd, guest_handle_cast(arg,
> xen_mem_access_op_t));
>          ^
> memory.c:974:9: error: nested extern declaration of 'mem_access_memop'
> [-Werror=nested-externs]
>
> Regards,
>
> --
> Julien Grall
>


* Re: [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-01 14:22 ` [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-09-01 21:07   ` Julien Grall
  2014-09-02  9:06     ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-09-01 21:07 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, dgdegra

Hello Tamas,

On 01/09/14 10:22, Tamas K Lengyel wrote:
> This patch enables to store, set, check and deliver LPAE R/W mem_events.

I would expand the commit message a bit more to explain the logic of
mem_event on ARM.

I know you already explained it in patch #8 ("p2m type definitions and 
changes"), but I think it's more relevant to explain it where the "real" 
code is.

> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a6dea5b..e8f5671 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -10,6 +10,7 @@
>   #include <asm/event.h>
>   #include <asm/hardirq.h>
>   #include <asm/page.h>
> +#include <xen/radix-tree.h>
>   #include <xen/mem_event.h>
>   #include <public/mem_event.h>
>   #include <xen/mem_access.h>
> @@ -148,6 +149,74 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr)
>       return __map_domain_page(page);
>   }
>
> +static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
> +{
> +    /* First apply type permissions */
> +    switch (t)

switch ( t )

[..]

> +    /* Then restrict with access permissions */
> +    switch(a)

switch ( a )

[..]

>   /*
>    * Lookup the MFN corresponding to a domain's PFN.
>    *
> @@ -228,7 +297,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
>   }
>
>   static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
> -                               p2m_type_t t)
> +                               p2m_type_t t, p2m_access_t a)

It looks strange to modify nearly all prototypes except this one in #6.

[..]

> @@ -346,7 +385,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>            for ( i=0 ; i < LPAE_ENTRIES; i++ )
>            {
>                pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
> -                                    MATTR_MEM, t);
> +                                    MATTR_MEM, t, p2m->default_access);

I'm not familiar with Xen memaccess. Do we need to track access to 
intermediate page tables (i.e. levels 1 & 2)?

>                /*
>                 * First and second level super pages set p2m.table = 0, but
> @@ -366,7 +405,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>
>       unmap_domain_page(p);
>
> -    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
> +    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid, p2m->default_access);

Same question here.

[..]

> +int p2m_mem_access_check(paddr_t gpa, vaddr_t gla, struct npfec npfec)

This function only return 0/1, I would use bool_t here.

To reply on your answer on the last version, this function is not used 
in common code and x86 also use bool_t as return type.

> +{
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
> +    mem_event_request_t *req = NULL;
> +    xenmem_access_t xma;
> +    bool_t violation;
> +    int rc;
> +
> +    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
> +    if ( rc )
> +    {
> +        /* No setting was found, reinject */
> +        return 1;
> +    }
> +    else
> +    {
> +        /* First, handle rx2rw and n2rwx conversion automatically. */
> +        if ( npfec.write_access && xma == XENMEM_access_rx2rw )
> +        {
> +            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                    0, ~0, XENMEM_access_rw);
> +            ASSERT(rc == 0);

The ASSERT here is not a good idea. The call to radix_tree_insert 
in p2m_set_mem_access may fail because there is a 
memory allocation via xmalloc inside.

I suspect x86 adds the ASSERT because the mapping always exists and 
there is no memory allocation in set_entry. I will let the x86 folks 
confirm my supposition.

> +            return 0;
> +        }
> +        else if ( xma == XENMEM_access_n2rwx )
> +        {
> +            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                    0, ~0, XENMEM_access_rwx);
> +            ASSERT(rc == 0);

Same remark here.

> +        }
> +    }
> +
> +    /* Otherwise, check if there is a memory event listener, and send the message along */
> +    if ( !mem_event_check_ring( &v->domain->mem_event->access ) )
> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                                  "no mem_event listener VCPU %d, dom %d\n",
> +                                  v->vcpu_id, v->domain->domain_id);
> +            domain_crash(v->domain);
> +        }
> +        else
> +        {
> +            /* n2rwx was already handled */
> +            if ( xma != XENMEM_access_n2rwx)
> +            {
> +                /* A listener is not required, so clear the access
> +                 * restrictions. */
> +                rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                        0, ~0, XENMEM_access_rwx);
> +                ASSERT(rc == 0);

Same here.

> +void p2m_mem_access_resume(struct domain *d)
> +{
> +    mem_event_response_t rsp;
> +
> +    /* Pull all responses off the ring */
> +    while( mem_event_get_response(d, &d->mem_event->access, &rsp) )
> +    {
> +        struct vcpu *v;
> +
> +        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
> +            continue;
> +
> +        /* Validate the vcpu_id in the response. */
> +        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
> +            continue;
> +
> +        v = d->vcpu[rsp.vcpu_id];
> +
> +        /* Unpause domain */
> +        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> +            mem_event_vcpu_unpause(v);
> +    }
> +}

This function looks very similar to, if not a copy of, the x86 one. 
Can't we share the code?

> +/* Set access type for a region of pfns.
> + * If start_pfn == -1ul, sets the default access type */
> +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_access_t a;
> +    long rc = 0;
> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +#undef ACCESS
> +    };
> +
> +    switch ( access )
> +    {
> +    case 0 ... ARRAY_SIZE(memaccess) - 1:
> +        a = memaccess[access];
> +        break;
> +    case XENMEM_access_default:
> +        a = p2m->default_access;
> +        break;
> +    default:
> +        return -EINVAL;
> +    }
> +
> +    /* If request to set default access */
> +    if ( pfn == ~0ul )
> +    {
> +        p2m->default_access = a;
> +        return 0;
> +    }
> +
> +    spin_lock(&p2m->lock);
> +    for ( pfn += start; nr > start; ++pfn )

Why don't you reuse apply_p2m_changes? Everything to get/update a pte 
is there, and it contains a few optimizations.

Also this would avoid to duplicate the shatter code in p2m_set_entry.

> +    {
> +
> +        bool_t pte_update = p2m_set_entry(d, pfn_to_paddr(pfn), a);
> +
> +        if ( !pte_update )
> +            break;

Shouldn't you continue here? The other pages in the batch may require 
updates.

[..]

> +int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
> +                       xenmem_access_t *access)
> +{

I think this function is not complete. You only set the variable access 
when the page frame number has been found in the radix tree. But the 
page may use the default access, which could result in a trap in Xen.

On x86, we agreed that the default access value is stored in the entry. 
So if the default access value changes, Xen will retrieve the value 
previously stored in the pte.

With your current solution, reproducing this behavior on ARM will be 
difficult, unless you add every page to the radix tree.

But I don't have better idea for now.

> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    void *i;
> +    int index;
> +
> +    static const xenmem_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = XENMEM_access_##ac
> +            ACCESS(n),
> +            ACCESS(r),
> +            ACCESS(w),
> +            ACCESS(rw),
> +            ACCESS(x),
> +            ACCESS(rx),
> +            ACCESS(wx),
> +            ACCESS(rwx),
> +#undef ACCESS
> +    };
> +
> +    /* If request to get default access */
> +    if ( gpfn == ~0ull )
> +    {
> +        *access = memaccess[p2m->default_access];
> +        return 0;
> +    }
> +
> +    spin_lock(&p2m->lock);
> +
> +    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
> +
> +    spin_unlock(&p2m->lock);
> +
> +    if (!i)

if ( !i )

> +        return -ESRCH;
> +
> +    index = radix_tree_ptr_to_int(i);
> +
> +    if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
> +        return -ERANGE;

You are casting to unsigned all usage of index within this function. Why 
not directly define index as an "unsigned int"?

> +
> +    *access =  memaccess[ (unsigned) index];

memaccess[(unsigned)index];

> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index 0cc5b6d..b844f1d 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -262,6 +262,36 @@ enum dabt_size {
>       DABT_DOUBLE_WORD = 3,
>   };
>
> +/* Data abort data fetch status codes */
> +enum dabt_dfsc {
> +    DABT_DFSC_ADDR_SIZE_0       = 0b000000,
> +    DABT_DFSC_ADDR_SIZE_1       = 0b000001,
> +    DABT_DFSC_ADDR_SIZE_2       = 0b000010,
> +    DABT_DFSC_ADDR_SIZE_3       = 0b000011,
> +    DABT_DFSC_TRANSLATION_0     = 0b000100,
> +    DABT_DFSC_TRANSLATION_1     = 0b000101,
> +    DABT_DFSC_TRANSLATION_2     = 0b000110,
> +    DABT_DFSC_TRANSLATION_3     = 0b000111,
> +    DABT_DFSC_ACCESS_1          = 0b001001,
> +    DABT_DFSC_ACCESS_2          = 0b001010,
> +    DABT_DFSC_ACCESS_3          = 0b001011,	
> +    DABT_DFSC_PERMISSION_1      = 0b001101,
> +    DABT_DFSC_PERMISSION_2      = 0b001110,
> +    DABT_DFSC_PERMISSION_3      = 0b001111,
> +    DABT_DFSC_SYNC_EXT          = 0b010000,
> +    DABT_DFSC_SYNC_PARITY       = 0b011000,
> +    DABT_DFSC_SYNC_EXT_TTW_0    = 0b010100,
> +    DABT_DFSC_SYNC_EXT_TTW_1    = 0b010101,
> +    DABT_DFSC_SYNC_EXT_TTW_2    = 0b010110,
> +    DABT_DFSC_SYNC_EXT_TTW_3    = 0b010111,
> +    DABT_DFSC_SYNC_PARITY_TTW_0 = 0b011100,
> +    DABT_DFSC_SYNC_PARITY_TTW_1 = 0b011101,
> +    DABT_DFSC_SYNC_PARITY_TTW_2 = 0b011110,
> +    DABT_DFSC_SYNC_PARITY_TTW_3 = 0b011111,
> +    DABT_DFSC_ALIGNMENT         = 0b100001,
> +    DABT_DFSC_TLB_CONFLICT      = 0b110000,
> +};
> +

I'm not sure if it's necessary to define every possible value.

Regards,

-- 
Julien Grall


* Re: [PATCH v3 15/15] tools/tests: Enable xen-access on ARM
  2014-09-01 14:22 ` [PATCH v3 15/15] tools/tests: Enable xen-access " Tamas K Lengyel
@ 2014-09-01 21:26   ` Julien Grall
  2014-09-02  8:49     ` Tamas K Lengyel
  2014-09-02 12:15     ` Tamas K Lengyel
  0 siblings, 2 replies; 48+ messages in thread
From: Julien Grall @ 2014-09-01 21:26 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
	jbeulich, dgdegra

Hello Tamas,

On 01/09/14 10:22, Tamas K Lengyel wrote:
> diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
> index 65eef99..698355c 100644
> --- a/tools/tests/xen-access/Makefile
> +++ b/tools/tests/xen-access/Makefile
> @@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
>   CFLAGS += $(CFLAGS_libxenguest)
>   CFLAGS += $(CFLAGS_xeninclude)
>
> -TARGETS-y :=
> -TARGETS-$(CONFIG_X86) += xen-access
> -TARGETS := $(TARGETS-y)
> +TARGETS := xen-access

I would move the definition of HAS_MEM_ACCESS from arch/*/Rules.mk to 
config/*.mk and use the definition here to decide whether to build
xen-access.
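
Something along these lines, assuming HAS_MEM_ACCESS gets exported from
the per-architecture config fragments (a sketch, not a tested change):

    # config/arm32.mk, config/arm64.mk, config/x86_64.mk, ...:
    HAS_MEM_ACCESS := y

    # tools/tests/xen-access/Makefile:
    TARGETS-y :=
    TARGETS-$(HAS_MEM_ACCESS) += xen-access
    TARGETS := $(TARGETS-y)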

> @@ -520,7 +551,7 @@ int main(int argc, char *argv[])
>
>               /* Unregister for every event */
>               rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
> -            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
> +            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, GUEST_RAM_BASE_PFN,
>                                      xenaccess->domain_info->max_pages);

An ARM guest may contain multiple non-contiguous RAM banks. On Xen 4.5, 
there are 2 banks with a hole between them (see GUEST_RAM{0,1}_* in 
xen/include/public/arch-arm.h).

This change won't work with a guest using more than 3GB of RAM.

Regards,

-- 
Julien Grall


* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-01 16:25     ` Tamas K Lengyel
@ 2014-09-02  6:30       ` Jan Beulich
  2014-09-02  7:43         ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-09-02  6:30 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Ian Campbell, Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel

>>> On 01.09.14 at 18:25, <tamas.lengyel@zentific.com> wrote:
> On Mon, Sep 1, 2014 at 5:46 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> >>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
>> > --- a/xen/arch/x86/x86_64/compat/mm.c
>> > +++ b/xen/arch/x86/x86_64/compat/mm.c
>> > @@ -198,10 +198,6 @@ int compat_arch_memory_op(unsigned long cmd,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>> >          break;
>> >      }
>> >
>> > -    case XENMEM_access_op:
>> > -        rc = mem_access_memop(cmd, guest_handle_cast(arg,
>> xen_mem_access_op_t));
>> > -        break;
>> > -
>>
>> I don't think you can simply drop this.
>>
> 
> OK.
> 
> 
>>
>> > @@ -967,6 +968,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t)
>> u_domctl)
>> >      }
>> >      break;
>> >
>> > +    case XEN_DOMCTL_mem_event_op:
>> > +    {
>> > +        ret = mem_event_domctl(d, &op->u.mem_event_op,
>> > +                              guest_handle_cast(u_domctl, void));
>> > +        copyback = 1;
>> > +    }
>> > +    break;
>>
>> Please drop the unnecessary braces.
> 
> All other cases have braces around them here, even when not required, so
> this just follows the established style.

This is so because everyone uses that same argument, blindly
putting braces in place even when they're not needed. As pointed
out recently to someone else (also changing a domctl; ISTR it was
Aravindh), this breaks the indentation of brace-enclosed blocks, so
braces should be used only when indeed needed.

Jan


* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-01 20:51     ` Tamas K Lengyel
@ 2014-09-02  6:53       ` Jan Beulich
  2014-09-02  7:41         ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-09-02  6:53 UTC (permalink / raw)
  To: Julien Grall, Tamas K Lengyel
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel

>>> On 01.09.14 at 22:51, <tamas.lengyel@zentific.com> wrote:
> On Mon, Sep 1, 2014 at 8:11 PM, Julien Grall <julien.grall@linaro.org> wrote:
>> On 01/09/14 10:21, Tamas K Lengyel wrote:
>> > diff --git a/xen/common/memory.c b/xen/common/memory.c
>> > index cc8a3d0..4e530bf 100644
>> > --- a/xen/common/memory.c
>> > +++ b/xen/common/memory.c
>> > @@ -25,6 +25,7 @@
>> >   #include <asm/hardirq.h>
>> >   #include <asm/p2m.h>
>> >   #include <xen/numa.h>
>> > +#include <xen/mem_access.h>
>> >   #include <public/memory.h>
>>
> 
> @Julien: the problem causing your compile issue is here with the order of
> the inclusion of the headers. public/memory.h needs to be included before
> xen/mem_access.h, I'll fix it in the next iteration..

But then please by making xen/mem_access.h include the prereq
header rather than altering the order here.

Jan
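
A sketch of what that could look like in xen/include/xen/mem_access.h
(the guard name is illustrative):

    #ifndef _XEN_MEM_ACCESS_H
    #define _XEN_MEM_ACCESS_H

    #include <public/memory.h>   /* provides xen_mem_access_op_t */

    int mem_access_memop(unsigned long cmd,
                         XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);

    #endif /* _XEN_MEM_ACCESS_H */

With the prerequisite pulled in by the header itself, the include order
in memory.c no longer matters.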


* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-02  6:53       ` Jan Beulich
@ 2014-09-02  7:41         ` Tamas K Lengyel
  0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02  7:41 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel


On Tue, Sep 2, 2014 at 8:53 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 01.09.14 at 22:51, <tamas.lengyel@zentific.com> wrote:
> > On Mon, Sep 1, 2014 at 8:11 PM, Julien Grall <julien.grall@linaro.org>
> wrote:
> >> On 01/09/14 10:21, Tamas K Lengyel wrote:
> >> > diff --git a/xen/common/memory.c b/xen/common/memory.c
> >> > index cc8a3d0..4e530bf 100644
> >> > --- a/xen/common/memory.c
> >> > +++ b/xen/common/memory.c
> >> > @@ -25,6 +25,7 @@
> >> >   #include <asm/hardirq.h>
> >> >   #include <asm/p2m.h>
> >> >   #include <xen/numa.h>
> >> > +#include <xen/mem_access.h>
> >> >   #include <public/memory.h>
> >>
> >
> > @Julien: the problem causing your compile issue is here with the order of
> > the inclusion of the headers. public/memory.h needs to be included before
> > xen/mem_access.h, I'll fix it in the next iteration..
>
> But then please by making xen/mem_access.h include the prereq
> header rather than altering the order here.
>
> Jan
>

I was thinking of the same thing, that would be a lot more robust solution.

Tamas


* Re: [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-02  6:30       ` Jan Beulich
@ 2014-09-02  7:43         ` Tamas K Lengyel
  0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02  7:43 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel


On Tue, Sep 2, 2014 at 8:30 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 01.09.14 at 18:25, <tamas.lengyel@zentific.com> wrote:
> > On Mon, Sep 1, 2014 at 5:46 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >> >>> On 01.09.14 at 16:21, <tklengyel@sec.in.tum.de> wrote:
> >> > --- a/xen/arch/x86/x86_64/compat/mm.c
> >> > +++ b/xen/arch/x86/x86_64/compat/mm.c
> >> > @@ -198,10 +198,6 @@ int compat_arch_memory_op(unsigned long cmd,
> >> XEN_GUEST_HANDLE_PARAM(void) arg)
> >> >          break;
> >> >      }
> >> >
> >> > -    case XENMEM_access_op:
> >> > -        rc = mem_access_memop(cmd, guest_handle_cast(arg,
> >> xen_mem_access_op_t));
> >> > -        break;
> >> > -
> >>
> >> I don't think you can simply drop this.
> >>
> >
> > OK.
> >
> >
> >>
> >> > @@ -967,6 +968,14 @@ long
> do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t)
> >> u_domctl)
> >> >      }
> >> >      break;
> >> >
> >> > +    case XEN_DOMCTL_mem_event_op:
> >> > +    {
> >> > +        ret = mem_event_domctl(d, &op->u.mem_event_op,
> >> > +                              guest_handle_cast(u_domctl, void));
> >> > +        copyback = 1;
> >> > +    }
> >> > +    break;
> >>
> >> Please drop the unnecessary braces.
> >
> > All other cases have braces around them here, even when not required, so
> > this just follows the established style.
>
> This is so because everyone uses that same argument, blindly
> putting braces in place even when they're not needed. As pointed
> out recently to someone else (also changing a domctl, istr it was
> Aravindh), this breaks indentation of brace-enclosed blocks, so
> should be used only when indeed needed.
>
> Jan
>

Makes sense, I'll fix the style in the next iteration.

Tamas


* Re: [PATCH v3 09/15] xen/arm: Add set access required domctl
  2014-09-01 19:10   ` Julien Grall
@ 2014-09-02  7:48     ` Tamas K Lengyel
  2014-09-02  8:17       ` Jan Beulich
  0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02  7:48 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 1, 2014 at 9:10 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
>
> On 01/09/14 10:22, Tamas K Lengyel wrote:
>
>> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>> ---
>>   xen/arch/arm/domctl.c | 13 +++++++++++++
>>   1 file changed, 13 insertions(+)
>>
>> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>> index 45974e7..7cf719c 100644
>> --- a/xen/arch/arm/domctl.c
>> +++ b/xen/arch/arm/domctl.c
>> @@ -31,6 +31,19 @@ long arch_do_domctl(struct xen_domctl *domctl, struct
>> domain *d,
>>           return p2m_cache_flush(d, s, e);
>>       }
>>
>> +    case XEN_DOMCTL_set_access_required:
>> +    {
>> +        struct p2m_domain* p2m;
>> +
>> +        if ( current->domain == d )
>> +            return -EPERM;
>> +
>> +        p2m = p2m_get_hostp2m(d);
>>
>
> p2m_get_hostp2m is only defined in the next patch (#9), therefore this
> patch won't compile.
>
> Regards,
>
> --
> Julien Grall
>

Doh. Thanks for spotting it.

I wish there was some automated testing infrastructure that would catch
dumb mistakes like that. Do you have a custom script setup that you run
tests with, or some continuous integration tests running on git
branches? I have found it very useful on GitHub to do automated compile
+ feature tests with Jenkins for pull requests.

Tamas


* Re: [PATCH v3 09/15] xen/arm: Add set access required domctl
  2014-09-02  7:48     ` Tamas K Lengyel
@ 2014-09-02  8:17       ` Jan Beulich
  2014-09-02  9:23         ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-09-02  8:17 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Ian Campbell, Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel

>>> On 02.09.14 at 09:48, <tamas.lengyel@zentific.com> wrote:
> I wish there was some automated testing infrastructure that would catch
> dumb mistakes like that. Do you guys have your own custom script setup that
> you do tests with or some continuous integration tests running on git
> branches? I have found it very useful on github to do automated compile +
> feature tests with Jenkins for pull requests.

Do you really need automation to build-test your series at each
individual patch?

Jan


* Re: [PATCH v3 15/15] tools/tests: Enable xen-access on ARM
  2014-09-01 21:26   ` Julien Grall
@ 2014-09-02  8:49     ` Tamas K Lengyel
  2014-09-02 12:15     ` Tamas K Lengyel
  1 sibling, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02  8:49 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 1, 2014 at 11:26 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
>
> On 01/09/14 10:22, Tamas K Lengyel wrote:
>
>> diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/
>> Makefile
>> index 65eef99..698355c 100644
>> --- a/tools/tests/xen-access/Makefile
>> +++ b/tools/tests/xen-access/Makefile
>> @@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
>>   CFLAGS += $(CFLAGS_libxenguest)
>>   CFLAGS += $(CFLAGS_xeninclude)
>>
>> -TARGETS-y :=
>> -TARGETS-$(CONFIG_X86) += xen-access
>> -TARGETS := $(TARGETS-y)
>> +TARGETS := xen-access
>>
>
> I would move the definition of HAS_MEM_ACCESS from arch/*/Rules.mk to
> config/*.mk and use the defition here to build or not xen-access.
>
>
>  @@ -520,7 +551,7 @@ int main(int argc, char *argv[])
>>
>>               /* Unregister for every event */
>>               rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx,
>> ~0ull, 0);
>> -            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
>> +            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx,
>> GUEST_RAM_BASE_PFN,
>>                                      xenaccess->domain_info->max_pages);
>>
>
> ARM may contains multiple banks non-contiguous banks. On Xen 4.5, there is
> 2 banks with a hole (see GUEST_RAM{0,1}_* in xen/include/public/arch-arm.h)
> .
>
> This change won't work with guest using more than 3G of RAM.
>

I guess it would only partially work. I'll add an #ifdef CONFIG_ARM here to
set the second bank's permissions also.
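
A sketch of that, using the GUEST_RAM{0,1}_* constants from
xen/include/public/arch-arm.h mentioned above (the shifts and the use
of the full bank sizes are illustrative; nr should really be clamped to
the pages actually populated in each bank):

    /* Unregister for every event */
    rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
    rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx,
                           GUEST_RAM0_BASE >> XC_PAGE_SHIFT,
                           GUEST_RAM0_SIZE >> XC_PAGE_SHIFT);
    rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx,
                           GUEST_RAM1_BASE >> XC_PAGE_SHIFT,
                           GUEST_RAM1_SIZE >> XC_PAGE_SHIFT);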

Tamas


>
> Regards,
>
> --
> Julien Grall
>


* Re: [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-01 21:07   ` Julien Grall
@ 2014-09-02  9:06     ` Tamas K Lengyel
  2014-09-03 20:20       ` Julien Grall
  0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02  9:06 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 1, 2014 at 11:07 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
>
> On 01/09/14 10:22, Tamas K Lengyel wrote:
>
>> This patch enables to store, set, check and deliver LPAE R/W mem_events.
>>
>
> I would expand a bit more the commit message to explain the logic of mem
> event in ARM.
>
> I know you already explain it in patch #8 ("p2m type definitions and
> changes") but I think it's more relevant to explain where the "real" code
> is.


Ack.


>
>
>  diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index a6dea5b..e8f5671 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -10,6 +10,7 @@
>>   #include <asm/event.h>
>>   #include <asm/hardirq.h>
>>   #include <asm/page.h>
>> +#include <xen/radix-tree.h>
>>   #include <xen/mem_event.h>
>>   #include <public/mem_event.h>
>>   #include <xen/mem_access.h>
>> @@ -148,6 +149,74 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m,
>> paddr_t addr)
>>       return __map_domain_page(page);
>>   }
>>
>> +static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
>> +{
>> +    /* First apply type permissions */
>> +    switch (t)
>>
>
> switch ( t )
>
>
Ack.


> [..]
>
>
>  +    /* Then restrict with access permissions */
>> +    switch(a)
>>
>
> switch ( a )
>
> [..]
>
> Ack.


>
>    /*
>>    * Lookup the MFN corresponding to a domain's PFN.
>>    *
>> @@ -228,7 +297,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
>>   }
>>
>>   static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
>> -                               p2m_type_t t)
>> +                               p2m_type_t t, p2m_access_t a)
>>
>
> It looks strange to modify nearly all prototypes except this one in #6.
>
>
Ack.


> [..]
>
>
>  @@ -346,7 +385,7 @@ static int p2m_create_table(struct domain *d, lpae_t
>> *entry,
>>            for ( i=0 ; i < LPAE_ENTRIES; i++ )
>>            {
>>                pte = mfn_to_p2m_entry(base_pfn +
>> (i<<(level_shift-LPAE_SHIFT)),
>> -                                    MATTR_MEM, t);
>> +                                    MATTR_MEM, t, p2m->default_access);
>>
>
> I'm not familiar with xen memaccess. Do we need to track access to
> intermediate page table (i.e Level 1 & 2)?
>
>
Yes: we need to be able to tell whether a violation happened because
mem-access restricted the pte permissions or for some other reason. From
that perspective there is no difference whether the violation happened on
a large or a 4k page. From a tracing perspective it makes more sense to
have finer granularity (4k pages only), so I do shatter the large pages
when setting mem-access permissions.


>
>                 /*
>>                 * First and second level super pages set p2m.table = 0,
>> but
>> @@ -366,7 +405,7 @@ static int p2m_create_table(struct domain *d, lpae_t
>> *entry,
>>
>>       unmap_domain_page(p);
>>
>> -    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
>> +    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid,
>> p2m->default_access);
>>
>
> Same question here.
>
> [..]
>
>
>  +int p2m_mem_access_check(paddr_t gpa, vaddr_t gla, struct npfec npfec)
>>
>
> This function only return 0/1, I would use bool_t here.
>
> To reply on your answer on the last version, this function is not used in
> common code and x86 also use bool_t as return type.
>
>
You are right, I confused it with p2m_set_mem_access.


>
>  +{
>> +    struct vcpu *v = current;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
>> +    mem_event_request_t *req = NULL;
>> +    xenmem_access_t xma;
>> +    bool_t violation;
>> +    int rc;
>> +
>> +    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
>> +    if ( rc )
>> +    {
>> +        /* No setting was found, reinject */
>> +        return 1;
>> +    }
>> +    else
>> +    {
>> +        /* First, handle rx2rw and n2rwx conversion automatically. */
>> +        if ( npfec.write_access && xma == XENMEM_access_rx2rw )
>> +        {
>> +            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
>> +                                    0, ~0, XENMEM_access_rw);
>> +            ASSERT(rc == 0);
>>
>
> It's not a good idea the ASSERT here. The function call radix_tree_insert
> in p2m_set_mem_access may fail because there is a memory allocation via
> xmalloc inside.
>
> I suspect x86 adds the ASSERT because the mapping always exists and there
> is no memory allocation in set_entry. I will let x86 folks confirm or not
> my purpose.
>
>
We can certainly adjust it here if required; this is mainly just
copy-paste from x86.
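
A sketch of an ASSERT-free variant; warning and crashing the domain, as
the no-listener path already does, is one possible policy, not the only
one:

    rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
                            0, ~0, XENMEM_access_rw);
    if ( rc )
    {
        /* The radix tree insertion may have failed (e.g. -ENOMEM). */
        gdprintk(XENLOG_WARNING,
                 "Failed to relax mem_access permissions: %d\n", rc);
        domain_crash(v->domain);
    }
    return 0;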


>
>  +            return 0;
>> +        }
>> +        else if ( xma == XENMEM_access_n2rwx )
>> +        {
>> +            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
>> +                                    0, ~0, XENMEM_access_rwx);
>> +            ASSERT(rc == 0);
>>
>
> Same remark here.
>
>
Ack.


>
>  +        }
>> +    }
>> +
>> +    /* Otherwise, check if there is a memory event listener, and send
>> the message along */
>> +    if ( !mem_event_check_ring( &v->domain->mem_event->access ) )
>> +    {
>> +        /* No listener */
>> +        if ( p2m->access_required )
>> +        {
>> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
>> +                                  "no mem_event listener VCPU %d, dom
>> %d\n",
>> +                                  v->vcpu_id, v->domain->domain_id);
>> +            domain_crash(v->domain);
>> +        }
>> +        else
>> +        {
>> +            /* n2rwx was already handled */
>> +            if ( xma != XENMEM_access_n2rwx)
>> +            {
>> +                /* A listener is not required, so clear the access
>> +                 * restrictions. */
>> +                rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
>> +                                        0, ~0, XENMEM_access_rwx);
>> +                ASSERT(rc == 0);
>>
>
> Same here.
>
>
Ack.


>
>  +void p2m_mem_access_resume(struct domain *d)
>> +{
>> +    mem_event_response_t rsp;
>> +
>> +    /* Pull all responses off the ring */
>> +    while( mem_event_get_response(d, &d->mem_event->access, &rsp) )
>> +    {
>> +        struct vcpu *v;
>> +
>> +        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
>> +            continue;
>> +
>> +        /* Validate the vcpu_id in the response. */
>> +        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>> +            continue;
>> +
>> +        v = d->vcpu[rsp.vcpu_id];
>> +
>> +        /* Unpause domain */
>> +        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
>> +            mem_event_vcpu_unpause(v);
>> +    }
>> +}
>>
>
> This function looks very similar, if not a copy, of the x86 one. Can't we
> share the code?
>
>
That would make sense: abstract it out into common mem_access code.
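
I.e. lift the loop into common code more or less verbatim; the name and
location below are assumptions:

    /* xen/common/mem_access.c */
    void mem_access_resume(struct domain *d)
    {
        mem_event_response_t rsp;

        /* Pull all responses off the ring. */
        while ( mem_event_get_response(d, &d->mem_event->access, &rsp) )
        {
            struct vcpu *v;

            if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
                continue;

            /* Validate the vcpu_id in the response. */
            if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
                continue;

            v = d->vcpu[rsp.vcpu_id];

            /* Unpause the vcpu if the listener had paused it. */
            if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
                mem_event_vcpu_unpause(v);
        }
    }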


>
>  +/* Set access type for a region of pfns.
>> + * If start_pfn == -1ul, sets the default access type */
>> +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
>> +                        uint32_t start, uint32_t mask, xenmem_access_t
>> access)
>> +{
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    p2m_access_t a;
>> +    long rc = 0;
>> +
>> +    static const p2m_access_t memaccess[] = {
>> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
>> +        ACCESS(n),
>> +        ACCESS(r),
>> +        ACCESS(w),
>> +        ACCESS(rw),
>> +        ACCESS(x),
>> +        ACCESS(rx),
>> +        ACCESS(wx),
>> +        ACCESS(rwx),
>> +#undef ACCESS
>> +    };
>> +
>> +    switch ( access )
>> +    {
>> +    case 0 ... ARRAY_SIZE(memaccess) - 1:
>> +        a = memaccess[access];
>> +        break;
>> +    case XENMEM_access_default:
>> +        a = p2m->default_access;
>> +        break;
>> +    default:
>> +        return -EINVAL;
>> +    }
>> +
>> +    /* If request to set default access */
>> +    if ( pfn == ~0ul )
>> +    {
>> +        p2m->default_access = a;
>> +        return 0;
>> +    }
>> +
>> +    spin_lock(&p2m->lock);
>> +    for ( pfn += start; nr > start; ++pfn )
>>
>
> Why don't you reuse apply_p2m_changes? everything to get/update a pte is
> there and it contains few optimization.
>
> Also this would avoid to duplicate the shatter code in p2m_set_entry.


Honestly, I tried to wrap my head around apply_p2m_changes and it is
already quite complex. While I see I could apply the type/access
permissions with it over a range, I'm not entirely sure how I would make it
force-shatter all large pages. It was just easier (for now) to do it
manually.


>
>
>  +    {
>> +
>> +        bool_t pte_update = p2m_set_entry(d, pfn_to_paddr(pfn), a);
>> +
>> +        if ( !pte_update )
>> +            break;
>>
>
> Shouldn't you continue here? The other pages in the batch may require
> updates.
>
>
This is the same approach as in x86.


> [..]
>
>
>  +int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
>> +                       xenmem_access_t *access)
>> +{
>>
>
> I think this function is not complete. You only set the variable access
> when the page frame number has been found in the radix tree. But the page
> may use the default access which could result to a trap in Xen.
>
> On x86, We agree that the default access value is stored in the entry. So
> if the default access value changes, Xen will retrieved the value
> previously stored in the pte.
>
> With your current solution, reproduce this behavior on ARM will be
> difficult, unless you add every page in the radix tree.
>
> But I don't have better idea for now.


Right, I missed page additions with default_access. With so few
software-programmable bits available in the PTE, we have no other choice
but to store the permission settings separately. I just need to make sure
that the radix tree is updated when a pte is added/removed.
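
Roughly like this, wherever a pte is created or destroyed (error
handling and locking elided; radix_tree_int_to_ptr is the counterpart
of the radix_tree_ptr_to_int call already used in p2m_get_mem_access):

    /* When a pte for gpfn is written with access type a: */
    rc = radix_tree_insert(&p2m->mem_access_settings, gpfn,
                           radix_tree_int_to_ptr(a));

    /* When the mapping for gpfn is removed: */
    radix_tree_delete(&p2m->mem_access_settings, gpfn);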


>
>
>  +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    void *i;
>> +    int index;
>> +
>> +    static const xenmem_access_t memaccess[] = {
>> +#define ACCESS(ac) [XENMEM_access_##ac] = XENMEM_access_##ac
>> +            ACCESS(n),
>> +            ACCESS(r),
>> +            ACCESS(w),
>> +            ACCESS(rw),
>> +            ACCESS(x),
>> +            ACCESS(rx),
>> +            ACCESS(wx),
>> +            ACCESS(rwx),
>> +#undef ACCESS
>> +    };
>> +
>> +    /* If request to get default access */
>> +    if ( gpfn == ~0ull )
>> +    {
>> +        *access = memaccess[p2m->default_access];
>> +        return 0;
>> +    }
>> +
>> +    spin_lock(&p2m->lock);
>> +
>> +    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
>> +
>> +    spin_unlock(&p2m->lock);
>> +
>> +    if (!i)
>>
>
> if ( !i )
>
>
Ack.


>
>  +        return -ESRCH;
>> +
>> +    index = radix_tree_ptr_to_int(i);
>> +
>> +    if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
>> +        return -ERANGE;
>>
>
> You are casting to unsigned all usage of index within this function. Why
> not directly define index as an "unsigned int"?
>
>
The radix tree helper returns an int, but I guess I could do that.
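
I.e. something like (sketch):

    unsigned int index = radix_tree_ptr_to_int(i);

    if ( index >= ARRAY_SIZE(memaccess) )
        return -ERANGE;

    *access = memaccess[index];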


>
>  +
>> +    *access =  memaccess[ (unsigned) index];
>>
>
> memaccess[(unsigned)index];
>
>
>  diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/
>> processor.h
>> index 0cc5b6d..b844f1d 100644
>> --- a/xen/include/asm-arm/processor.h
>> +++ b/xen/include/asm-arm/processor.h
>> @@ -262,6 +262,36 @@ enum dabt_size {
>>       DABT_DOUBLE_WORD = 3,
>>   };
>>
>> +/* Data abort data fetch status codes */
>> +enum dabt_dfsc {
>> +    DABT_DFSC_ADDR_SIZE_0       = 0b000000,
>> +    DABT_DFSC_ADDR_SIZE_1       = 0b000001,
>> +    DABT_DFSC_ADDR_SIZE_2       = 0b000010,
>> +    DABT_DFSC_ADDR_SIZE_3       = 0b000011,
>> +    DABT_DFSC_TRANSLATION_0     = 0b000100,
>> +    DABT_DFSC_TRANSLATION_1     = 0b000101,
>> +    DABT_DFSC_TRANSLATION_2     = 0b000110,
>> +    DABT_DFSC_TRANSLATION_3     = 0b000111,
>> +    DABT_DFSC_ACCESS_1          = 0b001001,
>> +    DABT_DFSC_ACCESS_2          = 0b001010,
>> +    DABT_DFSC_ACCESS_3          = 0b001011,
>> +    DABT_DFSC_PERMISSION_1      = 0b001101,
>> +    DABT_DFSC_PERMISSION_2      = 0b001110,
>> +    DABT_DFSC_PERMISSION_3      = 0b001111,
>> +    DABT_DFSC_SYNC_EXT          = 0b010000,
>> +    DABT_DFSC_SYNC_PARITY       = 0b011000,
>> +    DABT_DFSC_SYNC_EXT_TTW_0    = 0b010100,
>> +    DABT_DFSC_SYNC_EXT_TTW_1    = 0b010101,
>> +    DABT_DFSC_SYNC_EXT_TTW_2    = 0b010110,
>> +    DABT_DFSC_SYNC_EXT_TTW_3    = 0b010111,
>> +    DABT_DFSC_SYNC_PARITY_TTW_0 = 0b011100,
>> +    DABT_DFSC_SYNC_PARITY_TTW_1 = 0b011101,
>> +    DABT_DFSC_SYNC_PARITY_TTW_2 = 0b011110,
>> +    DABT_DFSC_SYNC_PARITY_TTW_3 = 0b011111,
>> +    DABT_DFSC_ALIGNMENT         = 0b100001,
>> +    DABT_DFSC_TLB_CONFLICT      = 0b110000,
>> +};
>> +
>>
>
> I'm not sure if it's necessary to define every possible value.
>
>
I figured it would make sense for the sake of completeness. I really only
use the PERMISSION values, so the rest could be removed.
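
The trimmed-down enum would then be just (a sketch; the encodings are
the ones from the hunk above):

    /* Data abort data fetch status codes */
    enum dabt_dfsc {
        DABT_DFSC_PERMISSION_1 = 0b001101,
        DABT_DFSC_PERMISSION_2 = 0b001110,
        DABT_DFSC_PERMISSION_3 = 0b001111,
    };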

Thanks,
Tamas


> Regards,
>
> --
> Julien Grall
>
>
>



* Re: [PATCH v3 09/15] xen/arm: Add set access required domctl
  2014-09-02  8:17       ` Jan Beulich
@ 2014-09-02  9:23         ` Tamas K Lengyel
  0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02  9:23 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
	Tamas K Lengyel


On Tue, Sep 2, 2014 at 10:17 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 02.09.14 at 09:48, <tamas.lengyel@zentific.com> wrote:
> > I wish there was some automated testing infrastructure that would catch
> > dumb mistakes like that. Do you guys have your own custom script setup
> that
> > you do tests with or some continuous integration tests running on git
> > branches? I have found it very useful on github to do automated compile +
> > feature tests with Jenkins for pull requests.
>
> Do you really need automation to build test your series after each
> individual place?
>
> Jan
>
>
It would be convenient for sure. I try to test them every time, but since
the series has been growing and I'm shuffling bits and pieces, apparently
I keep missing which one got tested and which one didn't. It's a learning
process working with such a large series without automated tests. I
honestly prefer GitHub's PR system over mailing-list patch bombs, as dumb
mistakes like this can easily be fixed in an open PR without having to
open a new one and wasting people's time combing through a new series
just to see a minor fix.
Tamas


* Re: [PATCH v3 00/15] Mem_event and mem_access for ARM
  2014-09-01 19:56 ` [PATCH v3 00/15] Mem_event and mem_access for ARM Julien Grall
@ 2014-09-02  9:47   ` Tamas K Lengyel
  0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02  9:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel



On Mon, Sep 1, 2014 at 9:56 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas
>
> On 01/09/14 10:21, Tamas K Lengyel wrote:
> > The ARM virtualization extension provides 2-stage paging, a similar
> mechanisms
> > to Intel's EPT, which can be used to trace the memory accesses performed
> by
> > the guest systems. This series moves the mem_access and mem_event
> codebase
> > into Xen common, performs some code cleanup and architecture specific
> division
> > of components, then sets up the necessary infrastructure in the ARM code
> > to deliver the event on R/W/X traps. Finally, we turn on the compilation
> of
> > mem_access and mem_event on ARM and perform the necessary changes to the
> > tools side.
> >
> > This version of the series has been fully tested and is functional on an
> > Arndale board.
>
> domain_get_maximum_gpfn used in common code is defined as -ENOSYS.
>
>
Right, and I don't really understand why the safety check that uses this
function never triggers in mem_access when the function just returns an
-errno value.
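My best guess is that the -errno simply vanishes in an unsigned
comparison. A sketch of the shape I assume the common range check has:

if ( (mao.pfn != ~0ull) &&
     (mao.nr < 1 || ((mao.pfn + mao.nr - 1) < mao.pfn) ||
      ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
    return -EINVAL;

domain_get_maximum_gpfn() returns unsigned long, so the ARM stub's -ENOSYS
comes back as ~0UL - 37: effectively no upper bound, and the check
silently accepts any pfn.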

> I sent a patch about it to the mailing list a month ago (see patch
> below).
>
> Ian: Can you reconsider applying the patch? (I will also reply to the
> thread.) FYI, I have a patch to fix xc_dom_gnttab_hvm_seed in libxc. I
> will try to send it next week.
>
> Regards,
>
> ====================================================================
>
> commit 7aa592b7a6f357b0003cd523e446d9d91dc96730
> Author: Julien Grall <julien.grall@linaro.org>
> Date:   Mon Jun 30 17:21:13 2014 +0100
>
>     xen/arm: Implement domain_get_maximum_gpfn
>
>     The function domain_get_maximum_gpfn returns the maximum gpfn ever
>     mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this
>     purpose.
>
>     Signed-off-by: Julien Grall <julien.grall@linaro.org>
>
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0a243b0..e4a1e5e 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -954,7 +954,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long
> mem_type)
>
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return -ENOSYS;
> +    return d->arch.p2m.max_mapped_gfn;
>  }
>
>  void share_xen_page_with_guest(struct page_info *page,
>
>
>
> --
> Julien Grall
>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 15/15] tools/tests: Enable xen-access on ARM
  2014-09-01 21:26   ` Julien Grall
  2014-09-02  8:49     ` Tamas K Lengyel
@ 2014-09-02 12:15     ` Tamas K Lengyel
  2014-09-03 20:27       ` Julien Grall
  1 sibling, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-02 12:15 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel



On Mon, Sep 1, 2014 at 11:26 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
>
> On 01/09/14 10:22, Tamas K Lengyel wrote:
>
>> diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/
>> Makefile
>> index 65eef99..698355c 100644
>> --- a/tools/tests/xen-access/Makefile
>> +++ b/tools/tests/xen-access/Makefile
>> @@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
>>   CFLAGS += $(CFLAGS_libxenguest)
>>   CFLAGS += $(CFLAGS_xeninclude)
>>
>> -TARGETS-y :=
>> -TARGETS-$(CONFIG_X86) += xen-access
>> -TARGETS := $(TARGETS-y)
>> +TARGETS := xen-access
>>
>
> I would move the definition of HAS_MEM_ACCESS from arch/*/Rules.mk to
> config/*.mk and use the defition here to build or not xen-access.


I'm not a fan of that approach. How about this:

TARGETS-y :=
TARGETS-$(CONFIG_X86) += xen-access
TARGETS-$(CONFIG_ARM) += xen-access
TARGETS := $(TARGETS-y)


>
>  @@ -520,7 +551,7 @@ int main(int argc, char *argv[])
>>
>>               /* Unregister for every event */
>>               rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx,
>> ~0ull, 0);
>> -            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
>> +            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx,
>> GUEST_RAM_BASE_PFN,
>>                                      xenaccess->domain_info->max_pages);
>>
>
> ARM guests may contain multiple non-contiguous memory banks. On Xen 4.5,
> there are 2 banks with a hole between them (see GUEST_RAM{0,1}_* in
> xen/include/public/arch-arm.h).
>
> This change won't work with guests using more than 3GB of RAM.
>
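Ah, good point. A per-bank loop should cover that -- a rough sketch,
assuming the GUEST_RAM{0,1}_* defines from arch-arm.h and that resetting
access on an unpopulated range is harmless:

/* Reset the default access on each potential guest RAM bank. */
static const struct { uint64_t base, size; } banks[] = {
    { GUEST_RAM0_BASE, GUEST_RAM0_SIZE },
    { GUEST_RAM1_BASE, GUEST_RAM1_SIZE },
};
unsigned int i;

for ( i = 0; i < sizeof(banks) / sizeof(banks[0]); i++ )
{
    rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx,
                           banks[i].base >> XC_PAGE_SHIFT,
                           banks[i].size >> XC_PAGE_SHIFT);
    if ( rc < 0 )
        break;
}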
> Regards,
>
> --
> Julien Grall
>
>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 13/15] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-09-01 14:22 ` [PATCH v3 13/15] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-09-03 14:38   ` Daniel De Graaf
  0 siblings, 0 replies; 48+ messages in thread
From: Daniel De Graaf @ 2014-09-03 14:38 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich

On 09/01/2014 10:22 AM, Tamas K Lengyel wrote:
> This patch sets up the infrastructure to support mem_access and mem_event
> on ARM and turns on compilation. We define the required XSM functions.
>
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

-- 
Daniel De Graaf
National Security Agency

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-02  9:06     ` Tamas K Lengyel
@ 2014-09-03 20:20       ` Julien Grall
  2014-09-03 21:56         ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-09-03 20:20 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


Hello Tamas,

On 02/09/14 02:06, Tamas K Lengyel wrote:
> Honestly, I tried to wrap my head around apply_p2m_changes and it is
> already quite complex. While I see I could apply the type/access
> permissions with it over a range, I'm not entirely sure how I would make
> it force-shatter all large pages. It was just easier (for now) to do it
> manually.

To shatter large pages, you can take a look at the REMOVE case in
apply_one_level. I would even create a helper to shatter a page.

apply_one_level doesn't look very complex; almost everything is in a
case. I would prefer that you extend the function rather than create a
new one.

>
>
>         +    {
>         +
>         +        bool_t pte_update = p2m_set_entry(d, pfn_to_paddr(pfn), a);
>         +
>         +        if ( !pte_update )
>         +            break;
>
>
>     Shouldn't you continue here? The other pages in the batch may
>     require updates.
>
>
> This is the same approach as in x86.

Hmmm ... no. x86 will break if an error has occurred (i.e. rc != 0) and
return the rc later.

Here, your p2m_set_entry will return a boolean (I don't really understand
why, because you are mixing bool and int for rc within this function).

If p2m_set_entry returns false, you will break in p2m_set_mem_access and
return 0 (rc has been initialized to 0 earlier). Therefore the userspace
application will think everything has been correctly updated, but it's
wrong!

> Right, I missed page additions with default_access. With so few software
> programmable bits available in the PTE, we have no other choice but to
> store the permission settings separately. I just need to make sure that
> the radix tree is updated when a pte is added/removed.

I'm not sure I fully understand your plan. Do you intend to add every
page to the radix tree? If so, I'm a bit worried about the size of the
radix tree. Xen will waste memory when mem_access is not used (IMHO, the
feature is only used in a few cases).

>         +        return -ESRCH;
>         +
>         +    index = radix_tree_ptr_to_int(i);
>         +
>         +    if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
>         +        return -ERANGE;
>
>
>     You are casting to unsigned all usage of index within this function.
>     Why not directly define index as an "unsigned int"?
>
>
> The radix tree returns an int but I guess could do that.

Which is fine because, IIRC, the compiler will implicitly cast the value
to unsigned.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 15/15] tools/tests: Enable xen-access on ARM
  2014-09-02 12:15     ` Tamas K Lengyel
@ 2014-09-03 20:27       ` Julien Grall
  2014-09-03 22:06         ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-09-03 20:27 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel

Hello Tamas,

On 02/09/14 05:15, Tamas K Lengyel wrote:
> On Mon, Sep 1, 2014 at 11:26 PM, Julien Grall <julien.grall@linaro.org>
> wrote:
>     On 01/09/14 10:22, Tamas K Lengyel wrote:
>
>         diff --git a/tools/tests/xen-access/Makefile
>         b/tools/tests/xen-access/Makefile
>         index 65eef99..698355c 100644
>         --- a/tools/tests/xen-access/Makefile
>         +++ b/tools/tests/xen-access/Makefile
>         @@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
>            CFLAGS += $(CFLAGS_libxenguest)
>            CFLAGS += $(CFLAGS_xeninclude)
>
>         -TARGETS-y :=
>         -TARGETS-$(CONFIG_X86) += xen-access
>         -TARGETS := $(TARGETS-y)
>         +TARGETS := xen-access
>
>
>     I would move the definition of HAS_MEM_ACCESS from arch/*/Rules.mk
>     to config/*.mk and use the definition here to decide whether to
>     build xen-access.
>
>
> I'm not a fan of that approach. How about this:

What is the problem with this solution? xen-access should be compiled
when Xen has HAS_MEM_ACCESS=y.
As you enabled HAS_MEM_ACCESS by default when the architecture is
supported, using TARGETS-$(HAS_MEM_ACCESS) += xen-access will make
supporting a new architecture easier.
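Something like this (a rough sketch, assuming the tools build picks up
config/$(XEN_TARGET_ARCH).mk via Config.mk):

# config/arm32.mk, config/arm64.mk, config/x86_*.mk:
HAS_MEM_ACCESS := y

# tools/tests/xen-access/Makefile:
TARGETS-y :=
TARGETS-$(HAS_MEM_ACCESS) += xen-access
TARGETS := $(TARGETS-y)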

Anyway, I will let the maintainers decide what is the best solution.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-03 20:20       ` Julien Grall
@ 2014-09-03 21:56         ` Tamas K Lengyel
  2014-09-08 20:41           ` Julien Grall
  0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-03 21:56 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel



On Wed, Sep 3, 2014 at 10:20 PM, Julien Grall <julien.grall@linaro.org>
wrote:

>
> Hello Tamas,
>
>
> On 02/09/14 02:06, Tamas K Lengyel wrote:
>
>> Honestly, I tried to wrap my head around apply_p2m_changes and it is
>> already quite complex. While I see I could apply the type/access
>> permissions with it over a range, I'm not entirely sure how I would make
>> it force-shatter all large pages. It was just easier (for now) to do it
>> manually.
>>
>
> To shatter large pages, you can take a look at the REMOVE case in
> apply_one_level. I would even create a helper to shatter a page.
>
> apply_one_level doesn't look very complex; almost everything is in a
> case. I would prefer that you extend the function rather than create a
> new one.
>
>
I'm not entirely sure I could easily extend
apply_one_level/apply_p2m_changes without significant changes and the
potential to affect something else along the way. I can give it a try,
though I'm a bit hesitant. Do you see an immediate performance reason to
hook it into those functions instead of keeping it separate? The current
function is pretty straightforward in what it does.


>
>>
>>         +    {
>>         +
>>         +        bool_t pte_update = p2m_set_entry(d, pfn_to_paddr(pfn),
>> a);
>>         +
>>         +        if ( !pte_update )
>>         +            break;
>>
>>
>>     Shouldn't you continue here? The other pages in the batch may
>>     require updates.
>>
>>
>> This is the same approach as in x86.
>>
>
> Hmmm ... no. x86 will break if an error has occurred (i.e. rc != 0) and
> return the rc later.
>
> Here, your p2m_set_entry will return a boolean (I don't really understand
> why, because you are mixing bool and int for rc within this function).
>
> If p2m_set_entry returns false, you will break in p2m_set_mem_access and
> return 0 (rc has been initialized to 0 earlier). Therefore the userspace
> application will think everything has been correctly updated, but it's
> wrong!


Hm, I guess I should set rc in that case to an appropriate -errno. I don't
think we should continue setting permissions if we had a failure midway
through. I'll look into this a bit more.
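Roughly like this, keeping the names from the patch (treating the failure
as an allocation error is my assumption):

bool_t pte_update = p2m_set_entry(d, pfn_to_paddr(pfn), a);

if ( !pte_update )
{
    rc = -ENOMEM;  /* assumed failure mode: no memory for the tables */
    break;         /* rc now reaches the caller instead of a stale 0 */
}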


>
>
>  Right, I missed page additions with default_access. With so few software
>> programmable bits available in the PTE, we have no other choice but to
>> store the permission settings separately. I just need to make sure that
>> the radix tree is updated when a pte is added/removed.
>>
>
> I'm not sure I fully understand your plan. Do you intend to add every
> page to the radix tree? If so, I'm a bit worried about the size of the
> radix tree. Xen will waste memory when mem_access is not used (IMHO, the
> feature is only used in a few cases).


Not for every PTE, but only for those whose access permission is not
p2m_access_rwx. This is what I currently have planned for v4:
https://github.com/tklengyel/xen/compare/arm_memaccess4?expand=1#diff-448087f66572e941e5aab286c05c8efaR481

I'm also thinking this function should attempt to delete the node from the
radix tree when the access being set is p2m_access_rwx, so as to remove
stale entries.
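In rough C, what I have in mind is the following (a sketch: the helper and
field names are placeholders, and a complete version would handle
radix_tree_insert() returning -EEXIST when updating an existing entry):

static int p2m_mem_access_radix_set(struct p2m_domain *p2m,
                                    unsigned long pfn, p2m_access_t a)
{
    if ( a == p2m_access_rwx )
    {
        /* Default permission: drop any stale node rather than store it. */
        radix_tree_delete(&p2m->mem_access_settings, pfn);
        return 0;
    }

    return radix_tree_insert(&p2m->mem_access_settings, pfn,
                             radix_tree_int_to_ptr(a));
}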


>
>          +        return -ESRCH;
>>         +
>>         +    index = radix_tree_ptr_to_int(i);
>>         +
>>         +    if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
>>         +        return -ERANGE;
>>
>>
>>     You are casting to unsigned all usage of index within this function.
>>     Why not directly define index as an "unsigned int"?
>>
>>
>> The radix tree returns an int but I guess could do that.
>>
>
> Which is fine because, IIRC, the compiler will implicitly cast the value
> to unsigned.
>

Yep, you are right, it does work as unsigned from the beginning.


> Regards,
>
> --
> Julien Grall
>

Thanks!
Tamas


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 15/15] tools/tests: Enable xen-access on ARM
  2014-09-03 20:27       ` Julien Grall
@ 2014-09-03 22:06         ` Tamas K Lengyel
  0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-03 22:06 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel



On Wed, Sep 3, 2014 at 10:27 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
>
> On 02/09/14 05:15, Tamas K Lengyel wrote:
>
>> On Mon, Sep 1, 2014 at 11:26 PM, Julien Grall <julien.grall@linaro.org>
>> wrote:
>>     On 01/09/14 10:22, Tamas K Lengyel wrote:
>>
>>         diff --git a/tools/tests/xen-access/Makefile
>>         b/tools/tests/xen-access/Makefile
>>         index 65eef99..698355c 100644
>>         --- a/tools/tests/xen-access/Makefile
>>         +++ b/tools/tests/xen-access/Makefile
>>
>>         @@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
>>            CFLAGS += $(CFLAGS_libxenguest)
>>            CFLAGS += $(CFLAGS_xeninclude)
>>
>>         -TARGETS-y :=
>>         -TARGETS-$(CONFIG_X86) += xen-access
>>         -TARGETS := $(TARGETS-y)
>>         +TARGETS := xen-access
>>
>>
>>     I would move the definition of HAS_MEM_ACCESS from arch/*/Rules.mk
>>     to config/*.mk and use the definition here to decide whether to
>>     build xen-access.
>>
>>
>> I'm not a fan of that approach. How about this:
>>
>
> What is the problem with this solution?


Honestly, I tried to wrap my head around what is going on in config/*.mk,
but it hasn't really clicked yet. It does look more complicated to have
HAS_MEM_ACCESS in config/*.mk than in arch/*/Rules.mk.


> xen-access should be compiled when Xen has HAS_MEM_ACCESS=y.
> As you enabled HAS_MEM_ACCESS by default when the architecture is
> supported, using TARGETS-$(HAS_MEM_ACCESS) += xen-access will make
> supporting a new architecture easier.
>
> Anyway, I will let the maintainers decide what is the best solution.
>

Of course, if that's the way to go I don't have any real arguments against
it. I could use some pointers on how to go about it, though.


> Regards,
>
> --
> Julien Grall
>

Thanks,
Tamas


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-03 21:56         ` Tamas K Lengyel
@ 2014-09-08 20:41           ` Julien Grall
  2014-09-09  9:20             ` Ian Campbell
  0 siblings, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-09-08 20:41 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel



On 03/09/14 14:56, Tamas K Lengyel wrote:
> I'm not entirely sure I could easily extend
> apply_one_level/apply_p2m_changes without significant changes and the
> potential to affect something else along the way. I can give it a try,
> though I'm a bit hesitant. Do you see an immediate performance reason
> to hook it into those functions instead of keeping it separate? The
> current function is pretty straightforward in what it does.

I would like to avoid standalone functions that manipulate the p2m. It
will be easier to fix bugs, and hence to maintain the code, if everything
goes through the same code path.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-08 20:41           ` Julien Grall
@ 2014-09-09  9:20             ` Ian Campbell
  2014-09-09 13:08               ` Tamas K Lengyel
  0 siblings, 1 reply; 48+ messages in thread
From: Ian Campbell @ 2014-09-09  9:20 UTC (permalink / raw)
  To: Julien Grall
  Cc: Tamas K Lengyel, Ian Jackson, Tim Deegan, Stefano Stabellini,
	Andres Lagar-Cavilla, Jan Beulich, Daniel De Graaf, xen-devel,
	Tamas K Lengyel

On Mon, 2014-09-08 at 13:41 -0700, Julien Grall wrote:
> 
> On 03/09/14 14:56, Tamas K Lengyel wrote:
> > I'm not entirely sure I could easily extend
> > apply_one_level/apply_p2m_changes without significant changes and the
> > potential to affect something else along the way. I can give it a try,
> > though I'm a bit hesitant. Do you see an immediate performance reason
> > to hook it into those functions instead of keeping it separate? The
> > current function is pretty straightforward in what it does.
> 
> I would like to avoid standalone functions that manipulate the p2m. It
> will be easier to fix bugs, and hence to maintain the code, if
> everything goes through the same code path.

Agreed, having two functions which do essentially the same thing (walk
the p2m and manipulate it) is not good from a maintenance point of view.
There would need to be a more compelling reason for this than a worry
about breaking the existing code.

Ian.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-09  9:20             ` Ian Campbell
@ 2014-09-09 13:08               ` Tamas K Lengyel
  0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-09-09 13:08 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel



On Tue, Sep 9, 2014 at 11:20 AM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Mon, 2014-09-08 at 13:41 -0700, Julien Grall wrote:
> >
> > On 03/09/14 14:56, Tamas K Lengyel wrote:
> > > I'm not entirely sure I could easily extend
> > > apply_one_level/apply_p2m_changes without significant changes and the
> > > potential to affect something else along the way. I can give it a try,
> > > though I'm a bit hesitant. Do you see an immediate performance reason
> > > to hook it into those functions instead of keeping it separate? The
> > > current function is pretty straightforward in what it does.
> >
> > I would like to avoid standalone functions that manipulate the p2m. It
> > will be easier to fix bugs, and hence to maintain the code, if
> > everything goes through the same code path.
>
> Agreed, having two functions which do essentially the same thing (walk
> the p2m and manipulate it) is not good from a maintenance point of view.
> There would need to be a more compelling reason for this than a worry
> about breaking the existing code.
>
> Ian.
>

Fair enough. I'll get it in for the next iteration.

Tamas


^ permalink raw reply	[flat|nested] 48+ messages in thread

end of thread, other threads:[~2014-09-09 13:08 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-01 14:21 [PATCH v3 00/15] Mem_event and mem_access for ARM Tamas K Lengyel
2014-09-01 14:21 ` [PATCH v3 01/15] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
2014-09-01 15:06   ` Jan Beulich
2014-09-01 15:15     ` Tamas K Lengyel
2014-09-01 14:21 ` [PATCH v3 02/15] xen: Relocate struct npfec definition " Tamas K Lengyel
2014-09-01 15:44   ` Jan Beulich
2014-09-01 14:21 ` [PATCH v3 03/15] xen: Relocate mem_event_op domctl and access_op memop " Tamas K Lengyel
2014-09-01 15:46   ` Jan Beulich
2014-09-01 16:25     ` Tamas K Lengyel
2014-09-02  6:30       ` Jan Beulich
2014-09-02  7:43         ` Tamas K Lengyel
2014-09-01 18:11   ` Julien Grall
2014-09-01 20:51     ` Tamas K Lengyel
2014-09-02  6:53       ` Jan Beulich
2014-09-02  7:41         ` Tamas K Lengyel
2014-09-01 14:21 ` [PATCH v3 04/15] xen/mem_event: Clean out superfluous white-spaces Tamas K Lengyel
2014-09-01 14:21 ` [PATCH v3 05/15] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
2014-09-01 15:47   ` Jan Beulich
2014-09-01 14:22 ` [PATCH v3 06/15] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
2014-09-01 14:22 ` [PATCH v3 07/15] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
2014-09-01 15:50   ` Jan Beulich
2014-09-01 14:22 ` [PATCH v3 08/15] xen/arm: p2m type definitions and changes Tamas K Lengyel
2014-09-01 14:22 ` [PATCH v3 09/15] xen/arm: Add set access required domctl Tamas K Lengyel
2014-09-01 19:10   ` Julien Grall
2014-09-02  7:48     ` Tamas K Lengyel
2014-09-02  8:17       ` Jan Beulich
2014-09-02  9:23         ` Tamas K Lengyel
2014-09-01 14:22 ` [PATCH v3 10/15] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
2014-09-01 21:07   ` Julien Grall
2014-09-02  9:06     ` Tamas K Lengyel
2014-09-03 20:20       ` Julien Grall
2014-09-03 21:56         ` Tamas K Lengyel
2014-09-08 20:41           ` Julien Grall
2014-09-09  9:20             ` Ian Campbell
2014-09-09 13:08               ` Tamas K Lengyel
2014-09-01 14:22 ` [PATCH v3 11/15] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
2014-09-01 14:22 ` [PATCH v3 12/15] xen/arm: Shatter large pages when using mem_acces Tamas K Lengyel
2014-09-01 14:22 ` [PATCH v3 13/15] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
2014-09-03 14:38   ` Daniel De Graaf
2014-09-01 14:22 ` [PATCH v3 14/15] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
2014-09-01 14:22 ` [PATCH v3 15/15] tools/tests: Enable xen-access " Tamas K Lengyel
2014-09-01 21:26   ` Julien Grall
2014-09-02  8:49     ` Tamas K Lengyel
2014-09-02 12:15     ` Tamas K Lengyel
2014-09-03 20:27       ` Julien Grall
2014-09-03 22:06         ` Tamas K Lengyel
2014-09-01 19:56 ` [PATCH v3 00/15] Mem_event and mem_access for ARM Julien Grall
2014-09-02  9:47   ` Tamas K Lengyel
