* [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM
@ 2014-09-15 14:02 Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 01/17] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
                   ` (16 more replies)
  0 siblings, 17 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

The ARM virtualization extensions provide 2-stage paging, a mechanism similar
to Intel's EPT, which can be used to trace the memory accesses performed by
guest systems. This series moves the mem_access and mem_event codebase
into Xen common code, performs some cleanup and an architecture-specific
division of components, then sets up the necessary infrastructure in the ARM
code to deliver events on R/W/X traps. Finally, it turns on the compilation of
mem_access and mem_event on ARM and makes the required changes on the tools side.
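
For reference, mem_event requests travel over a shared ring page driven by
the standard macros in public/io/ring.h. The following sketch is illustrative
only, loosely modeled on tools/tests/xen-access rather than taken from this
series; the type and macro names come from public/mem_event.h and
public/io/ring.h. It shows how a privileged listener pulls one request off
the ring:

    /* Sketch: consume one request from the mem_event back ring. */
    static void get_request(mem_event_back_ring_t *back_ring,
                            mem_event_request_t *req)
    {
        RING_IDX req_cons = back_ring->req_cons;

        /* Copy the request out of the shared page. */
        memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req));
        req_cons++;

        /* Publish the new consumer index so the producer can reuse the
         * slot and re-arm event notifications. */
        back_ring->req_cons = req_cons;
        back_ring->sring->req_event = req_cons + 1;
    }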

This version of the series has been fully tested and is functional on an
Arndale board.

This patch series is also available at:
https://github.com/tklengyel/xen/tree/arm_memaccess6

Tamas K Lengyel (17):
  xen: Relocate mem_access and mem_event into common.
  xen: Relocate p2m_mem_access_resume to mem_access common
  xen: Relocate struct npfec definition into common
  xen: Relocate mem_event_op domctl and access_op memop into common.
  x86/p2m: Typo fix for spelling ambiguous
  xen/mem_event: Clean out superfluous white-spaces
  xen/mem_event: Relax error condition on debug builds
  xen/mem_event: Abstract architecture specific sanity checks
  xen/mem_access: Abstract architecture specific sanity check
  xen/arm: p2m type definitions and changes
  xen/arm: Add set access required domctl
  xen/arm: Implement domain_get_maximum_gpfn
  xen/arm: Data abort exception (R/W) mem_events.
  xen/arm: Instruction prefetch abort (X) mem_event handling
  xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  tools/libxc: Allocate magic page for mem access on ARM
  tools/tests: Enable xen-access on ARM

 MAINTAINERS                         |   6 +
 config/arm32.mk                     |   1 +
 config/arm64.mk                     |   1 +
 config/x86_32.mk                    |   2 +
 config/x86_64.mk                    |   2 +
 tools/libxc/xc_dom_arm.c            |   6 +-
 tools/tests/xen-access/Makefile     |   9 +-
 tools/tests/xen-access/xen-access.c |  79 ++--
 xen/Rules.mk                        |   1 +
 xen/arch/arm/domctl.c               |  13 +
 xen/arch/arm/mm.c                   |   2 +-
 xen/arch/arm/p2m.c                  | 546 +++++++++++++++++++++++----
 xen/arch/arm/traps.c                |  67 +++-
 xen/arch/x86/domctl.c               |  10 +-
 xen/arch/x86/hvm/hvm.c              |  63 +---
 xen/arch/x86/mm/Makefile            |   2 -
 xen/arch/x86/mm/hap/nested_ept.c    |   6 +-
 xen/arch/x86/mm/hap/nested_hap.c    |   6 +-
 xen/arch/x86/mm/mem_access.c        | 133 -------
 xen/arch/x86/mm/mem_event.c         | 705 -----------------------------------
 xen/arch/x86/mm/mem_paging.c        |   2 +-
 xen/arch/x86/mm/mem_sharing.c       |   4 +-
 xen/arch/x86/mm/p2m-pod.c           |   8 +-
 xen/arch/x86/mm/p2m-pt.c            |  10 +-
 xen/arch/x86/mm/p2m.c               |  32 +-
 xen/arch/x86/x86_64/compat/mm.c     |   8 +-
 xen/arch/x86/x86_64/mm.c            |   8 +-
 xen/common/Makefile                 |   2 +
 xen/common/compat/memory.c          |   5 +
 xen/common/domain.c                 |   1 +
 xen/common/domctl.c                 |   7 +
 xen/common/mem_access.c             | 157 ++++++++
 xen/common/mem_event.c              | 723 ++++++++++++++++++++++++++++++++++++
 xen/common/memory.c                 |  72 +++-
 xen/include/asm-arm/mm.h            |   1 -
 xen/include/asm-arm/p2m.h           | 108 +++++-
 xen/include/asm-arm/processor.h     |  70 +++-
 xen/include/asm-x86/config.h        |   6 +
 xen/include/asm-x86/hvm/hvm.h       |   8 +-
 xen/include/asm-x86/mem_access.h    |  39 --
 xen/include/asm-x86/mem_event.h     |  82 ----
 xen/include/asm-x86/mm.h            |  23 --
 xen/include/asm-x86/p2m.h           |  18 +-
 xen/include/xen/mem_access.h        |  65 ++++
 xen/include/xen/mem_event.h         | 143 +++++++
 xen/include/xen/mm.h                |  27 ++
 xen/include/xsm/dummy.h             |  26 +-
 xen/include/xsm/xsm.h               |  29 +-
 xen/xsm/dummy.c                     |   7 +-
 xen/xsm/flask/hooks.c               |  33 +-
 50 files changed, 2099 insertions(+), 1285 deletions(-)
 delete mode 100644 xen/arch/x86/mm/mem_access.c
 delete mode 100644 xen/arch/x86/mm/mem_event.c
 create mode 100644 xen/common/mem_access.c
 create mode 100644 xen/common/mem_event.c
 delete mode 100644 xen/include/asm-x86/mem_access.h
 delete mode 100644 xen/include/asm-x86/mem_event.h
 create mode 100644 xen/include/xen/mem_access.h
 create mode 100644 xen/include/xen/mem_event.h

-- 
2.1.0


* [PATCH for-4.5 v6 01/17] xen: Relocate mem_access and mem_event into common.
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 02/17] xen: Relocate p2m_mem_access_resume to mem_access common Tamas K Lengyel
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

In preparation for adding ARM LPAE mem_event support, relocate mem_access,
mem_event and auxiliary functions into common Xen code.
This patch makes no functional changes on the x86 side; for ARM, the
mem_event and mem_access functions are only defined as placeholder stubs
here and are enabled later in the series.
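
Until those later patches, the ARM side only needs inert stubs to satisfy
the common-code interface. As a purely illustrative sketch (the real ARM
definitions arrive later in the series), such a stub can be as simple as:

    /* Hypothetical asm-arm stub: feature not wired up yet. */
    static inline long p2m_set_mem_access(struct domain *d, unsigned long pfn,
                                          uint32_t nr, uint32_t start,
                                          uint32_t mask, xenmem_access_t access)
    {
        return -ENOSYS;
    }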

Edits that are only header path adjustments:
   xen/arch/x86/domctl.c
   xen/arch/x86/mm/hap/nested_ept.c
   xen/arch/x86/mm/hap/nested_hap.c
   xen/arch/x86/mm/mem_paging.c
   xen/arch/x86/mm/mem_sharing.c
   xen/arch/x86/mm/p2m-pod.c
   xen/arch/x86/mm/p2m-pt.c
   xen/arch/x86/mm/p2m.c
   xen/arch/x86/x86_64/compat/mm.c
   xen/arch/x86/x86_64/mm.c

Makefile adjustments for new/removed code:
   xen/common/Makefile
   xen/arch/x86/mm/Makefile

Relocated prepare_ring_for_helper and destroy_ring_for_helper functions:
   xen/include/xen/mm.h
   xen/common/memory.c
   xen/include/asm-x86/hvm/hvm.h
   xen/arch/x86/hvm/hvm.c

Code movement of mem_event and mem_access:
    xen/arch/x86/mm/mem_access.c -> xen/common/mem_access.c
    xen/arch/x86/mm/mem_event.c -> xen/common/mem_event.c
    xen/include/asm-x86/mem_access.h -> xen/include/xen/mem_access.h
    xen/include/asm-x86/mem_event.h -> xen/include/xen/mem_event.h

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Tim Deegan <tim@xen.org>
---
v5: Make <xen/mem_event.h> include <xen/sched.h> by default.
    Style fix with grouping of #includes.

v4: Make <xen/mem_access.h> include <public/memory.h> by default.

v3: Replace asm/domain.h with xen/sched.h in mem_event.c to better
    accommodate the new code location.
    Replace #ifdef CONFIG_X86 wrappers with HAS_MEM_ACCESS flags.

v2: Update MAINTAINERS.
    More descriptive commit message to aid in the review process.
---
 MAINTAINERS                      |   6 +
 xen/Rules.mk                     |   1 +
 xen/arch/x86/Rules.mk            |   1 +
 xen/arch/x86/domctl.c            |   2 +-
 xen/arch/x86/hvm/hvm.c           |  63 +---
 xen/arch/x86/mm/Makefile         |   2 -
 xen/arch/x86/mm/hap/nested_ept.c |   6 +-
 xen/arch/x86/mm/hap/nested_hap.c |   6 +-
 xen/arch/x86/mm/mem_access.c     | 133 --------
 xen/arch/x86/mm/mem_event.c      | 705 ---------------------------------------
 xen/arch/x86/mm/mem_paging.c     |   2 +-
 xen/arch/x86/mm/mem_sharing.c    |   4 +-
 xen/arch/x86/mm/p2m-pod.c        |   8 +-
 xen/arch/x86/mm/p2m-pt.c         |  10 +-
 xen/arch/x86/mm/p2m.c            |   8 +-
 xen/arch/x86/x86_64/compat/mm.c  |   4 +-
 xen/arch/x86/x86_64/mm.c         |   4 +-
 xen/common/Makefile              |   2 +
 xen/common/domain.c              |   1 +
 xen/common/mem_access.c          | 133 ++++++++
 xen/common/mem_event.c           | 705 +++++++++++++++++++++++++++++++++++++++
 xen/common/memory.c              |  63 ++++
 xen/include/asm-arm/mm.h         |   1 -
 xen/include/asm-x86/config.h     |   3 +
 xen/include/asm-x86/hvm/hvm.h    |   6 -
 xen/include/asm-x86/mem_access.h |  39 ---
 xen/include/asm-x86/mem_event.h  |  82 -----
 xen/include/asm-x86/mm.h         |   2 -
 xen/include/xen/mem_access.h     |  60 ++++
 xen/include/xen/mem_event.h      | 143 ++++++++
 xen/include/xen/mm.h             |   6 +
 31 files changed, 1154 insertions(+), 1057 deletions(-)
 delete mode 100644 xen/arch/x86/mm/mem_access.c
 delete mode 100644 xen/arch/x86/mm/mem_event.c
 create mode 100644 xen/common/mem_access.c
 create mode 100644 xen/common/mem_event.c
 delete mode 100644 xen/include/asm-x86/mem_access.h
 delete mode 100644 xen/include/asm-x86/mem_event.h
 create mode 100644 xen/include/xen/mem_access.h
 create mode 100644 xen/include/xen/mem_event.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 266e47b..f659180 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -337,6 +337,12 @@ F:	xen/arch/x86/mm/mem_sharing.c
 F:	xen/arch/x86/mm/mem_paging.c
 F:	tools/memshr
 
+MEMORY EVENT AND ACCESS
+M:	Tim Deegan <tim@xen.org>
+S:	Supported
+F:	xen/common/mem_event.c
+F:	xen/common/mem_access.c
+
 XENTRACE
 M:	George Dunlap <george.dunlap@eu.citrix.com>
 S:	Supported
diff --git a/xen/Rules.mk b/xen/Rules.mk
index b49f3c8..dc15b09 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -57,6 +57,7 @@ CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
 CFLAGS-$(HAS_GDBSX)     += -DHAS_GDBSX
 CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
 CFLAGS-$(HAS_DEVICE_TREE) += -DHAS_DEVICE_TREE
+CFLAGS-$(HAS_MEM_ACCESS)  += -DHAS_MEM_ACCESS
 CFLAGS-$(HAS_PCI)       += -DHAS_PCI
 CFLAGS-$(HAS_IOPORTS)   += -DHAS_IOPORTS
 CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 576985e..bd4e342 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -12,6 +12,7 @@ HAS_NS16550 := y
 HAS_EHCI := y
 HAS_KEXEC := y
 HAS_GDBSX := y
+HAS_MEM_ACCESS := y
 xenoprof := y
 
 #
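
The two Rules.mk hunks above wire the feature flag through the build:
setting HAS_MEM_ACCESS := y in an architecture's Rules.mk makes the
CFLAGS-$(HAS_MEM_ACCESS) line in xen/Rules.mk emit -DHAS_MEM_ACCESS, and
the same flag selects the new objects in xen/common/Makefile via
obj-$(HAS_MEM_ACCESS). A minimal sketch of the preprocessor side (an
assumed usage pattern, not a hunk from this patch):

    #ifdef HAS_MEM_ACCESS
        rc = mem_access_memop(cmd, arg);    /* common implementation */
    #else
        rc = -ENOSYS;                       /* feature compiled out */
    #endif
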
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 7a5de43..26a3ea1 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -30,7 +30,7 @@
 #include <xen/hypercall.h> /* for arch_do_domctl */
 #include <xsm/xsm.h>
 #include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 #include <public/mem_event.h>
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8d905d3..7649d36 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -35,6 +35,9 @@
 #include <xen/paging.h>
 #include <xen/cpu.h>
 #include <xen/wait.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <xen/rangeset.h>
 #include <asm/shadow.h>
 #include <asm/hap.h>
 #include <asm/current.h>
@@ -63,10 +66,7 @@
 #include <public/hvm/ioreq.h>
 #include <public/version.h>
 #include <public/memory.h>
-#include <asm/mem_event.h>
-#include <asm/mem_access.h>
 #include <public/mem_event.h>
-#include <xen/rangeset.h>
 #include <public/arch-x86/cpuid.h>
 
 bool_t __read_mostly hvm_enabled;
@@ -484,19 +484,6 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
     clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
 }
 
-void destroy_ring_for_helper(
-    void **_va, struct page_info *page)
-{
-    void *va = *_va;
-
-    if ( va != NULL )
-    {
-        unmap_domain_page_global(va);
-        put_page_and_type(page);
-        *_va = NULL;
-    }
-}
-
 static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -504,50 +491,6 @@ static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
     destroy_ring_for_helper(&iorp->va, iorp->page);
 }
 
-int prepare_ring_for_helper(
-    struct domain *d, unsigned long gmfn, struct page_info **_page,
-    void **_va)
-{
-    struct page_info *page;
-    p2m_type_t p2mt;
-    void *va;
-
-    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
-    if ( p2m_is_paging(p2mt) )
-    {
-        if ( page )
-            put_page(page);
-        p2m_mem_paging_populate(d, gmfn);
-        return -ENOENT;
-    }
-    if ( p2m_is_shared(p2mt) )
-    {
-        if ( page )
-            put_page(page);
-        return -ENOENT;
-    }
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    va = __map_domain_page_global(page);
-    if ( va == NULL )
-    {
-        put_page_and_type(page);
-        return -ENOMEM;
-    }
-
-    *_va = va;
-    *_page = page;
-
-    return 0;
-}
-
 static int hvm_map_ioreq_page(
     struct hvm_ioreq_server *s, bool_t buf, unsigned long gmfn)
 {
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 73dcdf4..ed4b1f8 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -6,10 +6,8 @@ obj-y += p2m.o p2m-pt.o p2m-ept.o p2m-pod.o
 obj-y += guest_walk_2.o
 obj-y += guest_walk_3.o
 obj-$(x86_64) += guest_walk_4.o
-obj-$(x86_64) += mem_event.o
 obj-$(x86_64) += mem_paging.o
 obj-$(x86_64) += mem_sharing.o
-obj-$(x86_64) += mem_access.o
 
 guest_walk_%.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 0d044bc..cbbc4e9 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -17,14 +17,14 @@
  * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
  * Place - Suite 330, Boston, MA 02111-1307 USA.
  */
+#include <xen/mem_event.h>
+#include <xen/event.h>
+#include <public/mem_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <public/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <xen/event.h>
 #include <asm/hap.h>
 #include <asm/hvm/support.h>
 
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 137a87c..a4bb835 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -19,14 +19,14 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
+#include <xen/mem_event.h>
+#include <xen/event.h>
+#include <public/mem_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <public/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <xen/event.h>
 #include <asm/hap.h>
 #include <asm/hvm/support.h>
 
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
deleted file mode 100644
index e8465a5..0000000
--- a/xen/arch/x86/mm/mem_access.c
+++ /dev/null
@@ -1,133 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_access.c
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#include <xen/sched.h>
-#include <xen/guest_access.h>
-#include <xen/hypercall.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <xsm/xsm.h>
-
-
-int mem_access_memop(unsigned long cmd,
-                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
-{
-    long rc;
-    xen_mem_access_op_t mao;
-    struct domain *d;
-
-    if ( copy_from_guest(&mao, arg, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
-    if ( rc )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
-    if ( rc )
-        goto out;
-
-    rc = -ENODEV;
-    if ( unlikely(!d->mem_event->access.ring_page) )
-        goto out;
-
-    switch ( mao.op )
-    {
-    case XENMEM_access_op_resume:
-        p2m_mem_access_resume(d);
-        rc = 0;
-        break;
-
-    case XENMEM_access_op_set_access:
-    {
-        unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
-
-        rc = -EINVAL;
-        if ( (mao.pfn != ~0ull) &&
-             (mao.nr < start_iter ||
-              ((mao.pfn + mao.nr - 1) < mao.pfn) ||
-              ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
-            break;
-
-        rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
-                                MEMOP_CMD_MASK, mao.access);
-        if ( rc > 0 )
-        {
-            ASSERT(!(rc & MEMOP_CMD_MASK));
-            rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
-                                               XENMEM_access_op | rc, arg);
-        }
-        break;
-    }
-
-    case XENMEM_access_op_get_access:
-    {
-        xenmem_access_t access;
-
-        rc = -EINVAL;
-        if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
-            break;
-
-        rc = p2m_get_mem_access(d, mao.pfn, &access);
-        if ( rc != 0 )
-            break;
-
-        mao.access = access;
-        rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
-
-        break;
-    }
-
-    default:
-        rc = -ENOSYS;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
-{
-    int rc = mem_event_claim_slot(d, &d->mem_event->access);
-    if ( rc < 0 )
-        return rc;
-
-    mem_event_put_request(d, &d->mem_event->access, req);
-
-    return 0;
-} 
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
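
A note on the preemption logic moved in this file: XENMEM_access_op
stores its restart point in the memop command bits above MEMOP_CMD_MASK,
so set_access masks those bits off on entry and ORs the progress value
back in when the operation has to be continued. Schematically (both
lines appear in the code above; they are shown together here for clarity):

    /* On (re-)entry: the low bits select the op, the rest is the page
     * offset to resume from (a multiple of MEMOP_CMD_MASK + 1). */
    unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;

    /* On preemption: p2m_set_mem_access() returns progress with the low
     * command bits clear (the ASSERT checks this), so ORing is safe. */
    rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
                                       XENMEM_access_op | rc, arg);
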
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
deleted file mode 100644
index ba7e71e..0000000
--- a/xen/arch/x86/mm/mem_event.c
+++ /dev/null
@@ -1,705 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_event.c
- *
- * Memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#include <asm/domain.h>
-#include <xen/event.h>
-#include <xen/wait.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <asm/mem_paging.h>
-#include <asm/mem_access.h>
-#include <asm/mem_sharing.h>
-#include <xsm/xsm.h>
-
-/* for public/io/ring.h macros */
-#define xen_mb()   mb()
-#define xen_rmb()  rmb()
-#define xen_wmb()  wmb()
-
-#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
-#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
-#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
-
-static int mem_event_enable(
-    struct domain *d,
-    xen_domctl_mem_event_op_t *mec,
-    struct mem_event_domain *med,
-    int pause_flag,
-    int param,
-    xen_event_channel_notification_t notification_fn)
-{
-    int rc;
-    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
-
-    /* Only one helper at a time. If the helper crashed,
-     * the ring is in an undefined state and so is the guest.
-     */
-    if ( med->ring_page )
-        return -EBUSY;
-
-    /* The parameter defaults to zero, and it should be 
-     * set to something */
-    if ( ring_gfn == 0 )
-        return -ENOSYS;
-
-    mem_event_ring_lock_init(med);
-    mem_event_ring_lock(med);
-
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
-                                    &med->ring_page);
-    if ( rc < 0 )
-        goto err;
-
-    /* Set the number of currently blocked vCPUs to 0. */
-    med->blocked = 0;
-
-    /* Allocate event channel */
-    rc = alloc_unbound_xen_event_channel(d->vcpu[0],
-                                         current->domain->domain_id,
-                                         notification_fn);
-    if ( rc < 0 )
-        goto err;
-
-    med->xen_port = mec->port = rc;
-
-    /* Prepare ring buffer */
-    FRONT_RING_INIT(&med->front_ring,
-                    (mem_event_sring_t *)med->ring_page,
-                    PAGE_SIZE);
-
-    /* Save the pause flag for this particular ring. */
-    med->pause_flag = pause_flag;
-
-    /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&med->wq);
-
-    mem_event_ring_unlock(med);
-    return 0;
-
- err:
-    destroy_ring_for_helper(&med->ring_page, 
-                            med->ring_pg_struct);
-    mem_event_ring_unlock(med);
-
-    return rc;
-}
-
-static unsigned int mem_event_ring_available(struct mem_event_domain *med)
-{
-    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
-    avail_req -= med->target_producers;
-    avail_req -= med->foreign_producers;
-
-    BUG_ON(avail_req < 0);
-
-    return avail_req;
-}
-
-/*
- * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
- * ring. These vCPUs were paused on their way out after placing an event,
- * but need to be resumed where the ring is capable of processing at least
- * one event from them.
- */
-static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
-{
-    struct vcpu *v;
-    int online = d->max_vcpus;
-    unsigned int avail_req = mem_event_ring_available(med);
-
-    if ( avail_req == 0 || med->blocked == 0 )
-        return;
-
-    /*
-     * We ensure that we only have vCPUs online if there are enough free slots
-     * for their memory events to be processed.  This will ensure that no
-     * memory events are lost (due to the fact that certain types of events
-     * cannot be replayed, we need to ensure that there is space in the ring
-     * for when they are hit).
-     * See comment below in mem_event_put_request().
-     */
-    for_each_vcpu ( d, v )
-        if ( test_bit(med->pause_flag, &v->pause_flags) )
-            online--;
-
-    ASSERT(online == (d->max_vcpus - med->blocked));
-
-    /* We remember which vcpu last woke up to avoid scanning always linearly
-     * from zero and starving higher-numbered vcpus under high load */
-    if ( d->vcpu )
-    {
-        int i, j, k;
-
-        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
-        {
-            k = i % d->max_vcpus;
-            v = d->vcpu[k];
-            if ( !v )
-                continue;
-
-            if ( !(med->blocked) || online >= avail_req )
-               break;
-
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                online++;
-                med->blocked--;
-                med->last_vcpu_wake_up = k;
-            }
-        }
-    }
-}
-
-/*
- * In the event that a vCPU attempted to place an event in the ring and
- * was unable to do so, it is queued on a wait queue.  These are woken as
- * needed, and take precedence over the blocked vCPUs.
- */
-static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
-{
-    unsigned int avail_req = mem_event_ring_available(med);
-
-    if ( avail_req > 0 )
-        wake_up_nr(&med->wq, avail_req);
-}
-
-/*
- * mem_event_wake() will wakeup all vcpus waiting for the ring to
- * become available.  If we have queued vCPUs, they get top priority. We
- * are guaranteed that they will go through code paths that will eventually
- * call mem_event_wake() again, ensuring that any blocked vCPUs will get
- * unpaused once all the queued vCPUs have made it through.
- */
-void mem_event_wake(struct domain *d, struct mem_event_domain *med)
-{
-    if (!list_empty(&med->wq.list))
-        mem_event_wake_queued(d, med);
-    else
-        mem_event_wake_blocked(d, med);
-}
-
-static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
-{
-    if ( med->ring_page )
-    {
-        struct vcpu *v;
-
-        mem_event_ring_lock(med);
-
-        if ( !list_empty(&med->wq.list) )
-        {
-            mem_event_ring_unlock(med);
-            return -EBUSY;
-        }
-
-        /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d->vcpu[0], med->xen_port);
-
-        /* Unblock all vCPUs */
-        for_each_vcpu ( d, v )
-        {
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                med->blocked--;
-            }
-        }
-
-        destroy_ring_for_helper(&med->ring_page, 
-                                med->ring_pg_struct);
-        mem_event_ring_unlock(med);
-    }
-
-    return 0;
-}
-
-static inline void mem_event_release_slot(struct domain *d,
-                                          struct mem_event_domain *med)
-{
-    /* Update the accounting */
-    if ( current->domain == d )
-        med->target_producers--;
-    else
-        med->foreign_producers--;
-
-    /* Kick any waiters */
-    mem_event_wake(d, med);
-}
-
-/*
- * mem_event_mark_and_pause() tags vcpu and put it to sleep.
- * The vcpu will resume execution in mem_event_wake_waiters().
- */
-void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
-{
-    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
-    {
-        vcpu_pause_nosync(v);
-        med->blocked++;
-    }
-}
-
-/*
- * This must be preceded by a call to claim_slot(), and is guaranteed to
- * succeed.  As a side-effect however, the vCPU may be paused if the ring is
- * overly full and its continued execution would cause stalling and excessive
- * waiting.  The vCPU will be automatically unpaused when the ring clears.
- */
-void mem_event_put_request(struct domain *d,
-                           struct mem_event_domain *med,
-                           mem_event_request_t *req)
-{
-    mem_event_front_ring_t *front_ring;
-    int free_req;
-    unsigned int avail_req;
-    RING_IDX req_prod;
-
-    if ( current->domain != d )
-    {
-        req->flags |= MEM_EVENT_FLAG_FOREIGN;
-        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
-    }
-
-    mem_event_ring_lock(med);
-
-    /* Due to the reservations, this step must succeed. */
-    front_ring = &med->front_ring;
-    free_req = RING_FREE_REQUESTS(front_ring);
-    ASSERT(free_req > 0);
-
-    /* Copy request */
-    req_prod = front_ring->req_prod_pvt;
-    memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
-    req_prod++;
-
-    /* Update ring */
-    front_ring->req_prod_pvt = req_prod;
-    RING_PUSH_REQUESTS(front_ring);
-
-    /* We've actually *used* our reservation, so release the slot. */
-    mem_event_release_slot(d, med);
-
-    /* Give this vCPU a black eye if necessary, on the way out.
-     * See the comments above wake_blocked() for more information
-     * on how this mechanism works to avoid waiting. */
-    avail_req = mem_event_ring_available(med);
-    if( current->domain == d && avail_req < d->max_vcpus )
-        mem_event_mark_and_pause(current, med);
-
-    mem_event_ring_unlock(med);
-
-    notify_via_xen_event_channel(d, med->xen_port);
-}
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
-{
-    mem_event_front_ring_t *front_ring;
-    RING_IDX rsp_cons;
-
-    mem_event_ring_lock(med);
-
-    front_ring = &med->front_ring;
-    rsp_cons = front_ring->rsp_cons;
-
-    if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
-    {
-        mem_event_ring_unlock(med);
-        return 0;
-    }
-
-    /* Copy response */
-    memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
-    rsp_cons++;
-
-    /* Update ring */
-    front_ring->rsp_cons = rsp_cons;
-    front_ring->sring->rsp_event = rsp_cons + 1;
-
-    /* Kick any waiters -- since we've just consumed an event,
-     * there may be additional space available in the ring. */
-    mem_event_wake(d, med);
-
-    mem_event_ring_unlock(med);
-
-    return 1;
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
-{
-    mem_event_ring_lock(med);
-    mem_event_release_slot(d, med);
-    mem_event_ring_unlock(med);
-}
-
-static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
-{
-    unsigned int avail_req;
-
-    if ( !med->ring_page )
-        return -ENOSYS;
-
-    mem_event_ring_lock(med);
-
-    avail_req = mem_event_ring_available(med);
-    if ( avail_req == 0 )
-    {
-        mem_event_ring_unlock(med);
-        return -EBUSY;
-    }
-
-    if ( !foreign )
-        med->target_producers++;
-    else
-        med->foreign_producers++;
-
-    mem_event_ring_unlock(med);
-
-    return 0;
-}
-
-/* Simple try_grab wrapper for use in the wait_event() macro. */
-static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
-{
-    *rc = mem_event_grab_slot(med, 0);
-    return *rc;
-}
-
-/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
-static int mem_event_wait_slot(struct mem_event_domain *med)
-{
-    int rc = -EBUSY;
-    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
-    return rc;
-}
-
-bool_t mem_event_check_ring(struct mem_event_domain *med)
-{
-    return (med->ring_page != NULL);
-}
-
-/*
- * Determines whether or not the current vCPU belongs to the target domain,
- * and calls the appropriate wait function.  If it is a guest vCPU, then we
- * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
- * this function will always return 0 for a guest.  For a non-guest, we check
- * for space and return -EBUSY if the ring is not available.
- *
- * Return codes: -ENOSYS: the ring is not yet configured
- *               -EBUSY: the ring is busy
- *               0: a spot has been reserved
- *
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep)
-{
-    if ( (current->domain == d) && allow_sleep )
-        return mem_event_wait_slot(med);
-    else
-        return mem_event_grab_slot(med, (current->domain != d));
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_paging_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
-        p2m_mem_paging_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_access_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
-        p2m_mem_access_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_sharing_notification(struct vcpu *v, unsigned int port)
-{
-    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
-        mem_sharing_sharing_resume(v->domain);
-}
-
-int do_mem_event_op(int op, uint32_t domain, void *arg)
-{
-    int ret;
-    struct domain *d;
-
-    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
-    if ( ret )
-        return ret;
-
-    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
-    if ( ret )
-        goto out;
-
-    switch (op)
-    {
-        case XENMEM_paging_op:
-            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
-            break;
-        case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
-            break;
-        default:
-            ret = -ENOSYS;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return ret;
-}
-
-/* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d)
-{
-    if ( d->mem_event->paging.ring_page ) {
-        /* Destroying the wait queue head means waking up all
-         * queued vcpus. This will drain the list, allowing
-         * the disable routine to complete. It will also drop
-         * all domain refs the wait-queued vcpus are holding.
-         * Finally, because this code path involves previously
-         * pausing the domain (domain_kill), unpausing the 
-         * vcpus causes no harm. */
-        destroy_waitqueue_head(&d->mem_event->paging.wq);
-        (void)mem_event_disable(d, &d->mem_event->paging);
-    }
-    if ( d->mem_event->access.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->access.wq);
-        (void)mem_event_disable(d, &d->mem_event->access);
-    }
-    if ( d->mem_event->share.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->share.wq);
-        (void)mem_event_disable(d, &d->mem_event->share);
-    }
-}
-
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
-    int rc;
-
-    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
-    if ( rc )
-        return rc;
-
-    if ( unlikely(d == current->domain) )
-    {
-        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
-        return -EINVAL;
-    }
-
-    if ( unlikely(d->is_dying) )
-    {
-        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
-                 d->domain_id);
-        return 0;
-    }
-
-    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
-    {
-        gdprintk(XENLOG_INFO,
-                 "Memory event op on a domain (%u) with no vcpus\n",
-                 d->domain_id);
-        return -EINVAL;
-    }
-
-    rc = -ENOSYS;
-
-    switch ( mec->mode )
-    {
-    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
-    {
-        struct mem_event_domain *med = &d->mem_event->paging;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
-        {
-            struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-            rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
-                break;
-
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* No paging if iommu is used */
-            rc = -EMLINK;
-            if ( unlikely(need_iommu(d)) )
-                break;
-
-            rc = -EXDEV;
-            /* Disallow paging in a PoD guest */
-            if ( p2m->pod.entry_count )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
-                                    HVM_PARAM_PAGING_RING_PFN,
-                                    mem_paging_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
-    {
-        struct mem_event_domain *med = &d->mem_event->access;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
-        {
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* Currently only EPT is supported */
-            if ( !cpu_has_vmx )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_access, 
-                                    HVM_PARAM_ACCESS_RING_PFN,
-                                    mem_access_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
-    {
-        struct mem_event_domain *med = &d->mem_event->share;
-        rc = -EINVAL;
-
-        switch( mec->op )
-        {
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
-        {
-            rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
-                break;
-
-            rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
-                                    HVM_PARAM_SHARING_RING_PFN,
-                                    mem_sharing_notification);
-        }
-        break;
-
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
-        {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
-        }
-        break;
-
-        default:
-            rc = -ENOSYS;
-            break;
-        }
-    }
-    break;
-
-    default:
-        rc = -ENOSYS;
-    }
-
-    return rc;
-}
-
-void mem_event_vcpu_pause(struct vcpu *v)
-{
-    ASSERT(v == current);
-
-    atomic_inc(&v->mem_event_pause_count);
-    vcpu_pause_nosync(v);
-}
-
-void mem_event_vcpu_unpause(struct vcpu *v)
-{
-    int old, new, prev = v->mem_event_pause_count.counter;
-
-    /* All unpause requests as a result of toolstack responses.  Prevent
-     * underflow of the vcpu pause count. */
-    do
-    {
-        old = prev;
-        new = old - 1;
-
-        if ( new < 0 )
-        {
-            printk(XENLOG_G_WARNING
-                   "%pv mem_event: Too many unpause attempts\n", v);
-            return;
-        }
-
-        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
-    } while ( prev != old );
-
-    vcpu_unpause(v);
-}
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 235776d..65f6a3d 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -22,7 +22,7 @@
 
 
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
 
 
 int mem_paging_memop(struct domain *d, xen_mem_event_op_t *mec)
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 79188b9..7c0fc7d 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -27,12 +27,12 @@
 #include <xen/mm.h>
 #include <xen/grant_table.h>
 #include <xen/sched.h>
+#include <xen/rcupdate.h>
+#include <xen/mem_event.h>
 #include <asm/page.h>
 #include <asm/string.h>
 #include <asm/p2m.h>
-#include <asm/mem_event.h>
 #include <asm/atomic.h>
-#include <xen/rcupdate.h>
 #include <asm/event.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index bd4c7c8..43f507c 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -20,16 +20,16 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
+#include <xen/iommu.h>
+#include <xen/mem_event.h>
+#include <xen/event.h>
+#include <public/mem_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
 #include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
-#include <xen/iommu.h>
-#include <asm/mem_event.h>
-#include <public/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <xen/event.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 085ab6f..e48b63a 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -25,16 +25,16 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
+#include <xen/iommu.h>
+#include <xen/mem_event.h>
+#include <xen/event.h>
+#include <xen/trace.h>
+#include <public/mem_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <xen/iommu.h>
-#include <asm/mem_event.h>
-#include <public/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <xen/event.h>
-#include <xen/trace.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 32776c3..a9f120a 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -24,16 +24,16 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
+#include <xen/iommu.h>
+#include <xen/mem_event.h>
+#include <xen/event.h>
+#include <public/mem_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
 #include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
-#include <xen/iommu.h>
-#include <asm/mem_event.h>
-#include <public/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <xen/event.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
 #include <xsm/xsm.h>
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 69c6195..c079702 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -1,10 +1,10 @@
 #include <xen/event.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
 #include <xen/multicall.h>
 #include <compat/memory.h>
 #include <compat/xen.h>
-#include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
 
 int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 4937f9a..0da6ddc 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -26,6 +26,8 @@
 #include <xen/nodemask.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
 #include <asm/current.h>
 #include <asm/asm_defns.h>
 #include <asm/page.h>
@@ -35,9 +37,7 @@
 #include <asm/msr.h>
 #include <asm/setup.h>
 #include <asm/numa.h>
-#include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
 #include <public/memory.h>
 
 /* Parameters for PFN/MADDR compression. */
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 3683ae3..b9f3387 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -51,6 +51,8 @@ obj-y += tmem_xen.o
 obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += lzo.o
+obj-$(HAS_MEM_ACCESS) += mem_access.o
+obj-$(HAS_MEM_ACCESS) += mem_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 62514b0..134bed6 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -15,6 +15,7 @@
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
+#include <xen/mem_event.h>
 #include <xen/time.h>
 #include <xen/console.h>
 #include <xen/softirq.h>
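
On the common Makefile hunk above: obj-$(HAS_MEM_ACCESS) is the usual
conditional-build idiom. With HAS_MEM_ACCESS := y the lines expand to
obj-y and the objects are built and linked; with the flag unset they
expand to the inert "obj-" list and are silently dropped:

    # HAS_MEM_ACCESS := y     ->  obj-y += mem_access.o mem_event.o
    # HAS_MEM_ACCESS unset    ->  obj-  += mem_access.o mem_event.o (ignored)
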
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
new file mode 100644
index 0000000..9a8c1a9
--- /dev/null
+++ b/xen/common/mem_access.c
@@ -0,0 +1,133 @@
+/******************************************************************************
+ * mem_access.c
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#include <xen/sched.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/mem_event.h>
+#include <public/memory.h>
+#include <asm/p2m.h>
+#include <xsm/xsm.h>
+
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+    long rc;
+    xen_mem_access_op_t mao;
+    struct domain *d;
+
+    if ( copy_from_guest(&mao, arg, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
+    if ( rc )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    if ( rc )
+        goto out;
+
+    rc = -ENODEV;
+    if ( unlikely(!d->mem_event->access.ring_page) )
+        goto out;
+
+    switch ( mao.op )
+    {
+    case XENMEM_access_op_resume:
+        p2m_mem_access_resume(d);
+        rc = 0;
+        break;
+
+    case XENMEM_access_op_set_access:
+    {
+        unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
+
+        rc = -EINVAL;
+        if ( (mao.pfn != ~0ull) &&
+             (mao.nr < start_iter ||
+              ((mao.pfn + mao.nr - 1) < mao.pfn) ||
+              ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
+            break;
+
+        rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
+                                MEMOP_CMD_MASK, mao.access);
+        if ( rc > 0 )
+        {
+            ASSERT(!(rc & MEMOP_CMD_MASK));
+            rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
+                                               XENMEM_access_op | rc, arg);
+        }
+        break;
+    }
+
+    case XENMEM_access_op_get_access:
+    {
+        xenmem_access_t access;
+
+        rc = -EINVAL;
+        if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
+            break;
+
+        rc = p2m_get_mem_access(d, mao.pfn, &access);
+        if ( rc != 0 )
+            break;
+
+        mao.access = access;
+        rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
+
+        break;
+    }
+
+    default:
+        rc = -ENOSYS;
+        break;
+    }
+
+ out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+    int rc = mem_event_claim_slot(d, &d->mem_event->access);
+    if ( rc < 0 )
+        return rc;
+
+    mem_event_put_request(d, &d->mem_event->access, req);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
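
For context, the structure mem_access_memop() parses above is
xen_mem_access_op_t from public/memory.h. A hedged caller-side sketch
(the field names are real; target_domid, gfn and nr_frames are assumed
variables, and the libxc wrapper that actually issues the memory_op is
omitted):

    xen_mem_access_op_t mao = {
        .op     = XENMEM_access_op_set_access,
        .domid  = target_domid,
        .pfn    = gfn,               /* first frame; ~0ull sets the default */
        .nr     = nr_frames,
        .access = XENMEM_access_rx,  /* r/x allowed, so writes raise events */
    };
    /* Handed to the hypervisor as XENMEM_access_op via memory_op. */
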
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
new file mode 100644
index 0000000..336b638
--- /dev/null
+++ b/xen/common/mem_event.c
@@ -0,0 +1,705 @@
+/******************************************************************************
+ * mem_event.c
+ *
+ * Memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#include <xen/sched.h>
+#include <xen/event.h>
+#include <xen/wait.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <asm/p2m.h>
+#include <asm/mem_paging.h>
+#include <asm/mem_sharing.h>
+#include <xsm/xsm.h>
+
+/* for public/io/ring.h macros */
+#define xen_mb()   mb()
+#define xen_rmb()  rmb()
+#define xen_wmb()  wmb()
+
+#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
+#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
+#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
+
+static int mem_event_enable(
+    struct domain *d,
+    xen_domctl_mem_event_op_t *mec,
+    struct mem_event_domain *med,
+    int pause_flag,
+    int param,
+    xen_event_channel_notification_t notification_fn)
+{
+    int rc;
+    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
+
+    /* Only one helper at a time. If the helper crashed,
+     * the ring is in an undefined state and so is the guest.
+     */
+    if ( med->ring_page )
+        return -EBUSY;
+
+    /* The parameter defaults to zero, and it should be 
+     * set to something */
+    if ( ring_gfn == 0 )
+        return -ENOSYS;
+
+    mem_event_ring_lock_init(med);
+    mem_event_ring_lock(med);
+
+    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
+                                    &med->ring_page);
+    if ( rc < 0 )
+        goto err;
+
+    /* Set the number of currently blocked vCPUs to 0. */
+    med->blocked = 0;
+
+    /* Allocate event channel */
+    rc = alloc_unbound_xen_event_channel(d->vcpu[0],
+                                         current->domain->domain_id,
+                                         notification_fn);
+    if ( rc < 0 )
+        goto err;
+
+    med->xen_port = mec->port = rc;
+
+    /* Prepare ring buffer */
+    FRONT_RING_INIT(&med->front_ring,
+                    (mem_event_sring_t *)med->ring_page,
+                    PAGE_SIZE);
+
+    /* Save the pause flag for this particular ring. */
+    med->pause_flag = pause_flag;
+
+    /* Initialize the last-chance wait queue. */
+    init_waitqueue_head(&med->wq);
+
+    mem_event_ring_unlock(med);
+    return 0;
+
+ err:
+    destroy_ring_for_helper(&med->ring_page, 
+                            med->ring_pg_struct);
+    mem_event_ring_unlock(med);
+
+    return rc;
+}
+
+static unsigned int mem_event_ring_available(struct mem_event_domain *med)
+{
+    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
+    avail_req -= med->target_producers;
+    avail_req -= med->foreign_producers;
+
+    BUG_ON(avail_req < 0);
+
+    return avail_req;
+}
+
+/*
+ * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
+ * ring. These vCPUs were paused on their way out after placing an event,
+ * but need to be resumed where the ring is capable of processing at least
+ * one event from them.
+ */
+static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
+{
+    struct vcpu *v;
+    int online = d->max_vcpus;
+    unsigned int avail_req = mem_event_ring_available(med);
+
+    if ( avail_req == 0 || med->blocked == 0 )
+        return;
+
+    /*
+     * We ensure that we only have vCPUs online if there are enough free slots
+     * for their memory events to be processed.  This will ensure that no
+     * memory events are lost (due to the fact that certain types of events
+     * cannot be replayed, we need to ensure that there is space in the ring
+     * for when they are hit).
+     * See comment below in mem_event_put_request().
+     */
+    for_each_vcpu ( d, v )
+        if ( test_bit(med->pause_flag, &v->pause_flags) )
+            online--;
+
+    ASSERT(online == (d->max_vcpus - med->blocked));
+
+    /* We remember which vcpu last woke up to avoid scanning always linearly
+     * from zero and starving higher-numbered vcpus under high load */
+    if ( d->vcpu )
+    {
+        int i, j, k;
+
+        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
+        {
+            k = i % d->max_vcpus;
+            v = d->vcpu[k];
+            if ( !v )
+                continue;
+
+            if ( !(med->blocked) || online >= avail_req )
+               break;
+
+            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            {
+                vcpu_unpause(v);
+                online++;
+                med->blocked--;
+                med->last_vcpu_wake_up = k;
+            }
+        }
+    }
+}
+
+/*
+ * In the event that a vCPU attempted to place an event in the ring and
+ * was unable to do so, it is queued on a wait queue.  These are woken as
+ * needed, and take precedence over the blocked vCPUs.
+ */
+static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
+{
+    unsigned int avail_req = mem_event_ring_available(med);
+
+    if ( avail_req > 0 )
+        wake_up_nr(&med->wq, avail_req);
+}
+
+/*
+ * mem_event_wake() will wakeup all vcpus waiting for the ring to
+ * become available.  If we have queued vCPUs, they get top priority. We
+ * are guaranteed that they will go through code paths that will eventually
+ * call mem_event_wake() again, ensuring that any blocked vCPUs will get
+ * unpaused once all the queued vCPUs have made it through.
+ */
+void mem_event_wake(struct domain *d, struct mem_event_domain *med)
+{
+    if (!list_empty(&med->wq.list))
+        mem_event_wake_queued(d, med);
+    else
+        mem_event_wake_blocked(d, med);
+}
+
+static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
+{
+    if ( med->ring_page )
+    {
+        struct vcpu *v;
+
+        mem_event_ring_lock(med);
+
+        if ( !list_empty(&med->wq.list) )
+        {
+            mem_event_ring_unlock(med);
+            return -EBUSY;
+        }
+
+        /* Free domU's event channel and leave the other one unbound */
+        free_xen_event_channel(d->vcpu[0], med->xen_port);
+
+        /* Unblock all vCPUs */
+        for_each_vcpu ( d, v )
+        {
+            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            {
+                vcpu_unpause(v);
+                med->blocked--;
+            }
+        }
+
+        destroy_ring_for_helper(&med->ring_page, 
+                                med->ring_pg_struct);
+        mem_event_ring_unlock(med);
+    }
+
+    return 0;
+}
+
+static inline void mem_event_release_slot(struct domain *d,
+                                          struct mem_event_domain *med)
+{
+    /* Update the accounting */
+    if ( current->domain == d )
+        med->target_producers--;
+    else
+        med->foreign_producers--;
+
+    /* Kick any waiters */
+    mem_event_wake(d, med);
+}
+
+/*
+ * mem_event_mark_and_pause() tags vcpu and put it to sleep.
+ * The vcpu will resume execution in mem_event_wake_waiters().
+ */
+void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
+{
+    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
+    {
+        vcpu_pause_nosync(v);
+        med->blocked++;
+    }
+}
+
+/*
+ * This must be preceded by a call to claim_slot(), and is guaranteed to
+ * succeed.  As a side-effect however, the vCPU may be paused if the ring is
+ * overly full and its continued execution would cause stalling and excessive
+ * waiting.  The vCPU will be automatically unpaused when the ring clears.
+ */
+void mem_event_put_request(struct domain *d,
+                           struct mem_event_domain *med,
+                           mem_event_request_t *req)
+{
+    mem_event_front_ring_t *front_ring;
+    int free_req;
+    unsigned int avail_req;
+    RING_IDX req_prod;
+
+    if ( current->domain != d )
+    {
+        req->flags |= MEM_EVENT_FLAG_FOREIGN;
+        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+    }
+
+    mem_event_ring_lock(med);
+
+    /* Due to the reservations, this step must succeed. */
+    front_ring = &med->front_ring;
+    free_req = RING_FREE_REQUESTS(front_ring);
+    ASSERT(free_req > 0);
+
+    /* Copy request */
+    req_prod = front_ring->req_prod_pvt;
+    memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
+    req_prod++;
+
+    /* Update ring */
+    front_ring->req_prod_pvt = req_prod;
+    RING_PUSH_REQUESTS(front_ring);
+
+    /* We've actually *used* our reservation, so release the slot. */
+    mem_event_release_slot(d, med);
+
+    /* Give this vCPU a black eye if necessary, on the way out.
+     * See the comments above mem_event_wake_blocked() for more information
+     * on how this mechanism works to avoid waiting. */
+    avail_req = mem_event_ring_available(med);
+    if ( current->domain == d && avail_req < d->max_vcpus )
+        mem_event_mark_and_pause(current, med);
+
+    mem_event_ring_unlock(med);
+
+    notify_via_xen_event_channel(d, med->xen_port);
+}
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
+{
+    mem_event_front_ring_t *front_ring;
+    RING_IDX rsp_cons;
+
+    mem_event_ring_lock(med);
+
+    front_ring = &med->front_ring;
+    rsp_cons = front_ring->rsp_cons;
+
+    if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
+    {
+        mem_event_ring_unlock(med);
+        return 0;
+    }
+
+    /* Copy response */
+    memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
+    rsp_cons++;
+
+    /* Update ring */
+    front_ring->rsp_cons = rsp_cons;
+    front_ring->sring->rsp_event = rsp_cons + 1;
+
+    /* Kick any waiters -- since we've just consumed an event,
+     * there may be additional space available in the ring. */
+    mem_event_wake(d, med);
+
+    mem_event_ring_unlock(med);
+
+    return 1;
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{
+    mem_event_ring_lock(med);
+    mem_event_release_slot(d, med);
+    mem_event_ring_unlock(med);
+}
+
+static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
+{
+    unsigned int avail_req;
+
+    if ( !med->ring_page )
+        return -ENOSYS;
+
+    mem_event_ring_lock(med);
+
+    avail_req = mem_event_ring_available(med);
+    if ( avail_req == 0 )
+    {
+        mem_event_ring_unlock(med);
+        return -EBUSY;
+    }
+
+    if ( !foreign )
+        med->target_producers++;
+    else
+        med->foreign_producers++;
+
+    mem_event_ring_unlock(med);
+
+    return 0;
+}
+
+/* Simple try_grab wrapper for use in the wait_event() macro. */
+static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
+{
+    *rc = mem_event_grab_slot(med, 0);
+    return *rc;
+}
+
+/* Call mem_event_grab_slot() until either the ring no longer exists or
+ * a slot becomes available. */
+static int mem_event_wait_slot(struct mem_event_domain *med)
+{
+    int rc = -EBUSY;
+    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
+    return rc;
+}
+
+bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+    return (med->ring_page != NULL);
+}
+
+/*
+ * Determines whether or not the current vCPU belongs to the target domain,
+ * and calls the appropriate wait function.  If it is a guest vCPU, then we
+ * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
+ * this function will always return 0 for a guest.  For a non-guest, we check
+ * for space and return -EBUSY if the ring is not available.
+ *
+ * Return codes: -ENOSYS: the ring is not yet configured
+ *               -EBUSY: the ring is busy
+ *               0: a spot has been reserved
+ *
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep)
+{
+    if ( (current->domain == d) && allow_sleep )
+        return mem_event_wait_slot(med);
+    else
+        return mem_event_grab_slot(med, (current->domain != d));
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_paging_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
+        p2m_mem_paging_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_access_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
+        p2m_mem_access_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_sharing_notification(struct vcpu *v, unsigned int port)
+{
+    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
+        mem_sharing_sharing_resume(v->domain);
+}
+
+int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+    int ret;
+    struct domain *d;
+
+    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
+    if ( ret )
+        return ret;
+
+    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
+    if ( ret )
+        goto out;
+
+    switch (op)
+    {
+        case XENMEM_paging_op:
+            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
+            break;
+        case XENMEM_sharing_op:
+            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
+            break;
+        default:
+            ret = -ENOSYS;
+    }
+
+ out:
+    rcu_unlock_domain(d);
+    return ret;
+}
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d)
+{
+    if ( d->mem_event->paging.ring_page ) {
+        /* Destroying the wait queue head means waking up all
+         * queued vcpus. This will drain the list, allowing
+         * the disable routine to complete. It will also drop
+         * all domain refs the wait-queued vcpus are holding.
+         * Finally, because this code path involves previously
+         * pausing the domain (domain_kill), unpausing the 
+         * vcpus causes no harm. */
+        destroy_waitqueue_head(&d->mem_event->paging.wq);
+        (void)mem_event_disable(d, &d->mem_event->paging);
+    }
+    if ( d->mem_event->access.ring_page ) {
+        destroy_waitqueue_head(&d->mem_event->access.wq);
+        (void)mem_event_disable(d, &d->mem_event->access);
+    }
+    if ( d->mem_event->share.ring_page ) {
+        destroy_waitqueue_head(&d->mem_event->share.wq);
+        (void)mem_event_disable(d, &d->mem_event->share);
+    }
+}
+
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    int rc;
+
+    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
+    if ( rc )
+        return rc;
+
+    if ( unlikely(d == current->domain) )
+    {
+        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
+                 d->domain_id);
+        return 0;
+    }
+
+    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
+    {
+        gdprintk(XENLOG_INFO,
+                 "Memory event op on a domain (%u) with no vcpus\n",
+                 d->domain_id);
+        return -EINVAL;
+    }
+
+    rc = -ENOSYS;
+
+    switch ( mec->mode )
+    {
+    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
+    {
+        struct mem_event_domain *med = &d->mem_event->paging;
+        rc = -EINVAL;
+
+        switch( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
+        {
+            struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+            rc = -EOPNOTSUPP;
+            /* pvh fixme: p2m_is_foreign types need addressing */
+            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+                break;
+
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            /* No paging if iommu is used */
+            rc = -EMLINK;
+            if ( unlikely(need_iommu(d)) )
+                break;
+
+            rc = -EXDEV;
+            /* Disallow paging in a PoD guest */
+            if ( p2m->pod.entry_count )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
+                                    HVM_PARAM_PAGING_RING_PFN,
+                                    mem_paging_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
+    {
+        struct mem_event_domain *med = &d->mem_event->access;
+        rc = -EINVAL;
+
+        switch( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
+        {
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            /* Currently only EPT is supported */
+            if ( !cpu_has_vmx )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_access, 
+                                    HVM_PARAM_ACCESS_RING_PFN,
+                                    mem_access_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
+    {
+        struct mem_event_domain *med = &d->mem_event->share;
+        rc = -EINVAL;
+
+        switch( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
+        {
+            rc = -EOPNOTSUPP;
+            /* pvh fixme: p2m_is_foreign types need addressing */
+            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+                break;
+
+            rc = -ENODEV;
+            /* Only HAP is supported */
+            if ( !hap_enabled(d) )
+                break;
+
+            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
+                                    HVM_PARAM_SHARING_RING_PFN,
+                                    mem_sharing_notification);
+        }
+        break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
+        {
+            if ( med->ring_page )
+                rc = mem_event_disable(d, med);
+        }
+        break;
+
+        default:
+            rc = -ENOSYS;
+            break;
+        }
+    }
+    break;
+
+    default:
+        rc = -ENOSYS;
+    }
+
+    return rc;
+}
+
+void mem_event_vcpu_pause(struct vcpu *v)
+{
+    ASSERT(v == current);
+
+    atomic_inc(&v->mem_event_pause_count);
+    vcpu_pause_nosync(v);
+}
+
+void mem_event_vcpu_unpause(struct vcpu *v)
+{
+    int old, new, prev = v->mem_event_pause_count.counter;
+
+    /* All unpause requests come as a result of toolstack responses.  Prevent
+     * underflow of the vcpu pause count. */
+    do
+    {
+        old = prev;
+        new = old - 1;
+
+        if ( new < 0 )
+        {
+            printk(XENLOG_G_WARNING
+                   "%pv mem_event: Too many unpause attempts\n", v);
+            return;
+        }
+
+        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
+    } while ( prev != old );
+
+    vcpu_unpause(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 2e3225d..164f1b9 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1104,6 +1104,69 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return rc;
 }
 
+void destroy_ring_for_helper(
+    void **_va, struct page_info *page)
+{
+    void *va = *_va;
+
+    if ( va != NULL )
+    {
+        unmap_domain_page_global(va);
+        put_page_and_type(page);
+        *_va = NULL;
+    }
+}
+
+int prepare_ring_for_helper(
+    struct domain *d, unsigned long gmfn, struct page_info **_page,
+    void **_va)
+{
+    struct page_info *page;
+    p2m_type_t p2mt;
+    void *va;
+
+    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
+
+#ifdef CONFIG_MEM_PAGING
+    if ( p2m_is_paging(p2mt) )
+    {
+        if ( page )
+            put_page(page);
+        p2m_mem_paging_populate(d, gmfn);
+        return -ENOENT;
+    }
+#endif
+#ifdef CONFIG_MEM_SHARING
+    if ( p2m_is_shared(p2mt) )
+    {
+        if ( page )
+            put_page(page);
+        return -ENOENT;
+    }
+#endif
+
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    va = __map_domain_page_global(page);
+    if ( va == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    *_va = va;
+    *_page = page;
+
+    return 0;
+}
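+
+/*
+ * Typical pairing (summary): a successful prepare_ring_for_helper()
+ * hands back a globally mapped va and a type-referenced page; the
+ * caller keeps both for the ring's lifetime and tears them down with
+ * destroy_ring_for_helper(&va, page), which is also safe to call
+ * again once *_va has been cleared to NULL.
+ */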
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 9fa80a4..7fc3b97 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -301,7 +301,6 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
     })
 
 static inline void put_gfn(struct domain *d, unsigned long gfn) {}
-static inline void mem_event_cleanup(struct domain *d) {}
 static inline int relinquish_shared_pages(struct domain *d)
 {
     return 0;
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 210ff57..8a864ce 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -67,6 +67,9 @@
 #define NR_CPUS 256
 #endif
 
+#define CONFIG_MEM_SHARING 1
+#define CONFIG_MEM_PAGING 1
+
 /* Maximum we can support with current vLAPIC ID mapping. */
 #define MAX_HVM_VCPUS 128
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 1123857..74e66f8 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -226,12 +226,6 @@ int hvm_vcpu_cacheattr_init(struct vcpu *v);
 void hvm_vcpu_cacheattr_destroy(struct vcpu *v);
 void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip);
 
-/* Prepare/destroy a ring for a dom0 helper. Helper with talk
- * with Xen on behalf of this hvm domain. */
-int prepare_ring_for_helper(struct domain *d, unsigned long gmfn, 
-                            struct page_info **_page, void **_va);
-void destroy_ring_for_helper(void **_va, struct page_info *page);
-
 bool_t hvm_send_assist_req(ioreq_t *p);
 void hvm_broadcast_assist_req(ioreq_t *p);
 
diff --git a/xen/include/asm-x86/mem_access.h b/xen/include/asm-x86/mem_access.h
deleted file mode 100644
index 5c7c5fd..0000000
--- a/xen/include/asm-x86/mem_access.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_access.h
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-#ifndef _XEN_ASM_MEM_ACCESS_H
-#define _XEN_ASM_MEM_ACCESS_H
-
-int mem_access_memop(unsigned long cmd,
-                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
-int mem_access_send_req(struct domain *d, mem_event_request_t *req);
-
-#endif /* _XEN_ASM_MEM_ACCESS_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
deleted file mode 100644
index ed4481a..0000000
--- a/xen/include/asm-x86/mem_event.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_event.h
- *
- * Common interface for memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- */
-
-
-#ifndef __MEM_EVENT_H__
-#define __MEM_EVENT_H__
-
-/* Returns whether a ring has been set up */
-bool_t mem_event_check_ring(struct mem_event_domain *med);
-
-/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
- * available space and the caller is a foreign domain. If the guest itself
- * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
- * that the ring does not lose future events. 
- *
- * However, the allow_sleep flag can be set to false in cases in which it is ok
- * to lose future events, and thus -EBUSY can be returned to guest vcpus
- * (handle with care!). 
- *
- * In general, you must follow a claim_slot() call with either put_request() or
- * cancel_slot(), both of which are guaranteed to
- * succeed. 
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d, 
-                                        struct mem_event_domain *med)
-{
-    return __mem_event_claim_slot(d, med, 1);
-}
-
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
-{
-    return __mem_event_claim_slot(d, med, 0);
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
-
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req);
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp);
-
-int do_mem_event_op(int op, uint32_t domain, void *arg);
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
-
-void mem_event_vcpu_pause(struct vcpu *v);
-void mem_event_vcpu_unpause(struct vcpu *v);
-
-#endif /* __MEM_EVENT_H__ */
-
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7b85865..ebd482d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -611,8 +611,6 @@ unsigned int domain_clamp_alloc_bitsize(struct domain *d, unsigned int bits);
 
 unsigned long domain_get_maximum_gpfn(struct domain *d);
 
-void mem_event_cleanup(struct domain *d);
-
 extern struct domain *dom_xen, *dom_io, *dom_cow;	/* for vmcoreinfo */
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
new file mode 100644
index 0000000..19d1a2d
--- /dev/null
+++ b/xen/include/xen/mem_access.h
@@ -0,0 +1,60 @@
+/******************************************************************************
+ * mem_access.h
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef _XEN_ASM_MEM_ACCESS_H
+#define _XEN_ASM_MEM_ACCESS_H
+
+#include <public/memory.h>
+
+#ifdef HAS_MEM_ACCESS
+
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
+int mem_access_send_req(struct domain *d, mem_event_request_t *req);
+
+#else
+
+static inline
+int mem_access_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+    return -ENOSYS;
+}
+
+static inline
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+    return -ENOSYS;
+}
+
+#endif /* HAS_MEM_ACCESS */
+
+#endif /* _XEN_ASM_MEM_ACCESS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
new file mode 100644
index 0000000..8612b26
--- /dev/null
+++ b/xen/include/xen/mem_event.h
@@ -0,0 +1,143 @@
+/******************************************************************************
+ * mem_event.h
+ *
+ * Common interface for memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+
+#ifndef __MEM_EVENT_H__
+#define __MEM_EVENT_H__
+
+#include <xen/sched.h>
+
+#ifdef HAS_MEM_ACCESS
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d);
+
+/* Returns whether a ring has been set up */
+bool_t mem_event_check_ring(struct mem_event_domain *med);
+
+/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
+ * available space and the caller is a foreign domain. If the guest itself
+ * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
+ * that the ring does not lose future events. 
+ *
+ * However, the allow_sleep flag can be set to false in cases in which it is ok
+ * to lose future events, and thus -EBUSY can be returned to guest vcpus
+ * (handle with care!). 
+ *
+ * In general, you must follow a claim_slot() call with either put_request() or
+ * cancel_slot(), both of which are guaranteed to
+ * succeed. 
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+                            bool_t allow_sleep);
+static inline int mem_event_claim_slot(struct domain *d, 
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 1);
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return __mem_event_claim_slot(d, med, 0);
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
+
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+                            mem_event_request_t *req);
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+                           mem_event_response_t *rsp);
+
+int do_mem_event_op(int op, uint32_t domain, void *arg);
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+
+void mem_event_vcpu_pause(struct vcpu *v);
+void mem_event_vcpu_unpause(struct vcpu *v);
+
+#else
+
+static inline void mem_event_cleanup(struct domain *d) {}
+
+static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+    return 0;
+}
+
+static inline int mem_event_claim_slot(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return -ENOSYS;
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+                                        struct mem_event_domain *med)
+{
+    return -ENOSYS;
+}
+
+static inline
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{}
+
+static inline
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+                            mem_event_request_t *req)
+{}
+
+static inline
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+                           mem_event_response_t *rsp)
+{
+    return -ENOSYS;
+}
+
+static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+    return -ENOSYS;
+}
+
+static inline
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    return -ENOSYS;
+}
+
+static inline void mem_event_vcpu_pause(struct vcpu *v) {}
+static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
+
+#endif /* HAS_MEM_ACCESS */
+
+#endif /* __MEM_EVENT_H__ */
+
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index b183189..7c0efc7 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -371,4 +371,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn);
 /* TRUE if the whole page at @mfn is of the requested RAM type(s) above. */
 int page_is_ram_type(unsigned long mfn, unsigned long mem_type);
 
+/* Prepare/destroy a ring for a dom0 helper. The helper will talk
+ * with Xen on behalf of this domain. */
+int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
+                            struct page_info **_page, void **_va);
+void destroy_ring_for_helper(void **_va, struct page_info *page);
+
 #endif /* __XEN_MM_H__ */
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread
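
To make the claim/put/cancel contract documented above concrete, here is a
minimal sketch of a hypothetical in-hypervisor producer (names other than
the mem_event_* calls are made up; error handling is trimmed):

/* Hedged sketch of a producer following the claim/put/cancel contract. */
static void example_send_event(struct domain *d, struct mem_event_domain *med)
{
    mem_event_request_t req = { .reason = MEM_EVENT_REASON_VIOLATION };

    /* May sleep on med->wq if we are a target-domain vCPU and the ring
     * is full; returns -ENOSYS/-EBUSY in the other cases. */
    if ( mem_event_claim_slot(d, med) )
        return;

    if ( current->domain == d )
    {
        /* Pause ourselves so the helper inspects a stable vCPU. */
        mem_event_vcpu_pause(current);
        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
    }
    req.vcpu_id = current->vcpu_id;

    /* Guaranteed to succeed after a successful claim; had the event
     * become unnecessary, mem_event_cancel_slot(d, med) would release
     * the reservation instead. */
    mem_event_put_request(d, med, &req);
}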

* [PATCH for-4.5 v6 02/17] xen: Relocate p2m_mem_access_resume to mem_access common
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 01/17] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 03/17] xen: Relocate struct npfec definition into common Tamas K Lengyel
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

The p2m_mem_access_resume() function is not p2m specific and can be shared
across ARM/x86.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v6: Keep the comment describing the function.
v5: Style fix.
---
 xen/arch/x86/mm/p2m.c        | 24 ------------------------
 xen/common/mem_access.c      | 26 +++++++++++++++++++++++++-
 xen/common/mem_event.c       |  2 +-
 xen/include/asm-x86/p2m.h    |  2 --
 xen/include/xen/mem_access.h |  5 +++++
 5 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a9f120a..bf8e537 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1427,30 +1427,6 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     return (p2ma == p2m_access_n2rwx);
 }
 
-void p2m_mem_access_resume(struct domain *d)
-{
-    mem_event_response_t rsp;
-
-    /* Pull all responses off the ring */
-    while( mem_event_get_response(d, &d->mem_event->access, &rsp) )
-    {
-        struct vcpu *v;
-
-        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
-            continue;
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        /* Unpause domain */
-        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            mem_event_vcpu_unpause(v);
-    }
-}
-
 /* Set access type for a region of pfns.
  * If start_pfn == -1ul, sets the default access type */
 long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 9a8c1a9..1674d7a 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -29,6 +29,30 @@
 #include <asm/p2m.h>
 #include <xsm/xsm.h>
 
+void mem_access_resume(struct domain *d)
+{
+    mem_event_response_t rsp;
+
+    /* Pull all responses off the ring. */
+    while ( mem_event_get_response(d, &d->mem_event->access, &rsp) )
+    {
+        struct vcpu *v;
+
+        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+            continue;
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
+        /* Unpause domain. */
+        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+            mem_event_vcpu_unpause(v);
+    }
+}
+
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
 {
@@ -58,7 +82,7 @@ int mem_access_memop(unsigned long cmd,
     switch ( mao.op )
     {
     case XENMEM_access_op_resume:
-        p2m_mem_access_resume(d);
+        mem_access_resume(d);
         rc = 0;
         break;
 
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 336b638..615a8b7 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -430,7 +430,7 @@ static void mem_paging_notification(struct vcpu *v, unsigned int port)
 static void mem_access_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->mem_event->access.ring_page != NULL) )
-        p2m_mem_access_resume(v->domain);
+        mem_access_resume(v->domain);
 }
 
 /* Registered with Xen-bound event channel for incoming notifications. */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index bae669e..3702bea 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -601,8 +601,6 @@ void p2m_mem_paging_resume(struct domain *d);
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
                             mem_event_request_t **req_ptr);
-/* Resumes the running of the VCPU, restarting the last instruction */
-void p2m_mem_access_resume(struct domain *d);
 
 /* Set access type for a region of pfns.
  * If start_pfn == -1ul, sets the default access type */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 19d1a2d..6ceb2a4 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -31,6 +31,9 @@ int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 int mem_access_send_req(struct domain *d, mem_event_request_t *req);
 
+/* Resumes the running of the VCPU, restarting the last instruction */
+void mem_access_resume(struct domain *d);
+
 #else
 
 static inline
@@ -46,6 +49,8 @@ int mem_access_send_req(struct domain *d, mem_event_request_t *req)
     return -ENOSYS;
 }
 
+static inline void mem_access_resume(struct domain *d) {}
+
 #endif /* HAS_MEM_ACCESS */
 
 #endif /* _XEN_ASM_MEM_ACCESS_H */
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread
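
The hypervisor-side drain loop above has a mirror image in the dom0
helper, which consumes requests and queues responses on the back ring.
A hedged sketch of that side, modelled loosely on xen-access (it assumes
back_ring was already initialised over the shared ring page; glue is
elided):

static void example_handle_one(mem_event_back_ring_t *back_ring)
{
    mem_event_request_t req;
    mem_event_response_t rsp;
    RING_IDX idx = back_ring->req_cons;

    /* Consume one request... */
    memcpy(&req, RING_GET_REQUEST(back_ring, idx), sizeof(req));
    back_ring->req_cons = ++idx;
    back_ring->sring->req_event = idx + 1;

    /* ...and answer it, echoing the pause flag so that
     * mem_access_resume() unpauses the vCPU. */
    memset(&rsp, 0, sizeof(rsp));
    rsp.vcpu_id = req.vcpu_id;
    rsp.flags = req.flags & MEM_EVENT_FLAG_VCPU_PAUSED;

    idx = back_ring->rsp_prod_pvt;
    memcpy(RING_GET_RESPONSE(back_ring, idx), &rsp, sizeof(rsp));
    back_ring->rsp_prod_pvt = ++idx;
    RING_PUSH_RESPONSES(back_ring);
}

After pushing the response the helper kicks the event channel, which is
what lands in mem_access_notification() and ultimately in the
mem_access_resume() loop moved by this patch.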

* [PATCH for-4.5 v6 03/17] xen: Relocate struct npfec definition into common
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 01/17] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 02/17] xen: Relocate p2m_mem_access_resume to mem_access common Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 04/17] xen: Relocate mem_event_op domctl and access_op memop " Tamas K Lengyel
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Nested page fault exception code definitions can be reused on ARM as well.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/include/asm-x86/hvm/hvm.h |  2 +-
 xen/include/asm-x86/mm.h      | 21 ---------------------
 xen/include/xen/mm.h          | 21 +++++++++++++++++++++
 3 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 74e66f8..895f42b 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -27,7 +27,7 @@
 #include <public/domctl.h>
 #include <public/hvm/save.h>
 #include <public/hvm/ioreq.h>
-#include <asm/mm.h>
+#include <xen/mm.h>
 
 /* Interrupt acknowledgement sources. */
 enum hvm_intsrc {
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index ebd482d..bafd28c 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -551,27 +551,6 @@ void audit_domains(void);
 
 #endif
 
-/*
- * Extra fault info types which are used to further describe
- * the source of an access violation.
- */
-typedef enum {
-    npfec_kind_unknown, /* must be first */
-    npfec_kind_in_gpt,  /* violation in guest page table */
-    npfec_kind_with_gla /* violation with guest linear address */
-} npfec_kind_t;
-
-/*
- * Nested page fault exception codes.
- */
-struct npfec {
-    unsigned int read_access:1;
-    unsigned int write_access:1;
-    unsigned int insn_fetch:1;
-    unsigned int gla_valid:1;
-    unsigned int kind:2;  /* npfec_kind_t */
-};
-
 int new_guest_cr3(unsigned long pfn);
 void make_cr3(struct vcpu *v, unsigned long mfn);
 void update_cr3(struct vcpu *v);
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 7c0efc7..74a65a6 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -88,6 +88,27 @@ int assign_pages(
 /* Dump info to serial console */
 void arch_dump_shared_mem_info(void);
 
+/*
+ * Extra fault info types which are used to further describe
+ * the source of an access violation.
+ */
+typedef enum {
+    npfec_kind_unknown, /* must be first */
+    npfec_kind_in_gpt,  /* violation in guest page table */
+    npfec_kind_with_gla /* violation with guest linear address */
+} npfec_kind_t;
+
+/*
+ * Nested page fault exception codes.
+ */
+struct npfec {
+    unsigned int read_access:1;
+    unsigned int write_access:1;
+    unsigned int insn_fetch:1;
+    unsigned int gla_valid:1;
+    unsigned int kind:2;  /* npfec_kind_t */
+};
+
 /* memflags: */
 #define _MEMF_no_refcount 0
 #define  MEMF_no_refcount (1U<<_MEMF_no_refcount)
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread
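
Since struct npfec now lives in common code, each architecture only has
to translate its own fault syndrome into these bits. A hypothetical
translation helper, purely to illustrate the field semantics (the
argument names are made up):

static struct npfec example_npfec_from_fault(bool_t is_write, bool_t is_exec,
                                             bool_t gla_is_valid)
{
    struct npfec npfec = {
        .read_access  = !is_write && !is_exec,
        .write_access = is_write,
        .insn_fetch   = is_exec,
        .gla_valid    = gla_is_valid,
        .kind         = gla_is_valid ? npfec_kind_with_gla
                                     : npfec_kind_unknown,
    };

    return npfec;
}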

* [PATCH for-4.5 v6 04/17] xen: Relocate mem_event_op domctl and access_op memop into common.
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 03/17] xen: Relocate struct npfec definition into common Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 05/17] x86/p2m: Typo fix for spelling ambiguous Tamas K Lengyel
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v6: Grouping style fix of #includes in common/memory.c.
v5: Move memop compat into common as well.
    Position domctl in switch relative to the domctl #.
v4: Don't remove memop handling from x86_64/compat and style fixes.
---
 xen/arch/x86/domctl.c           | 8 --------
 xen/arch/x86/x86_64/compat/mm.c | 4 ----
 xen/arch/x86/x86_64/mm.c        | 4 ----
 xen/common/compat/memory.c      | 5 +++++
 xen/common/domctl.c             | 7 +++++++
 xen/common/memory.c             | 9 +++++++--
 6 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 26a3ea1..166cfb3 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1131,14 +1131,6 @@ long arch_do_domctl(
     }
     break;
 
-    case XEN_DOMCTL_mem_event_op:
-    {
-        ret = mem_event_domctl(d, &domctl->u.mem_event_op,
-                              guest_handle_cast(u_domctl, void));
-        copyback = 1;
-    }
-    break;
-
     case XEN_DOMCTL_mem_sharing_op:
     {
         ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index c079702..54f25b7 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -198,10 +198,6 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-    case XENMEM_access_op:
-        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
-        break;
-
     case XENMEM_sharing_op:
     {
         xen_mem_sharing_op_t mso;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 0da6ddc..0aaa460 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1048,10 +1048,6 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-    case XENMEM_access_op:
-        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
-        break;
-
     case XENMEM_sharing_op:
     {
         xen_mem_sharing_op_t mso;
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 25dc016..43d02bc 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -4,6 +4,7 @@
 #include <xen/guest_access.h>
 #include <xen/sched.h>
 #include <xen/event.h>
+#include <xen/mem_access.h>
 #include <asm/current.h>
 #include <compat/memory.h>
 
@@ -381,6 +382,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
             break;
         }
 
+        case XENMEM_access_op:
+            rc = mem_access_memop(cmd, guest_handle_cast(compat, xen_mem_access_op_t));
+            break;
+
         case XENMEM_add_to_physmap_batch:
             start_extent = end_extent;
             break;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 1ad0729..c00a899 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -24,6 +24,7 @@
 #include <xen/bitmap.h>
 #include <xen/paging.h>
 #include <xen/hypercall.h>
+#include <xen/mem_event.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -1110,6 +1111,12 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     }
     break;
 
+    case XEN_DOMCTL_mem_event_op:
+        ret = mem_event_domctl(d, &op->u.mem_event_op,
+                               guest_handle_cast(u_domctl, void));
+        copyback = 1;
+        break;
+
     case XEN_DOMCTL_disable_migrate:
     {
         d->disable_migrate = op->u.disable_migrate.disable;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 164f1b9..9c0d110 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -21,13 +21,14 @@
 #include <xen/errno.h>
 #include <xen/tmem.h>
 #include <xen/tmem_xen.h>
+#include <xen/numa.h>
+#include <xen/mem_access.h>
+#include <xen/trace.h>
 #include <asm/current.h>
 #include <asm/hardirq.h>
 #include <asm/p2m.h>
-#include <xen/numa.h>
 #include <public/memory.h>
 #include <xsm/xsm.h>
-#include <xen/trace.h>
 
 struct memop_args {
     /* INPUT */
@@ -939,6 +940,10 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case XENMEM_access_op:
+        rc = mem_access_memop(cmd, guest_handle_cast(arg, xen_mem_access_op_t));
+        break;
+
     case XENMEM_claim_pages:
         if ( copy_from_guest(&reservation, arg, 1) )
             return -EFAULT;
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 05/17] x86/p2m: Typo fix for spelling ambiguous
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 04/17] xen: Relocate mem_event_op domctl and access_op memop " Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 06/17] xen/mem_event: Clean out superfluous white-spaces Tamas K Lengyel
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 xen/include/asm-x86/p2m.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 3702bea..44aedb1 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -80,7 +80,7 @@ typedef enum {
  * caused by p2m_access_t restrictions are sent to the mem_event
  * interface.
  *
- * The access permissions are soft state: when any ambigious change of page
+ * The access permissions are soft state: when any ambiguous change of page
  * type or use occurs, or when pages are flushed, swapped, or at any other
  * convenient type, the access permissions can get reset to the p2m_domain
  * default.
@@ -262,7 +262,7 @@ struct p2m_domain {
     long               (*audit_p2m)(struct p2m_domain *p2m);
 
     /* Default P2M access type for each page in the the domain: new pages,
-     * swapped in pages, cleared pages, and pages that are ambiquously
+     * swapped in pages, cleared pages, and pages that are ambiguously
      * retyped get this access type.  See definition of p2m_access_t. */
     p2m_access_t default_access;
 
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 06/17] xen/mem_event: Clean out superfluous white-spaces
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 05/17] x86/p2m: Typo fix for spelling ambiguous Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 07/17] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: Clean the mem_event header as well.
---
 xen/common/mem_event.c      | 20 ++++++++++----------
 xen/include/xen/mem_event.h |  8 ++++----
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 615a8b7..af0ffb2 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -57,7 +57,7 @@ static int mem_event_enable(
     if ( med->ring_page )
         return -EBUSY;
 
-    /* The parameter defaults to zero, and it should be 
+    /* The parameter defaults to zero, and it should be
      * set to something */
     if ( ring_gfn == 0 )
         return -ENOSYS;
@@ -65,7 +65,7 @@ static int mem_event_enable(
     mem_event_ring_lock_init(med);
     mem_event_ring_lock(med);
 
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct, 
+    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
                                     &med->ring_page);
     if ( rc < 0 )
         goto err;
@@ -97,7 +97,7 @@ static int mem_event_enable(
     return 0;
 
  err:
-    destroy_ring_for_helper(&med->ring_page, 
+    destroy_ring_for_helper(&med->ring_page,
                             med->ring_pg_struct);
     mem_event_ring_unlock(med);
 
@@ -226,7 +226,7 @@ static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
             }
         }
 
-        destroy_ring_for_helper(&med->ring_page, 
+        destroy_ring_for_helper(&med->ring_page,
                                 med->ring_pg_struct);
         mem_event_ring_unlock(med);
     }
@@ -479,7 +479,7 @@ void mem_event_cleanup(struct domain *d)
          * the disable routine to complete. It will also drop
          * all domain refs the wait-queued vcpus are holding.
          * Finally, because this code path involves previously
-         * pausing the domain (domain_kill), unpausing the 
+         * pausing the domain (domain_kill), unpausing the
          * vcpus causes no harm. */
         destroy_waitqueue_head(&d->mem_event->paging.wq);
         (void)mem_event_disable(d, &d->mem_event->paging);
@@ -559,7 +559,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( p2m->pod.entry_count )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
                                     HVM_PARAM_PAGING_RING_PFN,
                                     mem_paging_notification);
         }
@@ -579,7 +579,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
-    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
+    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
     {
         struct mem_event_domain *med = &d->mem_event->access;
         rc = -EINVAL;
@@ -597,7 +597,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( !cpu_has_vmx )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_access, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_access,
                                     HVM_PARAM_ACCESS_RING_PFN,
                                     mem_access_notification);
         }
@@ -617,7 +617,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING: 
+    case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
     {
         struct mem_event_domain *med = &d->mem_event->share;
         rc = -EINVAL;
@@ -636,7 +636,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( !hap_enabled(d) )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing, 
+            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
                                     HVM_PARAM_SHARING_RING_PFN,
                                     mem_sharing_notification);
         }
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
index 8612b26..4f3ad8e 100644
--- a/xen/include/xen/mem_event.h
+++ b/xen/include/xen/mem_event.h
@@ -37,19 +37,19 @@ bool_t mem_event_check_ring(struct mem_event_domain *med);
 /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
  * available space and the caller is a foreign domain. If the guest itself
  * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
- * that the ring does not lose future events. 
+ * that the ring does not lose future events.
  *
  * However, the allow_sleep flag can be set to false in cases in which it is ok
  * to lose future events, and thus -EBUSY can be returned to guest vcpus
- * (handle with care!). 
+ * (handle with care!).
  *
  * In general, you must follow a claim_slot() call with either put_request() or
  * cancel_slot(), both of which are guaranteed to
- * succeed. 
+ * succeed.
  */
 int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
                             bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d, 
+static inline int mem_event_claim_slot(struct domain *d,
                                         struct mem_event_domain *med)
 {
     return __mem_event_claim_slot(d, med, 1);
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 07/17] xen/mem_event: Relax error condition on debug builds
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 06/17] xen/mem_event: Clean out superfluous white-spaces Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 08/17] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

A faulty toolstack can crash a debug hypervisor, which is unpleasant
during development and testing.

Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v5: Make printout greppable.
v4: Add domain_id to the printout.
v3: Switch to gdprintk and print the vCPU id as well.
---
 xen/common/mem_event.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index af0ffb2..9eca687 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -278,7 +278,11 @@ void mem_event_put_request(struct domain *d,
     if ( current->domain != d )
     {
         req->flags |= MEM_EVENT_FLAG_FOREIGN;
-        ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+#ifndef NDEBUG
+        if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
+            gdprintk(XENLOG_G_WARNING, "d%dv%d was not paused.\n",
+                     d->domain_id, req->vcpu_id);
+#endif
     }
 
     mem_event_ring_lock(med);
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread
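
The same downgrade recipe applies anywhere a toolstack-triggerable
ASSERT lurks in a path that should survive buggy monitors; a generic,
hedged form (the condition name is made up):

#ifndef NDEBUG
    /* Debug builds: report the anomaly instead of panicking.
     * Release builds: compile to nothing, exactly as the old ASSERT did. */
    if ( suspicious_toolstack_state )
        gdprintk(XENLOG_G_WARNING, "d%d: unexpected toolstack state\n",
                 d->domain_id);
#endif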

* [PATCH for-4.5 v6 08/17] xen/mem_event: Abstract architecture specific sanity checks
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (6 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 07/17] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 09/17] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Move the architecture-specific sanity checks into their own function,
which is called when enabling mem_event.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v5: Style fix.
v4: Style fix.
v2: Move sanity check function into architecture specific p2m.h
---
 xen/common/mem_event.c    | 7 +------
 xen/include/asm-x86/p2m.h | 6 ++++++
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 9eca687..4d0a27b 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -593,12 +593,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
         {
             rc = -ENODEV;
-            /* Only HAP is supported */
-            if ( !hap_enabled(d) )
-                break;
-
-            /* Currently only EPT is supported */
-            if ( !cpu_has_vmx )
+            if ( !p2m_mem_event_sanity_check(d) )
                 break;
 
             rc = mem_event_enable(d, mec, med, _VPF_mem_access,
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 44aedb1..c256a3a 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -612,6 +612,12 @@ long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
 int p2m_get_mem_access(struct domain *d, unsigned long pfn,
                        xenmem_access_t *access);
 
+/* Sanity check for mem_event hardware support */
+static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
+{
+    return hap_enabled(d) && cpu_has_vmx;
+}
+
 /* 
  * Internal functions, only called by other p2m code
  */
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread
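
With the hardware requirements hidden behind p2m_mem_event_sanity_check(),
an architecture without the x86 constraints only needs a trivial hook.
A plausible ARM counterpart (an assumption here, not part of this patch):

/* Hypothetical asm-arm/p2m.h variant: 2-stage paging is always present
 * when the virtualization extensions are, so no extra checks are needed. */
static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
{
    return 1;
}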

* [PATCH for-4.5 v6 09/17] xen/mem_access: Abstract architecture specific sanity check
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (7 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 08/17] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes Tamas K Lengyel
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v5: Style fix.
v4: Style fix.
v2: Move sanity check function into architecture specific p2m.h
---
 xen/common/mem_access.c   | 2 +-
 xen/include/asm-x86/p2m.h | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 1674d7a..78cfcbf 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -68,7 +68,7 @@ int mem_access_memop(unsigned long cmd,
         return rc;
 
     rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
     rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index c256a3a..520bd51 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -618,6 +618,12 @@ static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
     return hap_enabled(d) && cpu_has_vmx;
 }
 
+/* Sanity check for mem_access hardware support */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    return is_hvm_domain(d);
+}
+
 /* 
  * Internal functions, only called by other p2m code
  */
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (8 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 09/17] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 22:35   ` Ian Campbell
  2014-09-15 14:02 ` [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl Tamas K Lengyel
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Define p2m_access_t on ARM and add the necessary changes so the page
table construction routines pass the default access information. Also
define the radix tree that will hold the access permission settings,
as the PTEs don't have enough software-programmable bits available.
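
As an illustration (a minimal sketch, not part of this patch), the
storage scheme reduces to keying the tree by gpfn and encoding the
p2m_access_t value with Xen's radix-tree int/ptr helpers; a missing
entry falls back to p2m->default_access:

    /* Sketch: store and fetch a p2m_access_t for a gpfn. Only non-rwx
     * settings would be kept (see the data-abort patch later in this
     * series), so an empty slot means "use the domain-wide default". */
    static int access_store(struct p2m_domain *p2m, unsigned long gpfn,
                            p2m_access_t a)
    {
        return radix_tree_insert(&p2m->mem_access_settings, gpfn,
                                 radix_tree_int_to_ptr(a));
    }

    static p2m_access_t access_fetch(struct p2m_domain *p2m,
                                     unsigned long gpfn)
    {
        void *v = radix_tree_lookup(&p2m->mem_access_settings, gpfn);

        return v ? radix_tree_ptr_to_int(v) : p2m->default_access;
    }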

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v6: Move mem_event header include to first patch that needs it.
v5: #include grouping style-fix.
v4: move p2m_get_hostp2m definition here.
---
 xen/arch/arm/p2m.c        | 52 +++++++++++++++++----------
 xen/include/asm-arm/p2m.h | 90 ++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 104 insertions(+), 38 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a7dcdf5..627da2f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -229,7 +229,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
 }
 
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
-                               p2m_type_t t)
+                               p2m_type_t t, p2m_access_t a)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     /* sh, xn and write bit will be defined in the following switches
@@ -347,7 +347,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
          for ( i=0 ; i < LPAE_ENTRIES; i++ )
          {
              pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
-                                    MATTR_MEM, t);
+                                    MATTR_MEM, t, p2m->default_access);
 
              /*
               * First and second level super pages set p2m.table = 0, but
@@ -367,7 +367,8 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
 
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
+    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid,
+                           p2m->default_access);
 
     p2m_write_pte(entry, pte, flush_cache);
 
@@ -470,7 +471,8 @@ static int apply_one_level(struct domain *d,
                            paddr_t *maddr,
                            bool_t *flush,
                            int mattr,
-                           p2m_type_t t)
+                           p2m_type_t t,
+                           p2m_access_t a)
 {
     const paddr_t level_size = level_sizes[level];
     const paddr_t level_mask = level_masks[level];
@@ -499,7 +501,7 @@ static int apply_one_level(struct domain *d,
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
+                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
                 p2m_write_pte(entry, pte, flush_cache);
@@ -534,7 +536,7 @@ static int apply_one_level(struct domain *d,
              (level == 3 || !p2m_table(orig_pte)) )
         {
             /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
+            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
@@ -713,7 +715,9 @@ static int apply_p2m_changes(struct domain *d,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     uint32_t mask,
+                     p2m_type_t t,
+                     p2m_access_t a)
 {
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -780,7 +784,7 @@ static int apply_p2m_changes(struct domain *d,
                               level, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
@@ -802,7 +806,7 @@ static int apply_p2m_changes(struct domain *d,
                               level, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
@@ -822,7 +826,7 @@ static int apply_p2m_changes(struct domain *d,
                               level, flush_pt, op,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
-                              mattr, t);
+                              mattr, t, a);
         if ( ret < 0 ) { rc = ret ; goto out; }
         /* L3 had better have done something! We cannot descend any further */
         BUG_ON(ret == P2M_ONE_DESCEND);
@@ -865,7 +869,7 @@ out:
          */
         apply_p2m_changes(d, REMOVE,
                           start_gpaddr, addr + level_sizes[level], orig_maddr,
-                          mattr, p2m_invalid);
+                          mattr, 0, p2m_invalid, d->arch.p2m.default_access);
     }
 
     spin_unlock(&p2m->lock);
@@ -878,7 +882,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, p2m_ram_rw);
+                             0, MATTR_MEM, 0, p2m_ram_rw,
+                             d->arch.p2m.default_access);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -890,7 +895,8 @@ int map_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_mmio_direct);
+                             MATTR_DEV, 0, p2m_mmio_direct,
+                             d->arch.p2m.default_access);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -902,7 +908,8 @@ int unmap_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_invalid);
+                             MATTR_DEV, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -914,7 +921,8 @@ int guest_physmap_add_entry(struct domain *d,
     return apply_p2m_changes(d, INSERT,
                              pfn_to_paddr(gpfn),
                              pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, t);
+                             pfn_to_paddr(mfn), MATTR_MEM, 0, t,
+                             d->arch.p2m.default_access);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -924,7 +932,8 @@ void guest_physmap_remove_page(struct domain *d,
     apply_p2m_changes(d, REMOVE,
                       pfn_to_paddr(gpfn),
                       pfn_to_paddr(gpfn + (1<<page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                      pfn_to_paddr(mfn), MATTR_MEM, 0, p2m_invalid,
+                      d->arch.p2m.default_access);
 }
 
 int arch_grant_map_page_identity(struct domain *d, unsigned long frame,
@@ -1053,6 +1062,8 @@ void p2m_teardown(struct domain *d)
 
     p2m_free_vmid(d);
 
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
     spin_unlock(&p2m->lock);
 }
 
@@ -1078,6 +1089,9 @@ int p2m_init(struct domain *d)
     p2m->max_mapped_gfn = 0;
     p2m->lowest_mapped_gfn = ULONG_MAX;
 
+    p2m->default_access = p2m_access_rwx;
+    radix_tree_init(&p2m->mem_access_settings);
+
 err:
     spin_unlock(&p2m->lock);
 
@@ -1092,7 +1106,8 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, 0, p2m_invalid,
+                              d->arch.p2m.default_access);
 }
 
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
@@ -1106,7 +1121,8 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
                              pfn_to_paddr(INVALID_MFN),
-                             MATTR_MEM, p2m_invalid);
+                             MATTR_MEM, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 97cbae4..89e653c 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,6 +2,7 @@
 #define _XEN_P2M_H
 
 #include <xen/mm.h>
+#include <xen/radix-tree.h>
 
 #include <xen/p2m-common.h>
 
@@ -11,6 +12,48 @@ struct domain;
 
 extern void memory_type_changed(struct domain *);
 
+/* List of possible types for each page in the p2m entry.
+ * The number of available bits per page in the PTE for this purpose is 4,
+ * so only 16 values are possible. If we run out of values in the future,
+ * higher values can be used for pseudo-types that are not stored in the
+ * p2m entry.
+ */
+typedef enum {
+    p2m_invalid = 0,    /* Nothing mapped here */
+    p2m_ram_rw,         /* Normal read/write guest RAM */
+    p2m_ram_ro,         /* Read-only; writes are silently dropped */
+    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
+    p2m_map_foreign,    /* Ram pages from foreign domain */
+    p2m_grant_map_rw,   /* Read/write grant mapping */
+    p2m_grant_map_ro,   /* Read-only grant mapping */
+    /* The types below are only used to decide the page attribute in the P2M */
+    p2m_iommu_map_rw,   /* Read/write iommu mapping */
+    p2m_iommu_map_ro,   /* Read-only iommu mapping */
+    p2m_max_real_type,  /* Types after this won't be stored in the p2m */
+} p2m_type_t;
+
+/*
+ * Additional access types, which are used to further restrict
+ * the permissions given by the p2m_type_t memory type. Violations
+ * caused by p2m_access_t restrictions are sent to the mem_event
+ * interface.
+ *
+ * The access permissions are soft state: when any ambiguous change of page
+ * type or use occurs, or when pages are flushed, swapped, or at any other
+ * convenient time, the access permissions can get reset to the p2m_domain
+ * default.
+ */
+typedef enum {
+    p2m_access_n    = 0, /* No access permissions allowed */
+    p2m_access_r    = 1,
+    p2m_access_w    = 2,
+    p2m_access_rw   = 3,
+    p2m_access_x    = 4,
+    p2m_access_rx   = 5,
+    p2m_access_wx   = 6,
+    p2m_access_rwx  = 7
+} p2m_access_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
     /* Lock that protects updates to the p2m */
@@ -44,27 +87,20 @@ struct p2m_domain {
          * at each p2m tree level. */
         unsigned long shattered[4];
     } stats;
-};
 
-/* List of possible type for each page in the p2m entry.
- * The number of available bit per page in the pte for this purpose is 4 bits.
- * So it's possible to only have 16 fields. If we run out of value in the
- * future, it's possible to use higher value for pseudo-type and don't store
- * them in the p2m entry.
- */
-typedef enum {
-    p2m_invalid = 0,    /* Nothing mapped here */
-    p2m_ram_rw,         /* Normal read/write guest RAM */
-    p2m_ram_ro,         /* Read-only; writes are silently dropped */
-    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
-    p2m_map_foreign,    /* Ram pages from foreign domain */
-    p2m_grant_map_rw,   /* Read/write grant mapping */
-    p2m_grant_map_ro,   /* Read-only grant mapping */
-    /* The types below are only used to decide the page attribute in the P2M */
-    p2m_iommu_map_rw,   /* Read/write iommu mapping */
-    p2m_iommu_map_ro,   /* Read-only iommu mapping */
-    p2m_max_real_type,  /* Types after this won't be store in the p2m */
-} p2m_type_t;
+    /* Default P2M access type for each page in the domain: new pages,
+     * swapped in pages, cleared pages, and pages that are ambiguously
+     * retyped get this access type. See definition of p2m_access_t. */
+    p2m_access_t default_access;
+
+    /* If true and an access fault comes in when there is no mem_event
+     * listener, pause the domain. Otherwise, remove access restrictions. */
+    bool_t access_required;
+
+    /* Radix tree to store the p2m_access_t settings as the PTEs don't have
+     * enough available bits to store this information. */
+    struct radix_tree_root mem_access_settings;
+};
 
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
@@ -198,6 +234,20 @@ int arch_grant_map_page_identity(struct domain *d, unsigned long frame,
                                  bool_t writeable);
 int arch_grant_unmap_page_identity(struct domain *d, unsigned long frame);
 
+/* get host p2m table */
+#define p2m_get_hostp2m(d) (&((d)->arch.p2m))
+
+/* mem_event and mem_access are supported for all ARM domains */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
+static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
 #endif /* _XEN_P2M_H */
 
 /*
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (9 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 22:37   ` Ian Campbell
  2014-09-15 22:38   ` Ian Campbell
  2014-09-15 14:02 ` [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
                   ` (5 subsequent siblings)
  16 siblings, 2 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/domctl.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 45974e7..7cf719c 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -31,6 +31,19 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         return p2m_cache_flush(d, s, e);
     }
 
+    case XEN_DOMCTL_set_access_required:
+    {
+        struct p2m_domain *p2m;
+
+        if ( current->domain == d )
+            return -EPERM;
+
+        p2m = p2m_get_hostp2m(d);
+        p2m->access_required = domctl->u.access_required.access_required;
+        return 0;
+    }
+    break;
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (10 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 22:39   ` Ian Campbell
  2014-09-15 14:02 ` [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

The function domain_get_maximum_gpfn returns the maximum gpfn ever
mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
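
For illustration only, a hedged sketch of how a caller might consume the
new return value to bound-check a guest frame number (the check and the
names are assumptions, not code from this series):

    /* Hypothetical bound check: reject a gpfn beyond anything the guest
     * has ever had mapped. */
    if ( gpfn > domain_get_maximum_gpfn(d) )
        return -EINVAL;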

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..e4a1e5e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -954,7 +954,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    return d->arch.p2m.max_mapped_gfn;
 }
 
 void share_xen_page_with_guest(struct page_info *page,
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (11 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 22:53   ` Ian Campbell
  2014-09-18 18:54   ` Ian Campbell
  2014-09-15 14:02 ` [PATCH for-4.5 v6 14/17] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
                   ` (3 subsequent siblings)
  16 siblings, 2 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

This patch enables storing, setting, checking and delivering LPAE R/W
mem_events. As the LPAE PTEs lack enough available software-programmable
bits, we store the permissions in a radix tree, where the key is the pfn
of a 4k page. Only settings other than p2m_access_rwx are saved in the
radix tree.
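
For context, a listener drives this machinery from the tools side; a
hedged sketch, assuming the 4.5-era libxc prototypes for
xc_set_mem_access() and xc_get_mem_access() (see the xen-access changes
at the end of this series for the real client):

    #include <xenctrl.h>

    /* Hypothetical helper: make a pfn range read/execute-only so that
     * guest writes to it generate mem_events. */
    static int restrict_range(xc_interface *xch, domid_t domid,
                              uint64_t first_pfn, uint32_t nr)
    {
        xenmem_access_t access;
        int rc = xc_set_mem_access(xch, domid, XENMEM_access_rx,
                                   first_pfn, nr);
        if ( rc )
            return rc;

        /* A pfn with no setting stored in the radix tree reads back as
         * rwx (see p2m_get_mem_access below). */
        return xc_get_mem_access(xch, domid, first_pfn, &access);
    }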

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v6: - Add helper function p2m_shatter_page.
    - Only allocate 4k pages when mem_access is in use.
    - If no setting was found in radix tree but PTE exists,
      return rwx as permission.
    - Move the inclusion of various headers into this patch.
    - Make npfec a const.
v5: - Move p2m_set_entry's logic into apply_one_level via
      a new p2m_op, MEMACCESS.
v4: - Add p2m_mem_access_radix_set function to be called
      when inserting new PTE's and when updating existing entries.
    - Switch p2m_mem_access_check to return bool_t.
    - Use new struct npfec to pass violation info.
v3: - Add new function for updating the PTE entries, p2m_set_entry.
    - Use the new struct npfec to pass violation information.
    - Implement n2rwx, rx2rw and listener required routines.
v2: - Patch has been split to ease the review process.
    - Add definitions of data abort data fault status codes (enum dabt_dfsc)
      and only call p2m_mem_access_check for traps caused by permission violations.
    - Only call p2m_write_pte in p2m_lookup if the PTE permission actually changed.
    - Properly save settings in the Radix tree and pause the VCPU with
      mem_event_vcpu_pause.
---
 xen/arch/arm/p2m.c              | 494 ++++++++++++++++++++++++++++++++++++----
 xen/arch/arm/traps.c            |  28 ++-
 xen/include/asm-arm/p2m.h       |  18 ++
 xen/include/asm-arm/processor.h |  30 +++
 4 files changed, 520 insertions(+), 50 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 627da2f..ce0982c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -5,6 +5,9 @@
 #include <xen/errno.h>
 #include <xen/domain_page.h>
 #include <xen/bitops.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <public/mem_event.h>
 #include <asm/flushtlb.h>
 #include <asm/gic.h>
 #include <asm/event.h>
@@ -149,6 +152,74 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr)
     return __map_domain_page(page);
 }
 
+static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
+{
+    /* First apply type permissions */
+    switch ( t )
+    {
+    case p2m_ram_rw:
+        e->p2m.xn = 0;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_ram_ro:
+        e->p2m.xn = 0;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_iommu_map_rw:
+    case p2m_map_foreign:
+    case p2m_grant_map_rw:
+    case p2m_mmio_direct:
+        e->p2m.xn = 1;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_iommu_map_ro:
+    case p2m_grant_map_ro:
+    case p2m_invalid:
+        e->p2m.xn = 1;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_max_real_type:
+        BUG();
+        break;
+    }
+
+    /* Then restrict with access permissions */
+    switch ( a )
+    {
+    case p2m_access_n:
+        e->p2m.read = e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_r:
+        e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_x:
+        e->p2m.write = 0;
+        e->p2m.read = 0;
+        break;
+    case p2m_access_rx:
+        e->p2m.write = 0;
+        break;
+    case p2m_access_w:
+        e->p2m.read = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_rw:
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_wx:
+        e->p2m.read = 0;
+        break;
+    case p2m_access_rwx:
+        break;
+    }
+}
+
 /*
  * Lookup the MFN corresponding to a domain's PFN.
  *
@@ -259,37 +330,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
         break;
     }
 
-    switch (t)
-    {
-    case p2m_ram_rw:
-        e.p2m.xn = 0;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_ram_ro:
-        e.p2m.xn = 0;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_iommu_map_rw:
-    case p2m_map_foreign:
-    case p2m_grant_map_rw:
-    case p2m_mmio_direct:
-        e.p2m.xn = 1;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_iommu_map_ro:
-    case p2m_grant_map_ro:
-    case p2m_invalid:
-        e.p2m.xn = 1;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_max_real_type:
-        BUG();
-        break;
-    }
+    p2m_set_permission(&e, t, a);
 
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
@@ -381,6 +422,7 @@ enum p2m_operation {
     REMOVE,
     RELINQUISH,
     CACHEFLUSH,
+    MEMACCESS,
 };
 
 /* Put any references on the single 4K page referenced by pte.  TODO:
@@ -441,6 +483,53 @@ static bool_t is_mapping_aligned(const paddr_t start_gpaddr,
     return true;
 }
 
+static long p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
+                                     p2m_access_t a)
+{
+    long rc;
+
+    if ( p2m_access_rwx == a )
+    {
+        radix_tree_delete(&p2m->mem_access_settings, pfn);
+        return 0;
+    }
+
+    rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
+                           radix_tree_int_to_ptr(a));
+
+    if ( -EEXIST == rc )
+    {
+        /* If a setting existed already, change it to the new one */
+        radix_tree_replace_slot(
+            radix_tree_lookup_slot(
+                &p2m->mem_access_settings, pfn),
+            radix_tree_int_to_ptr(a));
+        rc = 0;
+    }
+
+    return rc;
+}
+
+static int p2m_shatter_page(struct domain *d,
+                            lpae_t *entry,
+                            unsigned int level,
+                            paddr_t level_shift,
+                            bool_t flush_cache)
+{
+    int rc = p2m_create_table(d, entry,
+                              level_shift - PAGE_SHIFT, flush_cache);
+
+    if ( !rc )
+    {
+        struct p2m_domain *p2m = &d->arch.p2m;
+        p2m->stats.shattered[level]++;
+        p2m->stats.mappings[level]--;
+        p2m->stats.mappings[level+1] += LPAE_ENTRIES;
+    }
+
+    return rc;
+}
+
 #define P2M_ONE_DESCEND        0
 #define P2M_ONE_PROGRESS_NOP   0x1
 #define P2M_ONE_PROGRESS       0x10
@@ -494,13 +583,31 @@ static int apply_one_level(struct domain *d,
         if ( p2m_valid(orig_pte) )
             return P2M_ONE_DESCEND;
 
-        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
+        if ( level < 3 && p2m_access_rwx != a )
+        {
+            /* We create only 4k pages when mem_access is in use. */
+        }
+        else if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
         {
             struct page_info *page;
 
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
+                if ( 3 == level )
+                {
+                    rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+                    if ( rc < 0 )
+                    {
+                        free_domheap_page(page);
+                        return rc;
+                    }
+                }
+                else
+                {
+                    a = p2m_access_rwx;
+                }
+
                 pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
@@ -521,8 +628,8 @@ static int apply_one_level(struct domain *d,
         /*
          * If we get here then we failed to allocate a sufficiently
          * large contiguous region for this level (which can't be
-         * L3). Create a page table and continue to descend so we try
-         * smaller allocations.
+         * L3) or mem_access is in use. Create a page table and
+         * continue to descend so we try smaller allocations.
          */
         rc = p2m_create_table(d, entry, 0, flush_cache);
         if ( rc < 0 )
@@ -535,6 +642,17 @@ static int apply_one_level(struct domain *d,
            /* We do not handle replacing an existing table with a superpage */
              (level == 3 || !p2m_table(orig_pte)) )
         {
+            if ( 3 == level )
+            {
+                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+                if ( rc < 0 )
+                    return rc;
+            }
+            else
+            {
+                a = p2m_access_rwx;
+            }
+
             /* New mapping is superpage aligned, make it */
             pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
@@ -585,14 +703,10 @@ static int apply_one_level(struct domain *d,
             if ( p2m_mapping(orig_pte) )
             {
                 *flush = true;
-                rc = p2m_create_table(d, entry,
-                                      level_shift - PAGE_SHIFT, flush_cache);
+                rc = p2m_shatter_page(d, entry, level, level_shift,
+                                      flush_cache);
                 if ( rc < 0 )
                     return rc;
-
-                p2m->stats.shattered[level]++;
-                p2m->stats.mappings[level]--;
-                p2m->stats.mappings[level+1] += LPAE_ENTRIES;
             } /* else: an existing table mapping -> descend */
 
             BUG_ON(!p2m_table(*entry));
@@ -627,15 +741,11 @@ static int apply_one_level(struct domain *d,
                  * and descend.
                  */
                 *flush = true;
-                rc = p2m_create_table(d, entry,
-                                      level_shift - PAGE_SHIFT, flush_cache);
+                rc = p2m_shatter_page(d, entry, level, level_shift, flush_cache);
+
                 if ( rc < 0 )
                     return rc;
 
-                p2m->stats.shattered[level]++;
-                p2m->stats.mappings[level]--;
-                p2m->stats.mappings[level+1] += LPAE_ENTRIES;
-
                 return P2M_ONE_DESCEND;
             }
         }
@@ -660,6 +770,7 @@ static int apply_one_level(struct domain *d,
 
         memset(&pte, 0x00, sizeof(pte));
         p2m_write_pte(entry, pte, flush_cache);
+        radix_tree_delete(&p2m->mem_access_settings, paddr_to_pfn(*addr));
 
         *addr += level_size;
         *maddr += level_size;
@@ -704,6 +815,49 @@ static int apply_one_level(struct domain *d,
             *addr += PAGE_SIZE;
             return P2M_ONE_PROGRESS_NOP;
         }
+
+    case MEMACCESS:
+        if ( level < 3 )
+        {
+            if ( !p2m_valid(orig_pte) )
+            {
+                *addr += level_size;
+                return P2M_ONE_PROGRESS_NOP;
+            }
+
+            /* Shatter large pages as we descend */
+            if ( p2m_mapping(orig_pte) )
+            {
+                rc = p2m_shatter_page(d, entry, level, level_shift, flush_cache);
+
+                if ( rc < 0 )
+                    return rc;
+            } /* else: an existing table mapping -> descend */
+
+            return P2M_ONE_DESCEND;
+        }
+        else
+        {
+            pte = orig_pte;
+
+            if ( !p2m_table(pte) )
+                pte.bits = 0;
+
+            if ( p2m_valid(pte) )
+            {
+                ASSERT(pte.p2m.type != p2m_invalid);
+
+                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+                if ( rc < 0 )
+                    return rc;
+
+                p2m_set_permission(&pte, pte.p2m.type, a);
+                p2m_write_pte(entry, pte, flush_cache);
+            }
+
+            *addr += level_size;
+            return P2M_ONE_PROGRESS;
+        }
     }
 
     BUG(); /* Should never get here */
@@ -726,7 +880,9 @@ static int apply_p2m_changes(struct domain *d,
     unsigned int level = 0;
     unsigned long cur_first_page = ~0,
                   cur_first_offset = ~0,
-                  cur_second_offset = ~0;
+                  cur_second_offset = ~0,
+                  start_gpfn = paddr_to_pfn(start_gpaddr),
+                  end_gpfn = paddr_to_pfn(end_gpaddr);
     unsigned long count = 0;
     bool_t flush = false;
     bool_t flush_pt;
@@ -761,6 +917,20 @@ static int apply_p2m_changes(struct domain *d,
             count = 0;
         }
 
+        /* Preempt setting mem_access permissions as required by XSA-89,
+         * unless this is the last iteration.
+         */
+        if ( op == MEMACCESS && count )
+        {
+            int progress = paddr_to_pfn(addr) - start_gpfn + 1;
+            if ( (end_gpfn - start_gpfn) > progress && !(progress & mask)
+                 && hypercall_preempt_check() )
+            {
+                rc = progress;
+                goto out;
+            }
+        }
+
         if ( cur_first_page != p2m_first_level_index(addr) )
         {
             if ( first ) unmap_domain_page(first);
@@ -1159,6 +1329,236 @@ err:
     return page;
 }
 
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
+{
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+    mem_event_request_t *req = NULL;
+    xenmem_access_t xma;
+    bool_t violation;
+    int rc;
+
+    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
+    if ( rc )
+    {
+        /* No setting was found, reinject */
+        return 1;
+    }
+    else
+    {
+        /* First, handle rx2rw and n2rwx conversion automatically. */
+        if ( npfec.write_access && xma == XENMEM_access_rx2rw )
+        {
+            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                    0, ~0, XENMEM_access_rw);
+            return 0;
+        }
+        else if ( xma == XENMEM_access_n2rwx )
+        {
+            rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                    0, ~0, XENMEM_access_rwx);
+        }
+    }
+
+    /* Otherwise, check if there is a memory event listener, and send the message along */
+    if ( !mem_event_check_ring(&v->domain->mem_event->access) )
+    {
+        /* No listener */
+        if ( p2m->access_required )
+        {
+            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
+                                  "no mem_event listener VCPU %d, dom %d\n",
+                                  v->vcpu_id, v->domain->domain_id);
+            domain_crash(v->domain);
+        }
+        else
+        {
+            /* n2rwx was already handled */
+            if ( xma != XENMEM_access_n2rwx )
+            {
+                /* A listener is not required, so clear the access
+                 * restrictions. */
+                rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                        0, ~0, XENMEM_access_rwx);
+            }
+        }
+
+        /* No need to reinject */
+        return 0;
+    }
+
+    switch ( xma )
+    {
+    default:
+    case XENMEM_access_n:
+        violation = npfec.read_access || npfec.write_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_r:
+        violation = npfec.write_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_w:
+        violation = npfec.read_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_x:
+        violation = npfec.read_access || npfec.write_access;
+        break;
+    case XENMEM_access_rx:
+        violation = npfec.write_access;
+        break;
+    case XENMEM_access_wx:
+        violation = npfec.read_access;
+        break;
+    case XENMEM_access_rw:
+        violation = npfec.insn_fetch;
+        break;
+    case XENMEM_access_rwx:
+        violation = 0;
+        break;
+    }
+
+    if ( !violation )
+        return 1;
+
+    req = xzalloc(mem_event_request_t);
+    if ( req )
+    {
+        req->reason = MEM_EVENT_REASON_VIOLATION;
+        if ( xma != XENMEM_access_n2rwx )
+            req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        req->gfn = gpa >> PAGE_SHIFT;
+        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
+        req->gla = gla;
+        req->gla_valid = npfec.gla_valid;
+        req->access_r = npfec.read_access;
+        req->access_w = npfec.write_access;
+        req->access_x = npfec.insn_fetch;
+        if ( npfec_kind_in_gpt == npfec.kind )
+            req->fault_in_gpt = 1;
+        if ( npfec_kind_with_gla == npfec.kind )
+            req->fault_with_gla = 1;
+        req->vcpu_id = v->vcpu_id;
+
+        mem_access_send_req(v->domain, req);
+        xfree(req);
+    }
+
+    /* Pause the current VCPU */
+    if ( xma != XENMEM_access_n2rwx )
+        mem_event_vcpu_pause(v);
+
+    return 0;
+}
+
+/* Set access type for a region of pfns.
+ * If pfn == -1ul, sets the default access type. */
+long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_access_t a;
+    long rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+#undef ACCESS
+    };
+
+    switch ( access )
+    {
+    case 0 ... ARRAY_SIZE(memaccess) - 1:
+        a = memaccess[access];
+        break;
+    case XENMEM_access_default:
+        a = p2m->default_access;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    /* If this is a request to set the default access. */
+    if ( pfn == ~0ul )
+    {
+        p2m->default_access = a;
+        return 0;
+    }
+
+    rc = apply_p2m_changes(d, MEMACCESS,
+                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
+                           0, MATTR_MEM, mask, 0, a);
+
+    if ( rc < 0 )
+        return rc;
+    else if ( rc > 0 )
+        return start + rc;
+
+    flush_tlb_domain(d);
+    return 0;
+}
+
+int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
+                       xenmem_access_t *access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    void *i;
+    unsigned int index;
+
+    static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = XENMEM_access_##ac
+            ACCESS(n),
+            ACCESS(r),
+            ACCESS(w),
+            ACCESS(rw),
+            ACCESS(x),
+            ACCESS(rx),
+            ACCESS(wx),
+            ACCESS(rwx),
+#undef ACCESS
+    };
+
+    /* If this is a request to get the default access. */
+    if ( gpfn == ~0UL )
+    {
+        *access = memaccess[p2m->default_access];
+        return 0;
+    }
+
+    spin_lock(&p2m->lock);
+    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
+    spin_unlock(&p2m->lock);
+
+    if ( !i )
+    {
+        /* No setting was found in the radix tree. Check if the
+         * entry exists in the page tables. */
+        paddr_t maddr = p2m_lookup(d, gpfn << PAGE_SHIFT, NULL);
+        if ( INVALID_PADDR == maddr )
+        {
+            return -ESRCH;
+        }
+
+        /* If the entry exists then it's rwx. */
+        *access = XENMEM_access_rwx;
+    }
+    else
+    {
+        index = radix_tree_ptr_to_int(i);
+
+        if ( index >= ARRAY_SIZE(memaccess) )
+            return -ERANGE;
+
+        *access = memaccess[index];
+    }
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 019991f..5bfcdf3 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1852,11 +1852,33 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     info.gva = READ_SYSREG64(FAR_EL2);
 #endif
 
-    if (dabt.s1ptw)
+    rc = gva_to_ipa(info.gva, &info.gpa);
+    if ( -EFAULT == rc )
         goto bad_data_abort;
 
-    rc = gva_to_ipa(info.gva, &info.gpa);
-    if ( rc == -EFAULT )
+    switch ( dabt.dfsc )
+    {
+    case DABT_DFSC_PERMISSION_1:
+    case DABT_DFSC_PERMISSION_2:
+    case DABT_DFSC_PERMISSION_3:
+    {
+        const struct npfec npfec = {
+            .read_access = 1,
+            .write_access = dabt.write,
+            .gla_valid = 1,
+            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+    }
+
+    if ( dabt.s1ptw )
         goto bad_data_abort;
 
     /* XXX: Decode the instruction if ISS is not valid */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 89e653c..54a7950 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -3,6 +3,7 @@
 
 #include <xen/mm.h>
 #include <xen/radix-tree.h>
+#include <public/memory.h>
 
 #include <xen/p2m-common.h>
 
@@ -248,6 +249,23 @@ static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
     return 1;
 }
 
+/* Send a mem_event based on the access (gla is -1ull if not available). The
+ * boolean return value indicates whether a trap needs to be injected. */
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
+
+/* Resumes the running of the VCPU, restarting the last instruction */
+void p2m_mem_access_resume(struct domain *d);
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access);
+
+/* Get access type for a pfn
+ * If pfn == -1ul, gets the default access type */
+int p2m_get_mem_access(struct domain *d, unsigned long pfn,
+                       xenmem_access_t *access);
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 0cc5b6d..b844f1d 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -262,6 +262,36 @@ enum dabt_size {
     DABT_DOUBLE_WORD = 3,
 };
 
+/* Data abort data fault status codes */
+enum dabt_dfsc {
+    DABT_DFSC_ADDR_SIZE_0       = 0b000000,
+    DABT_DFSC_ADDR_SIZE_1       = 0b000001,
+    DABT_DFSC_ADDR_SIZE_2       = 0b000010,
+    DABT_DFSC_ADDR_SIZE_3       = 0b000011,
+    DABT_DFSC_TRANSLATION_0     = 0b000100,
+    DABT_DFSC_TRANSLATION_1     = 0b000101,
+    DABT_DFSC_TRANSLATION_2     = 0b000110,
+    DABT_DFSC_TRANSLATION_3     = 0b000111,
+    DABT_DFSC_ACCESS_1          = 0b001001,
+    DABT_DFSC_ACCESS_2          = 0b001010,
+    DABT_DFSC_ACCESS_3          = 0b001011,
+    DABT_DFSC_PERMISSION_1      = 0b001101,
+    DABT_DFSC_PERMISSION_2      = 0b001110,
+    DABT_DFSC_PERMISSION_3      = 0b001111,
+    DABT_DFSC_SYNC_EXT          = 0b010000,
+    DABT_DFSC_SYNC_PARITY       = 0b011000,
+    DABT_DFSC_SYNC_EXT_TTW_0    = 0b010100,
+    DABT_DFSC_SYNC_EXT_TTW_1    = 0b010101,
+    DABT_DFSC_SYNC_EXT_TTW_2    = 0b010110,
+    DABT_DFSC_SYNC_EXT_TTW_3    = 0b010111,
+    DABT_DFSC_SYNC_PARITY_TTW_0 = 0b011100,
+    DABT_DFSC_SYNC_PARITY_TTW_1 = 0b011101,
+    DABT_DFSC_SYNC_PARITY_TTW_2 = 0b011110,
+    DABT_DFSC_SYNC_PARITY_TTW_3 = 0b011111,
+    DABT_DFSC_ALIGNMENT         = 0b100001,
+    DABT_DFSC_TLB_CONFLICT      = 0b110000,
+};
+
 union hsr {
     uint32_t bits;
     struct {
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 14/17] xen/arm: Instruction prefetch abort (X) mem_event handling
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (12 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-18 18:59   ` Ian Campbell
  2014-09-15 14:02 ` [PATCH for-4.5 v6 15/17] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Add the missing structure definition for iabt and update the trap handling
mechanism to inject the exception only if the mem_access checker decides
to do so.
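
As an aside (not part of the patch): the three permission-fault encodings
are consecutive, so the switch below could equally be written as a range
check; iabt_is_permission_fault() is a hypothetical helper:

    /* Sketch: IABT_IFSC_PERMISSION_1..3 are 0b001101..0b001111, so a range
     * check matches exactly the cases handled in the switch. */
    static inline bool_t iabt_is_permission_fault(unsigned long ifsc)
    {
        return ifsc >= IABT_IFSC_PERMISSION_1 &&
               ifsc <= IABT_IFSC_PERMISSION_3;
    }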

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v6: - Make npfec a const.
v4: - Don't mark instruction fetch violation as read violation.
    - Use new struct npfec to pass violation info.
v2: - Add definition for instruction abort instruction fetch status codes
       (enum iabt_ifsc) and only call p2m_mem_access_check for traps triggered
       for permission violations.
---
 xen/arch/arm/traps.c            | 39 ++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/processor.h | 40 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 5bfcdf3..71c087f 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1828,7 +1828,44 @@ done:
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
-    register_t addr = READ_SYSREG(FAR_EL2);
+    struct hsr_iabt iabt = hsr.iabt;
+    int rc;
+    register_t addr;
+    vaddr_t gva;
+    paddr_t gpa;
+
+#ifdef CONFIG_ARM_32
+    gva = READ_CP32(HIFAR);
+#else
+    gva = READ_SYSREG64(FAR_EL2);
+#endif
+
+    rc = gva_to_ipa(gva, &gpa);
+    if ( -EFAULT == rc )
+        return;
+
+    switch ( iabt.ifsc )
+    {
+    case IABT_IFSC_PERMISSION_1:
+    case IABT_IFSC_PERMISSION_2:
+    case IABT_IFSC_PERMISSION_3:
+    {
+        const struct npfec npfec = {
+            .insn_fetch = 1,
+            .gla_valid = 1,
+            .kind = iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(gpa, gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+    }
+
+    addr = READ_SYSREG(FAR_EL2);
     inject_iabt_exception(regs, addr, hsr.len);
 }
 
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index b844f1d..044de12 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -292,6 +292,36 @@ enum dabt_dfsc {
     DABT_DFSC_TLB_CONFLICT      = 0b110000,
 };
 
+/* Instruction abort instruction fault status codes */
+enum iabt_ifsc {
+    IABT_IFSC_ADDR_SIZE_0       = 0b000000,
+    IABT_IFSC_ADDR_SIZE_1       = 0b000001,
+    IABT_IFSC_ADDR_SIZE_2       = 0b000010,
+    IABT_IFSC_ADDR_SIZE_3       = 0b000011,
+    IABT_IFSC_TRANSLATION_0     = 0b000100,
+    IABT_IFSC_TRANSLATION_1     = 0b000101,
+    IABT_IFSC_TRANSLATION_2     = 0b000110,
+    IABT_IFSC_TRANSLATION_3     = 0b000111,
+    IABT_IFSC_ACCESS_1          = 0b001001,
+    IABT_IFSC_ACCESS_2          = 0b001010,
+    IABT_IFSC_ACCESS_3          = 0b001011,
+    IABT_IFSC_PERMISSION_1      = 0b001101,
+    IABT_IFSC_PERMISSION_2      = 0b001110,
+    IABT_IFSC_PERMISSION_3      = 0b001111,
+    IABT_IFSC_SYNC_EXT          = 0b010000,
+    IABT_IFSC_SYNC_PARITY       = 0b011000,
+    IABT_IFSC_SYNC_EXT_TTW_0    = 0b010100,
+    IABT_IFSC_SYNC_EXT_TTW_1    = 0b010101,
+    IABT_IFSC_SYNC_EXT_TTW_2    = 0b010110,
+    IABT_IFSC_SYNC_EXT_TTW_3    = 0b010111,
+    IABT_IFSC_SYNC_PARITY_TTW_0 = 0b011100,
+    IABT_IFSC_SYNC_PARITY_TTW_1 = 0b011101,
+    IABT_IFSC_SYNC_PARITY_TTW_2 = 0b011110,
+    IABT_IFSC_SYNC_PARITY_TTW_3 = 0b011111,
+    IABT_IFSC_ALIGNMENT         = 0b100001,
+    IABT_IFSC_TLB_CONFLICT      = 0b110000,
+};
+
 union hsr {
     uint32_t bits;
     struct {
@@ -371,10 +401,18 @@ union hsr {
     } sysreg; /* HSR_EC_SYSREG */
 #endif
 
+    struct hsr_iabt {
+        unsigned long ifsc:6;   /* Instruction fault status code */
+        unsigned long res0:1;
+        unsigned long s1ptw:1;  /* Fault during a stage 1 translation table walk */
+        unsigned long res1:1;
+        unsigned long ea:1;     /* External abort type */
+    } iabt; /* HSR_EC_INSTR_ABORT_* */
+
     struct hsr_dabt {
         unsigned long dfsc:6;  /* Data Fault Status Code */
         unsigned long write:1; /* Write / not Read */
-        unsigned long s1ptw:1; /* */
+        unsigned long s1ptw:1; /* Fault during a stage 1 translation table walk */
         unsigned long cache:1; /* Cache Maintenance */
         unsigned long eat:1;   /* External Abort Type */
 #ifdef CONFIG_ARM_32
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH for-4.5 v6 15/17] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (13 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 14/17] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 16/17] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access " Tamas K Lengyel
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

This patch sets up the infrastructure needed to support mem_access and
mem_event on ARM and turns on their compilation. We define the required
XSM functions.
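
Condensed, the opt-in pattern wired through xsm.h, dummy.c and flask looks
as follows (a sketch extracted from the diff below, shown only to make the
shape of the change clear):

    /* Sketch: the dispatch wrapper is compiled only when the architecture
     * sets HAS_MEM_ACCESS := y in its Rules.mk (as ARM does below); the
     * matching slot in struct xsm_operations is guarded the same way. */
    #ifdef HAS_MEM_ACCESS
    static inline int xsm_mem_event_op(xsm_default_t def, struct domain *d,
                                       int op)
    {
        return xsm_ops->mem_event_op(d, op);
    }
    #endif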

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
v3: Wrap mem_event related functions in XSM into #ifdef HAS_MEM_ACCESS
       blocks.
    Update XSM hooks in flask to properly wire it up on ARM.

v2: Add CONFIG_MEM_PAGING and CONFIG_MEM_SHARING definitions and
       use them instead of CONFIG_X86.
    Split domctl copy-back and p2m type definitions into separate
       patches and move this patch to the end of the series.
---
 xen/arch/arm/Rules.mk        |  1 +
 xen/common/mem_event.c       | 19 +++++++++++++++++++
 xen/include/asm-x86/config.h |  3 +++
 xen/include/xsm/dummy.h      | 26 ++++++++++++++------------
 xen/include/xsm/xsm.h        | 29 +++++++++++++++++------------
 xen/xsm/dummy.c              |  7 +++++--
 xen/xsm/flask/hooks.c        | 33 ++++++++++++++++++++-------------
 7 files changed, 79 insertions(+), 39 deletions(-)

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 8658176..f6781b5 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -10,6 +10,7 @@ HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
 HAS_PASSTHROUGH := y
+HAS_MEM_ACCESS := y
 
 CFLAGS += -I$(BASEDIR)/include
 
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 4d0a27b..59edf07 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -27,8 +27,15 @@
 #include <xen/mem_event.h>
 #include <xen/mem_access.h>
 #include <asm/p2m.h>
+
+#ifdef CONFIG_MEM_PAGING
 #include <asm/mem_paging.h>
+#endif
+
+#ifdef CONFIG_MEM_SHARING
 #include <asm/mem_sharing.h>
+#endif
+
 #include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
@@ -423,12 +430,14 @@ int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
         return mem_event_grab_slot(med, (current->domain != d));
 }
 
+#ifdef CONFIG_MEM_PAGING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
         p2m_mem_paging_resume(v->domain);
 }
+#endif
 
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_access_notification(struct vcpu *v, unsigned int port)
@@ -437,15 +446,20 @@ static void mem_access_notification(struct vcpu *v, unsigned int port)
         mem_access_resume(v->domain);
 }
 
+#ifdef CONFIG_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->mem_event->share.ring_page != NULL) )
         mem_sharing_sharing_resume(v->domain);
 }
+#endif
 
 int do_mem_event_op(int op, uint32_t domain, void *arg)
 {
+#if !defined(CONFIG_MEM_PAGING) && !defined(CONFIG_MEM_SHARING)
+    return -ENOSYS;
+#else
     int ret;
     struct domain *d;
 
@@ -472,6 +486,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
  out:
     rcu_unlock_domain(d);
     return ret;
+#endif
 }
 
 /* Clean up on domain destruction */
@@ -532,6 +547,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
     switch ( mec->mode )
     {
+#ifdef CONFIG_MEM_PAGING
     case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
     {
         struct mem_event_domain *med = &d->mem_event->paging;
@@ -582,6 +598,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
     }
     break;
+#endif
 
     case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
     {
@@ -616,6 +633,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     }
     break;
 
+#ifdef CONFIG_MEM_SHARING
     case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
     {
         struct mem_event_domain *med = &d->mem_event->share;
@@ -654,6 +672,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
     }
     break;
+#endif
 
     default:
         rc = -ENOSYS;
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 8a864ce..f8ef043 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -57,6 +57,9 @@
 #define CONFIG_LATE_HWDOM 1
 #endif
 
+#define CONFIG_MEM_SHARING 1
+#define CONFIG_MEM_PAGING 1
+
 #define HZ 100
 
 #define OPT_CONSOLE_STR "vga"
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index df55e70..f20e89c 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -513,6 +513,20 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
+static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
+
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
 {
@@ -556,18 +570,6 @@ static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
-{
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 6c1c079..4ce089f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -141,6 +141,11 @@ struct xsm_operations {
     int (*hvm_param_nested) (struct domain *d);
     int (*get_vnumainfo) (struct domain *d);
 
+#ifdef HAS_MEM_ACCESS
+    int (*mem_event_control) (struct domain *d, int mode, int op);
+    int (*mem_event_op) (struct domain *d, int op);
+#endif
+
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -149,8 +154,6 @@ struct xsm_operations {
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*hvm_ioreq_server) (struct domain *d, int op);
-    int (*mem_event_control) (struct domain *d, int mode, int op);
-    int (*mem_event_op) (struct domain *d, int op);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -540,6 +543,18 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
     return xsm_ops->get_vnumainfo(d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+{
+    return xsm_ops->mem_event_control(d, mode, op);
+}
+
+static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+{
+    return xsm_ops->mem_event_op(d, op);
+}
+#endif
+
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
 {
@@ -576,16 +591,6 @@ static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int
     return xsm_ops->hvm_ioreq_server(d, op);
 }
 
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
-{
-    return xsm_ops->mem_event_control(d, mode, op);
-}
-
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
-{
-    return xsm_ops->mem_event_op(d, op);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 0826a8b..8eb3050 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -118,6 +118,11 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
+#ifdef HAS_MEM_ACCESS
+    set_to_dummy_if_null(ops, mem_event_control);
+    set_to_dummy_if_null(ops, mem_event_op);
+#endif
+
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
@@ -126,8 +131,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, hvm_ioreq_server);
-    set_to_dummy_if_null(ops, mem_event_control);
-    set_to_dummy_if_null(ops, mem_event_op);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, platform_op);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index df05566..8de5e49 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -577,6 +577,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_iomem_permission:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
+#ifdef HAS_MEM_ACCESS
+    case XEN_DOMCTL_mem_event_op:
+#endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
     case XEN_DOMCTL_shadow_op:
@@ -584,7 +587,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_bind_pt_irq:
     case XEN_DOMCTL_unbind_pt_irq:
     case XEN_DOMCTL_ioport_mapping:
-    case XEN_DOMCTL_mem_event_op:
     /* These have individual XSM hooks (drivers/passthrough/iommu.c) */
     case XEN_DOMCTL_get_device_group:
     case XEN_DOMCTL_test_assign_device:
@@ -1189,6 +1191,18 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
+#ifdef HAS_MEM_ACCESS
+static int flask_mem_event_control(struct domain *d, int mode, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_op(struct domain *d, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+#endif /* HAS_MEM_ACCESS */
+
 #ifdef CONFIG_X86
 static int flask_do_mca(void)
 {
@@ -1299,16 +1313,6 @@ static int flask_hvm_ioreq_server(struct domain *d, int op)
     return current_has_perm(d, SECCLASS_HVM, HVM__HVMCTL);
 }
 
-static int flask_mem_event_control(struct domain *d, int mode, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
-static int flask_mem_event_op(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1577,6 +1581,11 @@ static struct xsm_operations flask_ops = {
     .deassign_device = flask_deassign_device,
 #endif
 
+#ifdef HAS_MEM_ACCESS
+    .mem_event_control = flask_mem_event_control,
+    .mem_event_op = flask_mem_event_op,
+#endif
+
 #ifdef CONFIG_X86
     .do_mca = flask_do_mca,
     .shadow_control = flask_shadow_control,
@@ -1585,8 +1594,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
     .hvm_inject_msi = flask_hvm_inject_msi,
     .hvm_ioreq_server = flask_hvm_ioreq_server,
-    .mem_event_control = flask_mem_event_control,
-    .mem_event_op = flask_mem_event_op,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .platform_op = flask_platform_op,
-- 
2.1.0
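
For context: the hooks being moved here are called from the common mem_event
code through the inline wrappers added above. A minimal sketch of the call
pattern, assuming a common handler shaped like this series' mem_event_domctl
(the body below is illustrative only, not the actual code):

    /* Sketch: the XSM check gates the domctl before any mode/op dispatch.
     * xsm_mem_event_control resolves to the dummy hook (an XSM_PRIV check)
     * or to flask_mem_event_control, depending on how Xen is built. */
    int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
                         XEN_GUEST_HANDLE_PARAM(void) u_domctl)
    {
        int rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);

        if ( rc )
            return rc;

        /* ... per-mode enable/disable of the event ring follows ... */
        return 0;
    }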

* [PATCH for-4.5 v6 16/17] tools/libxc: Allocate magic page for mem access on ARM
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (14 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 15/17] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-15 14:02 ` [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access " Tamas K Lengyel
  16 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/libxc/xc_dom_arm.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index 9b31b1f..13e881e 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -26,9 +26,10 @@
 #include "xg_private.h"
 #include "xc_dom.h"
 
-#define NR_MAGIC_PAGES 2
+#define NR_MAGIC_PAGES 3
 #define CONSOLE_PFN_OFFSET 0
 #define XENSTORE_PFN_OFFSET 1
+#define MEMACCESS_PFN_OFFSET 2
 
 #define LPAE_SHIFT 9
 
@@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
 
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
+    xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
             dom->console_pfn);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
             dom->xenstore_pfn);
+    xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
+            base + MEMACCESS_PFN_OFFSET);
     /* allocated by toolstack */
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
             dom->console_evtchn);
-- 
2.1.0
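
For context, the consumer side pairs with this allocation by reading the
param back and mapping the ring page. A minimal sketch, assuming the libxc
calls of this era (xc_hvm_param_get and xc_map_foreign_range; error handling
trimmed for brevity):

    /* Sketch: map the mem_access ring page the toolstack allocated above. */
    uint64_t ring_pfn;
    void *ring_page;

    xc_hvm_param_get(xch, domain_id, HVM_PARAM_ACCESS_RING_PFN, &ring_pfn);
    ring_page = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
                                     PROT_READ | PROT_WRITE, ring_pfn);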

* [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access on ARM
  2014-09-15 14:02 [PATCH for-4.5 v6 00/17] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (15 preceding siblings ...)
  2014-09-15 14:02 ` [PATCH for-4.5 v6 16/17] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
@ 2014-09-15 14:02 ` Tamas K Lengyel
  2014-09-18 19:02   ` Ian Campbell
  16 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-15 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, tim, julien.grall, ian.jackson, stefano.stabellini,
	andres, jbeulich, dgdegra, Tamas K Lengyel

Define the ARM-specific test_and_set_bit function and switch
to using the maximum gpfn as the limit when setting permissions. Also,
move the HAS_MEM_ACCESS definition into config.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v6: - Just use xc_domain_maximum_gpfn to get max_gpfn.
v5: - Use the new information returned by getdomaininfo, max_gpfn, to
      set access permissions. On ARM this will include the potential
      memory hole as well which the hypervisor just loops over.
v4: - Take into account multiple guest ram banks on ARM.
    - Move HAS_MEM_ACCESS definition into config/*.mk and only compile
      xen-access when it is defined.
    - Pass CONFIG_X86/CONFIG_ARM flags during compilation in xen-access
      Makefile.
---
 config/arm32.mk                     |  1 +
 config/arm64.mk                     |  1 +
 config/x86_32.mk                    |  2 +
 config/x86_64.mk                    |  2 +
 tools/tests/xen-access/Makefile     |  9 ++++-
 tools/tests/xen-access/xen-access.c | 79 ++++++++++++++++++++++++-------------
 xen/arch/arm/Rules.mk               |  1 -
 xen/arch/x86/Rules.mk               |  1 -
 8 files changed, 65 insertions(+), 31 deletions(-)

diff --git a/config/arm32.mk b/config/arm32.mk
index aa79d22..4a7c259 100644
--- a/config/arm32.mk
+++ b/config/arm32.mk
@@ -13,6 +13,7 @@ HAS_PL011 := y
 HAS_EXYNOS4210 := y
 HAS_OMAP := y
 HAS_NS16550 := y
+HAS_MEM_ACCESS := y
 
 # Use only if calling $(LD) directly.
 LDFLAGS_DIRECT += -EL
diff --git a/config/arm64.mk b/config/arm64.mk
index 15b57a4..ea6558d 100644
--- a/config/arm64.mk
+++ b/config/arm64.mk
@@ -8,6 +8,7 @@ CFLAGS += #-marm -march= -mcpu= etc
 
 HAS_PL011 := y
 HAS_NS16550 := y
+HAS_MEM_ACCESS := y
 
 # Use only if calling $(LD) directly.
 LDFLAGS_DIRECT += -EL
diff --git a/config/x86_32.mk b/config/x86_32.mk
index 7f76b25..5906e60 100644
--- a/config/x86_32.mk
+++ b/config/x86_32.mk
@@ -6,6 +6,8 @@ CONFIG_HVM := y
 CONFIG_MIGRATE := y
 CONFIG_XCUTILS := y
 
+HAS_MEM_ACCESS := y
+
 CFLAGS += -m32 -march=i686
 
 # Use only if calling $(LD) directly.
diff --git a/config/x86_64.mk b/config/x86_64.mk
index 11104bd..b000c4e 100644
--- a/config/x86_64.mk
+++ b/config/x86_64.mk
@@ -7,6 +7,8 @@ CONFIG_HVM := y
 CONFIG_MIGRATE := y
 CONFIG_XCUTILS := y
 
+HAS_MEM_ACCESS := y
+
 CONFIG_XEN_INSTALL_SUFFIX := .gz
 
 CFLAGS += -m64
diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
index 65eef99..5056972 100644
--- a/tools/tests/xen-access/Makefile
+++ b/tools/tests/xen-access/Makefile
@@ -7,8 +7,13 @@ CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_xeninclude)
 
-TARGETS-y := 
-TARGETS-$(CONFIG_X86) += xen-access
+CFLAGS-y :=
+CFLAGS-$(CONFIG_X86) := -DCONFIG_X86
+CFLAGS-$(CONFIG_ARM) := -DCONFIG_ARM
+CFLAGS += $(CFLAGS-y)
+
+TARGETS-y :=
+TARGETS-$(HAS_MEM_ACCESS) := xen-access
 TARGETS := $(TARGETS-y)
 
 .PHONY: all
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 6cb382d..40a7143 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -41,22 +41,16 @@
 #include <xenctrl.h>
 #include <xen/mem_event.h>
 
-#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
-#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
-#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
-
-/* Spinlock and mem event definitions */
-
-#define SPIN_LOCK_UNLOCKED 0
+#ifdef CONFIG_X86
 
+#define START_PFN 0ULL
 #define ADDR (*(volatile long *) addr)
+
 /**
  * test_and_set_bit - Set a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
  */
 static inline int test_and_set_bit(int nr, volatile void *addr)
 {
@@ -69,6 +63,43 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
     return oldbit;
 }
 
+#elif CONFIG_ARM
+
+#include <xen/arch-arm.h>
+
+#define PAGE_SHIFT              12
+#define START_PFN               (GUEST_RAM0_BASE >> PAGE_SHIFT)
+#define BITS_PER_WORD           32
+#define BIT_MASK(nr)            (1UL << ((nr) % BITS_PER_WORD))
+#define BIT_WORD(nr)            ((nr) / BITS_PER_WORD)
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ */
+static inline int test_and_set_bit(int nr, volatile void *addr)
+{
+        unsigned int mask = BIT_MASK(nr);
+        volatile unsigned int *p =
+                ((volatile unsigned int *)addr) + BIT_WORD(nr);
+        unsigned int old = *p;
+
+        *p = old | mask;
+        return (old & mask) != 0;
+}
+
+#endif
+
+#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
+#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
+#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
+
+/* Spinlock and mem event definitions */
+
+#define SPIN_LOCK_UNLOCKED 0
+
 typedef int spinlock_t;
 
 static inline void spin_lock(spinlock_t *lock)
@@ -108,7 +139,7 @@ typedef struct mem_event {
 typedef struct xenaccess {
     xc_interface *xc_handle;
 
-    xc_domaininfo_t    *domain_info;
+    int max_gpfn;
 
     mem_event_t mem_event;
 } xenaccess_t;
@@ -212,7 +243,6 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     }
     xenaccess->xc_handle = NULL;
 
-    free(xenaccess->domain_info);
     free(xenaccess);
 
     return 0;
@@ -293,23 +323,17 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
                    (mem_event_sring_t *)xenaccess->mem_event.ring_page,
                    XC_PAGE_SIZE);
 
-    /* Get domaininfo */
-    xenaccess->domain_info = malloc(sizeof(xc_domaininfo_t));
-    if ( xenaccess->domain_info == NULL )
-    {
-        ERROR("Error allocating memory for domain info");
-        goto err;
-    }
+    /* Get max_gpfn */
+    xenaccess->max_gpfn = xc_domain_maximum_gpfn(xenaccess->xc_handle,
+                                                 xenaccess->mem_event.domain_id);
 
-    rc = xc_domain_getinfolist(xenaccess->xc_handle, domain_id, 1,
-                               xenaccess->domain_info);
-    if ( rc != 1 )
+    if ( xenaccess->max_gpfn < 0 )
     {
-        ERROR("Error getting domain info");
+        ERROR("Failed to get max gpfn");
         goto err;
     }
 
-    DPRINTF("max_pages = %"PRIx64"\n", xenaccess->domain_info->max_pages);
+    DPRINTF("max_gpfn = %"PRIx32"\n", xenaccess->max_gpfn);
 
     return xenaccess;
 
@@ -492,8 +516,9 @@ int main(int argc, char *argv[])
         goto exit;
     }
 
-    rc = xc_set_mem_access(xch, domain_id, default_access, 0,
-                           xenaccess->domain_info->max_pages);
+    rc = xc_set_mem_access(xch, domain_id, default_access, START_PFN,
+                           (xenaccess->max_gpfn - START_PFN) );
+
     if ( rc < 0 )
     {
         ERROR("Error %d setting all memory to access type %d\n", rc,
@@ -520,8 +545,8 @@ int main(int argc, char *argv[])
 
             /* Unregister for every event */
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
-            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
-                                   xenaccess->domain_info->max_pages);
+            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
+                                   (xenaccess->max_gpfn - START_PFN) );
             rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
 
             shutting_down = 1;
diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index f6781b5..8658176 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -10,7 +10,6 @@ HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
 HAS_PASSTHROUGH := y
-HAS_MEM_ACCESS := y
 
 CFLAGS += -I$(BASEDIR)/include
 
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index bd4e342..576985e 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -12,7 +12,6 @@ HAS_NS16550 := y
 HAS_EHCI := y
 HAS_KEXEC := y
 HAS_GDBSX := y
-HAS_MEM_ACCESS := y
 xenoprof := y
 
 #
-- 
2.1.0
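
As a quick worked example of the ARM START_PFN introduced above, assuming
the usual guest layout of this period (GUEST_RAM0_BASE at 0x40000000; the
value is illustrative):

    /* 0x40000000 >> 12 == 0x40000: the first gpfn of the RAM0 bank, so the
     * xc_set_mem_access() span starts at guest RAM rather than at gpfn 0. */
    #define PAGE_SHIFT  12
    #define START_PFN   (GUEST_RAM0_BASE >> PAGE_SHIFT)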

* Re: [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes
  2014-09-15 14:02 ` [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes Tamas K Lengyel
@ 2014-09-15 22:35   ` Ian Campbell
  2014-09-16  8:49     ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-15 22:35 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> Define p2m_access_t in ARM and add necessary changes for page table
> construction routines to pass the default access information. Also,
> define the Radix tree that will hold access permission settings as
> the PTEs don't have enough software programmable bits available.

So my main concern here is the overhead for non-xenaccess users. I think
it amounts to a few extra fields in the p2m_domain struct which I can
see here and presumably some NULL vs. non-NULL type checks which I guess
we will get to later. The important thing is that the fast paths for the
common case don't get a lot of extra overhead.

WRT the xenaccess performance did you consider any options other than a
radix tree (which seems quite expensive to me)? e.g. perhaps allocating
(only when needed) a second page for each real PT page as a
shadow/extended region? Perhaps pointed to by a field in the real PT
struct page_info. I'm sure there are other possible ideas too.

Ian.

* Re: [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl
  2014-09-15 14:02 ` [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl Tamas K Lengyel
@ 2014-09-15 22:37   ` Ian Campbell
  2014-09-16  8:37     ` Tamas K Lengyel
  2014-09-15 22:38   ` Ian Campbell
  1 sibling, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-15 22:37 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> Reviewed-by: Julien Grall <julien.grall@linaro.org>

This is essentially identical to the x86 code -- could it not have been
made common along with the other stuff you genericised?

* Re: [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl
  2014-09-15 14:02 ` [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl Tamas K Lengyel
  2014-09-15 22:37   ` Ian Campbell
@ 2014-09-15 22:38   ` Ian Campbell
  2014-09-16  8:33     ` Tamas K Lengyel
  1 sibling, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-15 22:38 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> Reviewed-by: Julien Grall <julien.grall@linaro.org>

... also do you not need to adjust the flask stuff? Currently the
handling for this domctl is under an x86 ifdef.

* Re: [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-15 14:02 ` [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
@ 2014-09-15 22:39   ` Ian Campbell
  2014-09-16  8:02     ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-15 22:39 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.

We had a brief discussion about why/how this was useful for xenaccess
and whether it was needed or just an optimisation etc. Could you recap
that here please?
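
For reference, the change under discussion amounts to a one-line accessor;
a sketch based on the field named in the commit message:

    unsigned long domain_get_maximum_gpfn(struct domain *d)
    {
        /* Highest gpfn ever mapped into this guest's p2m. */
        return d->arch.p2m.max_mapped_gfn;
    }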

Ian.

* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-15 14:02 ` [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-09-15 22:53   ` Ian Campbell
       [not found]     ` <CAErYnshu0vJJMxWwu4eo2MZf=q_g2H123p6VUk_4a9f12vYLjg@mail.gmail.com>
  2014-09-18 18:54   ` Ian Campbell
  1 sibling, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-15 22:53 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> This patch enables storing, setting, checking and delivering LPAE R/W mem_events.
> As the LPAE PTEs lack enough available software programmable bits,
> we store the permissions in a Radix tree, where the key is the pfn
> of a 4k page. Only settings other than p2m_access_rwx are saved
> in the Radix tree.

It is here that it is absolutely essential to consider the overhead for
the non-xenaccess use case, and I think it should be written here in the
commit message.

I'm worried because I'm not seeing the obvious "xenaccess is disabled,
skip all this stuff" path I was hoping for.

I think the overhead of p2m modifications and on fault needs separate
consideration.

For cases where xenaccess isn't enabled IMHO we need to be talking about
overhead in the ballpark of a pointer deref, compare, conditional jump.
Certainly not any kind of radix tree lookup etc.

> @@ -149,6 +152,74 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr)
>      return __map_domain_page(page);
>  }
>  
> +static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
> +{

It would have been good to refactor this, and probably p2m_shatter_page
in precursor patches to reduce the size of this already pretty big patch.

> @@ -1159,6 +1329,236 @@ err:
>      return page;
>  }
>  
> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)

Can a bunch of this function not be common code? i.e. the inject an
event into a ring bits? Or even the automatic conversion of rx2rw etc.

> @@ -248,6 +249,23 @@ static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
>      return 1;
>  }
>  
> +/* Send mem event based on the access (gla is -1ull if not available). Boolean

Is a "gla" a "guest linear address"? I don't think we use that term on
ARM.

Ian

* Re: [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-15 22:39   ` Ian Campbell
@ 2014-09-16  8:02     ` Tamas K Lengyel
  2014-09-16 16:44       ` Ian Campbell
  0 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16  8:02 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Tue, Sep 16, 2014 at 12:39 AM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> > The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> > mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this
> purpose.
>
> We had a grief discussion about why/how this was useful for xenaccess
> and whether it was needed or just an optimisation etc. Could you recap
> that here please?
>
> Ian.
>

Certainly. The reason we use this in xenaccess is to avoid the user
attempting to set page permissions on pages which don't exist for the
domain. For example, if the user attempts to set page permissions from gpfn
0 -> (~0-1), it will waste a lot of the hypervisor's time. It won't break
anything as non-existent pages are skipped automatically, and now with the
preemption in place it can't DoS the system, but it's still a reasonable
sanity check to perform. It's not a comprehensive enforcement of 'don't try
to set permission on non-existent pages' as ARM has holes in the memory
layout and doesn't start from 0, but this check at least works for both x86
and ARM.
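
Concretely, that is the pattern the xen-access patch later in this series
uses; a sketch (START_PFN is 0 on x86 and the first RAM gpfn on ARM):

    /* Bound the span by the largest gpfn ever mapped, rather than walking
     * 0..~0 and relying on non-existent pages being skipped. */
    int max_gpfn = xc_domain_maximum_gpfn(xch, domain_id);

    if ( max_gpfn < 0 )
        return -1;

    rc = xc_set_mem_access(xch, domain_id, default_access, START_PFN,
                           max_gpfn - START_PFN);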

Tamas

* Re: [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl
  2014-09-15 22:38   ` Ian Campbell
@ 2014-09-16  8:33     ` Tamas K Lengyel
  2014-09-16 13:25       ` Ian Campbell
  0 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16  8:33 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Tue, Sep 16, 2014 at 12:38 AM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> > Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> > Reviewed-by: Julien Grall <julien.grall@linaro.org>
>
> ... also do you not need to adjust the flask stuff? Currently the
> handling for this domctl is under an x86 ifdef.
>

Where do you see it under x86 ifdef? In xsm/flask/hooks.c it definitely
isn't under x86 ifdef O.o ..

* Re: [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl
  2014-09-15 22:37   ` Ian Campbell
@ 2014-09-16  8:37     ` Tamas K Lengyel
  0 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16  8:37 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Tue, Sep 16, 2014 at 12:37 AM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> > Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> > Reviewed-by: Julien Grall <julien.grall@linaro.org>
>
> This is essentially identical to the x86 code -- could it not have been
> made common along with the other stuff you genericised?
>
>
You are right, it could be in common. Ack.

Tamas

* Re: [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes
  2014-09-15 22:35   ` Ian Campbell
@ 2014-09-16  8:49     ` Tamas K Lengyel
  2014-09-16 13:27       ` Ian Campbell
  0 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16  8:49 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Tue, Sep 16, 2014 at 12:35 AM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> > Define p2m_access_t in ARM and add necessary changes for page table
> > construction routines to pass the default access information. Also,
> > define the Radix tree that will hold access permission settings as
> > the PTE's don't have enough software programmable bits available.
>
> So my main concern here is the overhead for non-xenaccess users. I think
> it amounts to a few extra fields in the p2m_domain struct which I can
> see here and presumably some NULL vs. non-NULL type checks which I guess
> we will get to later. The important thing is that the fast paths for the
> common case don't get a lot of extra overhead.
>
> WRT the xenaccess performance did you consider any options other than a
> radix tree (which seems quite expensive to me)? e.g. perhaps allocating
> (only when needed) a second page for each real PT page as a
> shadow/extended region? Perhaps pointed to by a field in the real PT
> struct page_info. I'm sure there are other possible ideas too.
>
> Ian,
>

Those would all be possible solutions. I used the Radix tree implementation
already in Xen as a matter of convenience and because it has an acceptable
overhead. I certainly don't oppose further optimizing this code, I'm just
not sure it needs to happen now, given that feature freeze is rapidly
approaching. The main concern should be the impact on non-xenaccess
code-paths, which I agree is a showstopper for any feature like this. If you
say it can't be merged unless the xenaccess code-path is also optimized I'm
afraid this series will be postponed until 4.6, as I won't have the time to
test out which approach puts the least overhead on the system under what
usage scenarios etc. in this timeframe.

Tamas

* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
       [not found]     ` <CAErYnshu0vJJMxWwu4eo2MZf=q_g2H123p6VUk_4a9f12vYLjg@mail.gmail.com>
@ 2014-09-16 10:07       ` Tamas K Lengyel
  2014-09-16 16:50         ` Ian Campbell
  0 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16 10:07 UTC (permalink / raw)
  To: Ian Campbell, xen-devel


Dropped xen-devel by accident on my previous msg.

On Tue, Sep 16, 2014 at 12:53 AM, Ian Campbell <ian.campbell@citrix.com>
> wrote:
>
>> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>> > This patch enables storing, setting, checking and delivering LPAE R/W mem_events.
>> > As the LPAE PTEs lack enough available software programmable bits,
>> > we store the permissions in a Radix tree, where the key is the pfn
>> > of a 4k page. Only settings other than p2m_access_rwx are saved
>> > in the Radix tree.
>>
>> It is here that it is absolutely essential to consider the overhead for
>> the non-xenaccess use case, and I think it should be written here in the
>> commit message.
>>
>> I'm worried because I'm not seeing the obvious "xenaccess is disabled,
>> skip all this stuff" path I was hoping for.
>>
>> I think the overhead of p2m modifications and on fault needs separate
>> consideration.
>>
>> For cases where xenaccess isn't enabled IMHO we need to be talking about
>> overhead in the ballpark of a pointer deref, compare, conditional jump.
>> Certainly not any kind of radix tree lookup etc.
>>
>
> In the trap handlers only permission faults are checked against the
> mem_access radix tree, so that already cuts down overhead to a conditional.
> In my experiments I haven't seen a single instance of a permission fault
> happening which I didn't cause myself. In p2m modifications as long as the
> default mem_access is rwx, the code path is the same as before for large
> pages. For 4k pages when adding them with rwx permission it does have an
> extra radix tree lookup to clean any potential setting in the radix tree
> for that page, but that is really just a safety check on my part and if
> overhead with it is a problem it can be removed. IMHO in the default case
> the radix tree is empty, so that lookup is essentially just another
> conditional.
>
> Do you see any other paths that are a cause for concern?
>
>
>>
>> > @@ -149,6 +152,74 @@ static lpae_t *p2m_map_first(struct p2m_domain
>> *p2m, paddr_t addr)
>> >      return __map_domain_page(page);
>> >  }
>> >
>> > +static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
>> > +{
>>
>> It would have been good to refactor this, and probably p2m_shatter_page
>> in precursor patches to reduce the size of this already pretty big patch.
>>
>
> Ack, that would make sense.
>
>
>>
>> > @@ -1159,6 +1329,236 @@ err:
>> >      return page;
>> >  }
>> >
>> > +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
>> npfec npfec)
>>
>> Can a bunch of this function not be common code? i.e. the inject an
>> event into a ring bits? Or even the automatic conversion of rx2rw etc.
>>
>
> I'll look into it, some bits might be.
>

While the code logic is in theory the same, unfortunately there are
significant differences in how the locks are handled, which makes it
impractical to keep this code common. There are also some style differences:
ARM doesn't have set_entry/get_entry pointers in p2m_domain, as on ARM
we don't need to dynamically support different implementations of those
functions (no need to abstract them). The parts that could be shared are
only a couple of lines here and there, which doesn't really justify pulling
them out into common as separate functions.


>
>
>>
>> > @@ -248,6 +249,23 @@ static inline bool_t
>> p2m_mem_event_sanity_check(struct domain *d)
>> >      return 1;
>> >  }
>> >
>> > +/* Send mem event based on the access (gla is -1ull if not available).
>> Boolean
>>
>> Is a "gla" a "guest linear address"? I don't think we use that term on
>> ARM.
>>
>
> Yea, that's copy-pasta and will clean it up.. on ARM we always have the
> faulting vaddr it seems to me anyway.
>
>
>>
>> Ian
>>
>
> Thanks,
> Tamas
>

* Re: [PATCH for-4.5 v6 11/17] xen/arm: Add set access required domctl
  2014-09-16  8:33     ` Tamas K Lengyel
@ 2014-09-16 13:25       ` Ian Campbell
  0 siblings, 0 replies; 47+ messages in thread
From: Ian Campbell @ 2014-09-16 13:25 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel

On Tue, 2014-09-16 at 10:33 +0200, Tamas K Lengyel wrote:
> 
> 
> On Tue, Sep 16, 2014 at 12:38 AM, Ian Campbell
> <ian.campbell@citrix.com> wrote:
>         On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>         > Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>         > Reviewed-by: Julien Grall <julien.grall@linaro.org>
>         
>         ... also do you not need to adjust the flask stuff? Currently
>         the
>         handling for this domctl is under an x86 ifdef.
> 
> 
> Where do you see it under x86 ifdef? In xsm/flask/hooks.c it
> definitely isn't under x86 ifdef O.o .. 

Sorry, I missed the #endif in there.


Ian.

* Re: [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes
  2014-09-16  8:49     ` Tamas K Lengyel
@ 2014-09-16 13:27       ` Ian Campbell
  2014-09-16 20:38         ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-16 13:27 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel

On Tue, 2014-09-16 at 10:49 +0200, Tamas K Lengyel wrote:
> 
> 
> On Tue, Sep 16, 2014 at 12:35 AM, Ian Campbell
> <ian.campbell@citrix.com> wrote:
>         On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>         > Define p2m_access_t in ARM and add necessary changes for
>         page table
>         > construction routines to pass the default access
>         information. Also,
>         > define the Radix tree that will hold access permission
>         settings as
>         > the PTE's don't have enough software programmable bits
>         available.
>         
>         So my main concern here is the overhead for non-xenaccess
>         users. I think
>         it amounts to a few extra fields in the p2m_domain struct
>         which I can
>         see here and presumably some NULL vs. non-NULL type checks
>         which I guess
>         we will get to later. The important thing is that the fast
>         paths for the
>         common case don't get a lot of extra overhead.
>         
>         WRT the xenaccess performance did you consider any options
>         other than a
>         radix tree (which seems quite expensive to me)? e.g. perhaps
>         allocating
>         (only when needed) as second page for each real T page as a
>         shadow/extended region? Perhaps pointed to by a filed in the
>         real PT
>         struct page_info. I'm sure there are other possible ideas too.
>         
>         Ian,
> 
> 
> Those would all be possible solutions. I used the Radix tree
> implementation already in Xen as a matter of convenience and because
> it has an acceptable overhead. I certainly don't oppose further
> optimizing this code, I'm just not sure if it needs to happen now,
> provided feature freeze is rapidly approaching. The main concern
> should be impact on non-xenaccess code-paths, which I agree are a
> showstopper of any feature like this. If you say it can't be merged
> unless the xenaccess code-path is also optimized I'm afraid this
> series will be postponed till 4.6 as I won't have the time to test out
> which approach puts the least overhead on the system under what
> usage-scenarios etc in this timeframe.

WRT merging I'm only concerned about the impact on non-xenaccess uses;
the stuff about the xenaccess-on case was just idle wondering, sorry for
not making that clear.

Ian.

* Re: [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-16  8:02     ` Tamas K Lengyel
@ 2014-09-16 16:44       ` Ian Campbell
  2014-09-16 17:09         ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-16 16:44 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel

On Tue, 2014-09-16 at 10:02 +0200, Tamas K Lengyel wrote:
> Certainly. The reason we use this in xenaccess is to avoid the user
> attempting to set page permissions on pages which don't exist for the
> domain. For example, if the user attempts to set page permissions from
> gpfn 0 -> (~0-1), it will waste a lot of the hypervisors time. It
> won't break anything as non-existent pages are skipped automatically,
> and now with the preemption in place it can't DoS the system, but it's
> still a reasonable sanity check to perform. It's not a comprehensive
> enforcement of 'don't try to set permission on non-existent pages' as
> ARM has holes in the memory layout and doesn't start from 0, but this
> check at least works for both x86 and ARM. 

OK. Please can you include this in the commit message.

* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-16 10:07       ` Tamas K Lengyel
@ 2014-09-16 16:50         ` Ian Campbell
  2014-09-16 17:08           ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-16 16:50 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel

On Tue, 2014-09-16 at 12:07 +0200, Tamas K Lengyel wrote:

>         In the trap handlers only permission faults are checked
>         against the mem_access radix tree, so that already cuts down
>         overhead to a conditional. In my experiments I haven't seen a
>         single instance of a permission fault happening which I didn't
>         cause myself. In p2m modifications as long as the default
>         mem_access is rwx, the code path is the same as before for
>         large pages. For 4k pages when adding them with rwx permission
>         it does have an extra radix tree lookup to clean any potential
>         setting in the radix tree for that page, but that is really
>         just a safety check on my part and if overhead with it is a
>         problem it can be removed. IMHO in the default case the radix
>         tree is empty, so that lookup is essentially just another
>         conditional.

With the radix tree lookup functions it isn't trivially easy to reason
that they turn into an almost-nop on an empty tree, so I'm not sure.
It's an out of line function call and at least 2 pointer indirections
from the looks of it.

Perhaps we could add an explicit value for p2m->default_access which
causes the tree never to even get touched?
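
Something along these lines is what I have in mind; a sketch only, where
the mem_access_enabled flag is hypothetical rather than an existing field:

    /* Sketch: guard the lookup so the common case costs one compare and
     * never touches the (empty) radix tree. "mem_access_enabled" is an
     * illustrative name, not an existing p2m_domain member. */
    static p2m_access_t p2m_mem_access_get(struct p2m_domain *p2m, paddr_t gpa)
    {
        void *ptr;

        if ( !p2m->mem_access_enabled )
            return p2m->default_access;

        ptr = radix_tree_lookup(&p2m->mem_access_settings,
                                paddr_to_pfn(gpa));
        return ptr ? radix_tree_ptr_to_int(ptr) : p2m->default_access;
    }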

> 
> While the code logic is in theory the same, unfortunately there are
> significant differences between handling locks, which makes it not
> possible to have this code in common. There are also some style
> differences, like ARM doesn't have set_entry/get_entry pointers in the
> p2m_domain, as on ARM we don't need to dynamically support different
> types for those functions (no need to abstract them). The parts that
> could be in common are only a couple lines here and there which don't
> really justify having them in common as separate functions.

We can add new arch-generic wrappers for things if it makes sense, like
e.g. guest_physmap_add_entry or map_mmio_regions etc. Do you think that
would help/make sense here?

Ian.

* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-16 16:50         ` Ian Campbell
@ 2014-09-16 17:08           ` Tamas K Lengyel
  0 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16 17:08 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel


On Tue, Sep 16, 2014 at 6:50 PM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Tue, 2014-09-16 at 12:07 +0200, Tamas K Lengyel wrote:
>
> >         In the trap handlers only permission faults are checked
> >         against the mem_access radix tree, so that already cuts down
> >         overhead to a conditional. In my experiments I haven't seen a
> >         single instance of a permission fault happening which I didn't
> >         cause myself. In p2m modifications as long as the default
> >         mem_access is rwx, the code path is the same as before for
> >         large pages. For 4k pages when adding them with rwx permission
> >         it does have an extra radix tree lookup to clean any potential
> >         setting in the radix tree for that page, but that is really
> >         just a safety check on my part and if overhead with it is a
> >         problem it can be removed. IMHO in the default case the radix
> >         tree is empty, so that lookup is essentially just another
> >         conditional.
>
> With the radix tree lookup functions it isn't trivially easy to reason
> that they turn into an almost-nop on an empty tree, so I'm not sure.
> It's an out of line function call and at least 2 pointer indirections
> from the looks of it.
>
> Perhaps we could add an explicit value for p2m->default_access which
> causes the tree never to even get touched?
>

It wouldn't really be via the default_access; I would just have a separate
function that is used to remove pre-existing mem_access entries, as that
would only happen when the user initiates a mem_access op with rwx
permissions. Right now I just bundled that into this path, but as this also
gets called by non-memaccess paths it's better to separate the two.
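
Roughly what I have in mind; a sketch, where the helper name is illustrative
(the series only has p2m_mem_access_radix_set so far):

    /* Sketch: drop a stale radix entry only on explicit mem_access ops
     * that set rwx, keeping the ordinary mapping path free of tree work. */
    static void p2m_mem_access_radix_clear(struct p2m_domain *p2m,
                                           unsigned long pfn)
    {
        radix_tree_delete(&p2m->mem_access_settings, pfn);
    }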


>
> >
> > While the code logic is in theory the same, unfortunately there are
> > significant differences between handling locks, which makes it not
> > possible to have this code in common. There are also some style
> > differences, like ARM doesn't have set_entry/get_entry pointers in the
> > p2m_domain, as on ARM we don't need to dynamically support different
> > types for those functions (no need to abstract them). The parts that
> > could be in common are only a couple lines here and there which don't
> > really justify having them in common as separate functions.
>
> We can add new arch-generic wrappers for things if it makes sense, like
> e.g. guest_physmap_add_entry or map_mmio_regions etc. Do you tihnk that
> would help/make sense here?
>
> Ian.
>

Not particularly, because we are locking at a different level here: x86
locks at the gfn level, as that's where the permissions are stored, whereas
on ARM we need to lock at the p2m level because the tree might get modified
otherwise. So abstracting this code while having the appropriate locks
sprinkled into it is not very nice. Essentially, at every lock location for
both archs we would need to call down to an arch-specific function which
would determine where it needs to lock, and that wouldn't really help in
making the code easy to follow.

Tamas

* Re: [PATCH for-4.5 v6 12/17] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-16 16:44       ` Ian Campbell
@ 2014-09-16 17:09         ` Tamas K Lengyel
  0 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16 17:09 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Tue, Sep 16, 2014 at 6:44 PM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Tue, 2014-09-16 at 10:02 +0200, Tamas K Lengyel wrote:
> > Certainly. The reason we use this in xenaccess is to avoid the user
> > attempting to set page permissions on pages which don't exist for the
> > domain. For example, if the user attempts to set page permissions from
> > gpfn 0 -> (~0-1), it will waste a lot of the hypervisors time. It
> > won't break anything as non-existent pages are skipped automatically,
> > and now with the preemption in place it can't DoS the system, but it's
> > still a reasonable sanity check to perform. It's not a comprehensive
> > enforcement of 'don't try to set permission on non-existent pages' as
> > ARM has holes in the memory layout and doesn't start from 0, but this
> > check at least works for both x86 and ARM.
>
> OK. Please can you include this in the commit message.
>

Ack.

* Re: [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes
  2014-09-16 13:27       ` Ian Campbell
@ 2014-09-16 20:38         ` Julien Grall
  2014-09-16 21:52           ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2014-09-16 20:38 UTC (permalink / raw)
  To: Ian Campbell, Tamas K Lengyel
  Cc: Tim Deegan, Ian Jackson, xen-devel, Stefano Stabellini,
	Andres Lagar-Cavilla, Jan Beulich, Daniel De Graaf,
	Tamas K Lengyel

Hi Ian,

On 16/09/14 06:27, Ian Campbell wrote:
> On Tue, 2014-09-16 at 10:49 +0200, Tamas K Lengyel wrote:
>>
>>
>> On Tue, Sep 16, 2014 at 12:35 AM, Ian Campbell
>> <ian.campbell@citrix.com> wrote:
>>          On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>>          > Define p2m_access_t in ARM and add necessary changes for
>>          page table
>>          > construction routines to pass the default access
>>          information. Also,
>>          > define the Radix tree that will hold access permission
>>          settings as
>>          > the PTE's don't have enough software programmable bits
>>          available.
>>
>>          So my main concern here is the overhead for non-xenaccess
>>          users. I think
>>          it amounts to a few extra fields in the p2m_domain struct
>>          which I can
>>          see here and presumably some NULL vs. non-NULL type checks
>>          which I guess
>>          we will get to later. The important thing is that the fast
>>          paths for the
>>          common case don't get a lot of extra overhead.
>>
>>          WRT the xenaccess performance did you consider any options
>>          other than a
>>          radix tree (which seems quite expensive to me)? e.g. perhaps
>>          allocating
>>          (only when needed) as second page for each real T page as a
>>          shadow/extended region? Perhaps pointed to by a filed in the
>>          real PT
>>          struct page_info. I'm sure there are other possible ideas too.
>>
>>          Ian,
>>
>>
>> Those would all be possible solutions. I used the Radix tree
>> implementation already in Xen as a matter of convenience and because
>> it has an acceptable overhead. I certainly don't oppose further
>> optimizing this code, I'm just not sure if it needs to happen now,
>> provided feature freeze is rapidly approaching. The main concern
>> should be impact on non-xenaccess code-paths, which I agree are a
>> showstopper of any feature like this. If you say it can't be merged
>> unless the xenaccess code-path is also optimized I'm afraid this
>> series will be postponed till 4.6 as I won't have the time to test out
>> which approach puts the least overhead on the system under what
>> usage-scenarios etc in this timeframe.
>
> WRT merging I'm only concerned about the impact on non-xenaccess uses,
> the stuff about the xenaccess-on case was just idle wondering, sorry for
> not making that clear.

I didn't write this code, but I have read it multiple times and asked Tamas
a similar question a few versions ago.

So, the radix tree is only used when the access type of the page is
different from access_rwx, which is the default access when xenaccess is
not used.

AFAIU, the only overhead we have is the new fields in the arch_domain 
structure.

Regards,

-- 
Julien Grall

* Re: [PATCH for-4.5 v6 10/17] xen/arm: p2m type definitions and changes
  2014-09-16 20:38         ` Julien Grall
@ 2014-09-16 21:52           ` Tamas K Lengyel
  0 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-16 21:52 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Tue, Sep 16, 2014 at 10:38 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hi Ian,
>
>
> On 16/09/14 06:27, Ian Campbell wrote:
>
>> On Tue, 2014-09-16 at 10:49 +0200, Tamas K Lengyel wrote:
>>
>>>
>>>
>>> On Tue, Sep 16, 2014 at 12:35 AM, Ian Campbell
>>> <ian.campbell@citrix.com> wrote:
>>>          On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>>>          > Define p2m_access_t in ARM and add necessary changes for
>>>          page table
>>>          > construction routines to pass the default access
>>>          information. Also,
>>>          > define the Radix tree that will hold access permission
>>>          settings as
>>>          > the PTE's don't have enough software programmable bits
>>>          available.
>>>
>>>          So my main concern here is the overhead for non-xenaccess
>>>          users. I think
>>>          it amounts to a few extra fields in the p2m_domain struct
>>>          which I can
>>>          see here and presumably some NULL vs. non-NULL type checks
>>>          which I guess
>>>          we will get to later. The important thing is that the fast
>>>          paths for the
>>>          common case don't get a lot of extra overhead.
>>>
>>>          WRT the xenaccess performance did you consider any options
>>>          other than a
>>>          radix tree (which seems quite expensive to me)? e.g. perhaps
>>>          allocating
>>>          (only when needed) as second page for each real T page as a
>>>          shadow/extended region? Perhaps pointed to by a filed in the
>>>          real PT
>>>          struct page_info. I'm sure there are other possible ideas too.
>>>
>>>          Ian,
>>>
>>>
>>> Those would all be possible solutions. I used the Radix tree
>>> implementation already in Xen as a matter of convenience and because
>>> it has an acceptable overhead. I certainly don't oppose further
>>> optimizing this code, I'm just not sure if it needs to happen now,
>>> provided feature freeze is rapidly approaching. The main concern
>>> should be impact on non-xenaccess code-paths, which I agree are a
>>> showstopper of any feature like this. If you say it can't be merged
>>> unless the xenaccess code-path is also optimized I'm afraid this
>>> series will be postponed till 4.6 as I won't have the time to test out
>>> which approach puts the least overhead on the system under what
>>> usage-scenarios etc in this timeframe.
>>>
>>
>> WRT merging I'm only concerned about the impact on non-xenaccess uses,
>> the stuff about the xenaccess-on case was just idle wondering, sorry for
>> not making that clear.
>>
>
> I didn't write this code, but read it multiple time and ask the similar
> question to Tamas few version ago.
>
> So, the radix tree is only used when the access type of the page is
> different than access_rwx. This is the default access when xenaccess is not
> used.
>

> AFAIU, the only overhead we have is the new fields in the arch_domain
> structure.
>

> Regards,
>
> --
> Julien Grall
>


Yeap, that's the basic outline of things (except that one thing I mentioned
in the other thread, which is already fixed in v7 of the series). I looked
into switching to an extended struct page_info and that looks like a good
alternative to consider. Getting that struct with get_page_from_gva seems
likely to be a faster lookup than the radix lookup when the radix tree is
large.

Tamas

* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-15 14:02 ` [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
  2014-09-15 22:53   ` Ian Campbell
@ 2014-09-18 18:54   ` Ian Campbell
  2014-09-18 20:09     ` Tamas K Lengyel
  1 sibling, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-18 18:54 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:

> -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
> +        if ( level < 3 && p2m_access_rwx != a )
> +        {
> +            /* We create only 4k pages when mem_access is in use. */

I wonder if it might turn out cleaner to integrate this check into
is_mapping_aligned (which really is more of a "can we use a superpage"
function).

i.e.
        /* mem access cannot use super pages */
        if ( a != p2m_access_rwx && level_size != THIRD_SIZE )
                return false;

> +        }
> +        else if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
>          {
>              struct page_info *page;
>  
>              page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
>              if ( page )
>              {
> +                if ( 3 == level )

Please write the conditionals the other way around.

> +                {
> +                    rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
> +                    if ( rc < 0 )
> +                    {
> +                        free_domheap_page(page);
> +                        return rc;
> +                    }

> +                }
> +                else
> +                {
> +                    a = p2m_access_rwx;
> +                }

You have this else clause twice, I think you could pull it up to the
head of the function, or perhaps even into the caller.
> @@ -627,15 +741,11 @@ static int apply_one_level(struct domain *d,
>                   * and descend.
>                   */
>                  *flush = true;
> -                rc = p2m_create_table(d, entry,
> -                                      level_shift - PAGE_SHIFT, flush_cache);
> +                rc = p2m_shatter_page(d, entry, level, level_shift, flush_cache);
> +

Please keep the error-handling if snuggled against the function (i.e.
drop the additional blank line), here and in at least one other place
which you've changed.

> @@ -704,6 +815,49 @@ static int apply_one_level(struct domain *d,
>              *addr += PAGE_SIZE;
>              return P2M_ONE_PROGRESS_NOP;
>          }
> +
> +    case MEMACCESS:
> +        if ( level < 3 )
> +        {
> +            if ( !p2m_valid(orig_pte) )
> +            {
> +                *addr += level_size;
> +                return P2M_ONE_PROGRESS_NOP;
> +            }
> +
> +            /* Shatter large pages as we descend */
> +            if ( p2m_mapping(orig_pte) )
> +            {
> +                rc = p2m_shatter_page(d, entry, level, level_shift, flush_cache);
> +
> +                if ( rc < 0 )
> +                    return rc;
> +            } /* else: an existing table mapping -> descend */
> +
> +            return P2M_ONE_DESCEND;
> +        }
> +        else
> +        {
> +            pte = orig_pte;
> +
> +            if ( !p2m_table(pte) )
> +                pte.bits = 0;
> +
> +            if ( p2m_valid(pte) )
> +            {
> +                ASSERT(pte.p2m.type != p2m_invalid);
> +
> +                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
> +                if ( rc < 0 )
> +                    return rc;
> +
> +                p2m_set_permission(&pte, pte.p2m.type, a);

I think this function can always make use of pte.p2m.type itself rather
than receiving it as a parameter. The other caller passes "t" but has
already assigned that to pte.p2m.type as well.
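
For illustration, a sketch of the slimmed-down helper (only the signature
changes; the body's existing permission logic stays as it is):

    static void p2m_set_permission(lpae_t *e, p2m_access_t a)
    {
        /* derive the type from the entry instead of taking a parameter */
        p2m_type_t t = e->p2m.type;

        /* ... existing permission switch on 'a' and 't' unchanged ... */
    }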
>  
> -    rc = gva_to_ipa(info.gva, &info.gpa);
> -    if ( rc == -EFAULT )
> +    switch ( dabt.dfsc )
> +    {
> +    case DABT_DFSC_PERMISSION_1:
> +    case DABT_DFSC_PERMISSION_2:
> +    case DABT_DFSC_PERMISSION_3:

Eventually this will need to handle level 0 too. Would it work to mask
out the level bits and check the remainder against the common bit
pattern?
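
Something like this, say (a sketch assuming the existing FSC_FLT_PERM and
FSC_LL_MASK names from asm-arm/processor.h; worth double-checking):

    /* a permission fault at any level: mask off the two level bits */
    if ( (dabt.dfsc & ~FSC_LL_MASK) == FSC_FLT_PERM )
    {
        /* ... hand the fault to p2m_mem_access_check() ... */
    }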

> +/* Data abort data fetch status codes */
> +enum dabt_dfsc {
> +    DABT_DFSC_ADDR_SIZE_0       = 0b000000,

Unfortunately I think 0b... is a gcc extension and not standard C
(please correct me if I'm wrong). In which case we should probably avoid
it and use hex instead.

Actually, isn't this partially duplicating the existing FSC_* defines?
We should either use those here or move the existing users over to the
new scheme.

Ian.


* Re: [PATCH for-4.5 v6 14/17] xen/arm: Instruction prefetch abort (X) mem_event handling
  2014-09-15 14:02 ` [PATCH for-4.5 v6 14/17] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
@ 2014-09-18 18:59   ` Ian Campbell
  2014-09-18 20:12     ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-18 18:59 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> Add missing structure definition for iabt and update the trap handling
> mechanism to only inject the exception if the mem_access checker
> decides to do so.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> ---
> v6: - Make npfec a const.
> v4: - Don't mark instruction fetch violation as read violation.
>     - Use new struct npfec to pass violation info.
> v2: - Add definition for instruction abort instruction fetch status codes
>        (enum iabt_ifsc) and only call p2m_mem_access_check for traps triggered
>        for permission violations.
> ---
>  xen/arch/arm/traps.c            | 39 ++++++++++++++++++++++++++++++++++++++-
>  xen/include/asm-arm/processor.h | 40 +++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 77 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 5bfcdf3..71c087f 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1828,7 +1828,44 @@ done:
>  static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>                                        union hsr hsr)
>  {
> -    register_t addr = READ_SYSREG(FAR_EL2);
> +    struct hsr_iabt iabt = hsr.iabt;
> +    int rc;
> +    register_t addr;
> +    vaddr_t gva;
> +    paddr_t gpa;
> +
> +#ifdef CONFIG_ARM_32
> +    gva = READ_CP32(HIFAR);
> +#else
> +    gva = READ_SYSREG64(FAR_EL2);
> +#endif

You can just use READ_SYSREG(FAR_EL2) here and it will do the right
thing without the ifdef.
[...]
> +    addr = READ_SYSREG(FAR_EL2);

Like you do here ;-)
> index b844f1d..044de12 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -292,6 +292,36 @@ enum dabt_dfsc {
>      DABT_DFSC_TLB_CONFLICT      = 0b110000,
>  };
>  
> +/* Instruction abort instruction fault status codes */
> +enum iabt_ifsc {
> +    IABT_IFSC_ADDR_SIZE_0       = 0b000000,

Apart from the related comments on the last patch which mostly apply
here too, aren't these mostly common with the DABT codes? 

> @@ -371,10 +401,18 @@ union hsr {
>      } sysreg; /* HSR_EC_SYSREG */
>  #endif
>  
> +    struct hsr_iabt {
> +        unsigned long ifsc:6;   /* Instruction fault status code */
> +        unsigned long res0:1;
> +        unsigned long s1ptw:1;  /* Fault during a stage 1 translation table walk */
> +        unsigned long res1:1;
> +        unsigned long ea:1;     /* External abort type */

Please use eat for consistency here.

You should also include the common len/cc/etc bits and sufficient
padding so that the whole thing adds up to 32 bits.
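
For illustration, a sketch modelled on the neighbouring hsr_dabt layout
(the res2 width makes the bitfields total 32 bits; check the widths
against the ARM ARM before use):

    struct hsr_iabt {
        unsigned long ifsc:6;   /* Instruction fault status code */
        unsigned long res0:1;
        unsigned long s1ptw:1;  /* Fault during a stage 1 translation table walk */
        unsigned long res1:1;
        unsigned long eat:1;    /* External abort type */
        unsigned long res2:15;
        unsigned long len:1;    /* Instruction length */
        unsigned long ec:6;     /* Exception Class */
    } iabt; /* HSR_EC_INSTR_ABORT */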

Ian.


* Re: [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access on ARM
  2014-09-15 14:02 ` [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access " Tamas K Lengyel
@ 2014-09-18 19:02   ` Ian Campbell
  2014-09-22 18:48     ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-18 19:02 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, julien.grall, ian.jackson, xen-devel, stefano.stabellini,
	andres, jbeulich, dgdegra

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> +static inline int test_and_set_bit(int nr, volatile void *addr)
> +{
> +        unsigned int mask = BIT_MASK(nr);
> +        volatile unsigned int *p =
> +                ((volatile unsigned int *)addr) + BIT_WORD(nr);
> +        unsigned int old = *p;
> +
> +        *p = old | mask;
> +        return (old & mask) != 0;
> +}

This doesn't need to be / is not intended to be atomic, right?


* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-18 18:54   ` Ian Campbell
@ 2014-09-18 20:09     ` Tamas K Lengyel
  2014-09-19  9:05       ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-18 20:09 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Thu, Sep 18, 2014 at 8:54 PM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>
> > -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
> > +        if ( level < 3 && p2m_access_rwx != a )
> > +        {
> > +            /* We create only 4k pages when mem_access is in use. */
>
> I wonder if it might turn out cleaner to integrate this check into
> is_mapping_aligned (which really is more of a "can we use a superpage"
> function).
>
> i.e.
>         /* mem access cannot use super pages */
>         if ( a != p2m_access_rwx && level_size != THIRD_SIZE )
>                 return false;
>

Ack, that would make sense.


>
> > +        }
> > +        else if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
> >          {
> >              struct page_info *page;
> >
> >              page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
> >              if ( page )
> >              {
> > +                if ( 3 == level )
>
> Please write the conditionals the other way around.
>

I tend to write equality tests this way when one side is a constant:
with the constant on the left, an accidental (or malicious) "=" in place
of "==" becomes a compile error rather than a silent assignment. Exactly
such a disguised assignment was used in the 2003 attempt to sneak a
backdoor into Linux (see
https://freedom-to-tinker.com/blog/felten/the-linux-backdoor-attempt-of-2003
for more info).
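
i.e. (a minimal illustration of the hazard):

    if ( level = 3 )    /* typo: compiles, silently assigns 3 to level */
    if ( 3 == level )   /* the intended comparison                     */
    if ( 3 = level )    /* the same typo is now a compile error        */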


>
> > +                {
> > +                    rc = p2m_mem_access_radix_set(p2m,
> paddr_to_pfn(*addr), a);
> > +                    if ( rc < 0 )
> > +                    {
> > +                        free_domheap_page(page);
> > +                        return rc;
> > +                    }
>
> > +                }
> > +                else
> > +                {
> > +                    a = p2m_access_rwx;
> > +                }
>
> You have this else clause twice, I think you could pull it up to the
> head of the function, or perhaps even into the caller.
>

The radix tree approach has been retired in v7 as I found the page_info
approach you suggested more palatable. Thus this entire block is gone in
the latest revision.


> > @@ -627,15 +741,11 @@ static int apply_one_level(struct domain *d,
> >                   * and descend.
> >                   */
> >                  *flush = true;
> > -                rc = p2m_create_table(d, entry,
> > -                                      level_shift - PAGE_SHIFT,
> flush_cache);
> > +                rc = p2m_shatter_page(d, entry, level, level_shift,
> flush_cache);
> > +
>
> Please keep the error handling if snuggled against the function, (i.e.
> drop the additional blank line) here and in at least one other place
> which you've changed.
>

Ack, I think this was already fixed in v7.


>
> > @@ -704,6 +815,49 @@ static int apply_one_level(struct domain *d,
> >              *addr += PAGE_SIZE;
> >              return P2M_ONE_PROGRESS_NOP;
> >          }
> > +
> > +    case MEMACCESS:
> > +        if ( level < 3 )
> > +        {
> > +            if ( !p2m_valid(orig_pte) )
> > +            {
> > +                *addr += level_size;
> > +                return P2M_ONE_PROGRESS_NOP;
> > +            }
> > +
> > +            /* Shatter large pages as we descend */
> > +            if ( p2m_mapping(orig_pte) )
> > +            {
> > +                rc = p2m_shatter_page(d, entry, level, level_shift,
> flush_cache);
> > +
> > +                if ( rc < 0 )
> > +                    return rc;
> > +            } /* else: an existing table mapping -> descend */
> > +
> > +            return P2M_ONE_DESCEND;
> > +        }
> > +        else
> > +        {
> > +            pte = orig_pte;
> > +
> > +            if ( !p2m_table(pte) )
> > +                pte.bits = 0;
> > +
> > +            if ( p2m_valid(pte) )
> > +            {
> > +                ASSERT(pte.p2m.type != p2m_invalid);
> > +
> > +                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr),
> a);
> > +                if ( rc < 0 )
> > +                    return rc;
> > +
> > +                p2m_set_permission(&pte, pte.p2m.type, a);
>
> I think this function can always make use of pte.p2m.type itself rather
> than receiving it as a parameter. The other caller passes "t" but has
> already assigned that to pte.p2m.type as well.
>

I find the approach as it is right now a bit more straightforward, as
all the inputs that affect the permissions are passed in explicitly. I'll
switch it to what you suggest if you feel strongly about it, though.


> >
> > -    rc = gva_to_ipa(info.gva, &info.gpa);
> > -    if ( rc == -EFAULT )
> > +    switch ( dabt.dfsc )
> > +    {
> > +    case DABT_DFSC_PERMISSION_1:
> > +    case DABT_DFSC_PERMISSION_2:
> > +    case DABT_DFSC_PERMISSION_3:
>
> Eventually this will need to handle level 0 too. Would it work to mask
> out the level bits and check the remainder against the common bit
> pattern?
>

From a performance perspective, my understanding is that bitmask tests
under-perform plain numeric comparisons. I see that the masking approach
has already been taken with the FSC_FLT defines, which I guess I'll
switch to so as to reduce the bloat, but it might be worth taking a
second look at.


>
> > +/* Data abort data fetch status codes */
> > +enum dabt_dfsc {
> > +    DABT_DFSC_ADDR_SIZE_0       = 0b000000,
>
> Unfortunately I think 0b... is a gcc extension and not standard C
> (please correct me if I'm wrong). In which case we should probably avoid
> it and use hex instead.
>

It's also supported by clang but you are right.


>
> Actually, isn't this partially duplicating the existing FSC_* defines?
> We should either use those here or move the existing users over to the
> new scheme.
>

Ack. Somehow I was oblivious to the fact that we already have defines
for these... I guess I was looking for the values in the 0b form used by
the manual. Either way, since we already have defines for these and I
only care about the permission values anyway, I'll drop the new enums and
use the approach that's already in place.

Thanks!
Tamas



* Re: [PATCH for-4.5 v6 14/17] xen/arm: Instruction prefetch abort (X) mem_event handling
  2014-09-18 18:59   ` Ian Campbell
@ 2014-09-18 20:12     ` Tamas K Lengyel
  0 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-18 20:12 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


> > +#ifdef CONFIG_ARM_32
> > +    gva = READ_CP32(HIFAR);
> > +#else
> > +    gva = READ_SYSREG64(FAR_EL2);
> > +#endif
>
> You can just use READ_SYSREG(FAR_EL2) here and it will do the right
> thing without the ifdef.
> [...]
> > +    addr = READ_SYSREG(FAR_EL2);
>

> Like you do here ;-)
>

Ack.


> > index b844f1d..044de12 100644
> > --- a/xen/include/asm-arm/processor.h
> > +++ b/xen/include/asm-arm/processor.h
> > @@ -292,6 +292,36 @@ enum dabt_dfsc {
> >      DABT_DFSC_TLB_CONFLICT      = 0b110000,
> >  };
> >
> > +/* Instruction abort instruction fault status codes */
> > +enum iabt_ifsc {
> > +    IABT_IFSC_ADDR_SIZE_0       = 0b000000,
>
> Apart from the related comments on the last patch which mostly apply
> here too, aren't these mostly common with the DABT codes?
>

Ack, will be dropped and going to be using pre-existing FSC defines.


>
> > @@ -371,10 +401,18 @@ union hsr {
> >      } sysreg; /* HSR_EC_SYSREG */
> >  #endif
> >
> > +    struct hsr_iabt {
> > +        unsigned long ifsc:6;   /* Instruction fault status code */
> > +        unsigned long res0:1;
> > +        unsigned long s1ptw:1;  /* Fault during a stage 1 translation
> table walk */
> > +        unsigned long res1:1;
> > +        unsigned long ea:1;     /* External abort type */
>
> Please use eat for consistency here.
>
> You should also include the common len/cc/etc bits and sufficient
> padding that the whole thing adds up to 32-bits.
>

Ack.



* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-18 20:09     ` Tamas K Lengyel
@ 2014-09-19  9:05       ` Tamas K Lengyel
  2014-09-22  9:11         ` Ian Campbell
  0 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-19  9:05 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Thu, Sep 18, 2014 at 10:09 PM, Tamas K Lengyel <
tamas.lengyel@zentific.com> wrote:

>
>
>
> On Thu, Sep 18, 2014 at 8:54 PM, Ian Campbell <ian.campbell@citrix.com>
> wrote:
>
>> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>>
>> > -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
>> > +        if ( level < 3 && p2m_access_rwx != a )
>> > +        {
>> > +            /* We create only 4k pages when mem_access is in use. */
>>
>> I wonder if it might turn out cleaner to integrate this check into
>> is_mapping_aligned (which really is more of a "can we use a superpage"
>> function).
>>
>> i.e.
>>         /* mem access cannot use super pages */
>>         if ( a != p2m_access_rwx && level_size != THIRD_SIZE )
>>                 return false;
>>
>
> Ack, that would make sense.
>
>
On a closer look, is_mapping_aligned is also used when removing pages,
where we would need to add an exception... so it doesn't look cleaner
after all.

Tamas


* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-19  9:05       ` Tamas K Lengyel
@ 2014-09-22  9:11         ` Ian Campbell
  2014-09-22 17:18           ` Tamas K Lengyel
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-22  9:11 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel

On Fri, 2014-09-19 at 11:05 +0200, Tamas K Lengyel wrote:
> 
> 
> On Thu, Sep 18, 2014 at 10:09 PM, Tamas K Lengyel
> <tamas.lengyel@zentific.com> wrote:
>         
>         
>         
>         On Thu, Sep 18, 2014 at 8:54 PM, Ian Campbell
>         <ian.campbell@citrix.com> wrote:
>                 On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel
>                 wrote:
>                 
>                 > -        if ( is_mapping_aligned(*addr, end_gpaddr,
>                 0, level_size) )
>                 > +        if ( level < 3 && p2m_access_rwx != a )
>                 > +        {
>                 > +            /* We create only 4k pages when
>                 mem_access is in use. */
>                 
>                 I wonder if it might turn out cleaner to integrate
>                 this check into
>                 is_mapping_aligned (which really is more of a "can we
>                 use a superpage"
>                 function).
>                 
>                 i.e.
>                         /* mem access cannot use super pages */
>                         if ( a != p2m_access_rwx && level_size !=
>                 THIRD_SIZE )
>                                 return false;
>         
>         
>         Ack, that would make sense.
>         
>         
>         
> 
> 
> One a closer look, is_mapping_aligned is also used when removing
> pages, where we would need to add an exception.. so it doesn't look
> cleaner after all.

Hrm. The two things I'd like to try and avoid are the open-coding of the
memaccess logic and, more importantly, the weird "empty if clause" thing
which this adds.

Perhaps an is_memaccess_enabled(...) (or /is..._active/used? or
is_superpage_allowed) predicate folded in with the existing
is_mapping_aligned check?
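
For illustration, the call site could then read something like this
(p2m_memaccess_active() standing in for whatever the predicate ends up
being called):

        /* superpages are fine unless mem_access needs 4K granularity */
        else if ( (p2m_access_rwx == a || !p2m_memaccess_active(d)) &&
                  is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )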

BTW, how come you only do this for ALLOCATE and not INSERT? I'd be
surprised if you were hitting the ALLOCATE case, since it is only used
for the non-1:1 dom0 case (which no platform actually uses today).

Ian.


* Re: [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-22  9:11         ` Ian Campbell
@ 2014-09-22 17:18           ` Tamas K Lengyel
  0 siblings, 0 replies; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-22 17:18 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 22, 2014 at 11:11 AM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Fri, 2014-09-19 at 11:05 +0200, Tamas K Lengyel wrote:
> >
> >
> > On Thu, Sep 18, 2014 at 10:09 PM, Tamas K Lengyel
> > <tamas.lengyel@zentific.com> wrote:
> >
> >
> >
> >         On Thu, Sep 18, 2014 at 8:54 PM, Ian Campbell
> >         <ian.campbell@citrix.com> wrote:
> >                 On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel
> >                 wrote:
> >
> >                 > -        if ( is_mapping_aligned(*addr, end_gpaddr,
> >                 0, level_size) )
> >                 > +        if ( level < 3 && p2m_access_rwx != a )
> >                 > +        {
> >                 > +            /* We create only 4k pages when
> >                 mem_access is in use. */
> >
> >                 I wonder if it might turn out cleaner to integrate
> >                 this check into
> >                 is_mapping_aligned (which really is more of a "can we
> >                 use a superpage"
> >                 function).
> >
> >                 i.e.
> >                         /* mem access cannot use super pages */
> >                         if ( a != p2m_access_rwx && level_size !=
> >                 THIRD_SIZE )
> >                                 return false;
> >
> >
> >         Ack, that would make sense.
> >
> >
> >
> >
> >
> > One a closer look, is_mapping_aligned is also used when removing
> > pages, where we would need to add an exception.. so it doesn't look
> > cleaner after all.
>
> Hrm. The two things I'd like to try and avoid are the open coding of the
> memaccess logic and more importantly the weird "empty if clause" thing
> which this adds.
>
> Perhaps an is_memaccess_enabled(...) (or /is..._active/used? or
> is_superpage_allowed) predicate folded in with the existing
> is_mapping_aligned check?
>
> BTW, how come you only do this for ALLOCATE and not INSERT? I'd be
> surprised if you were hitting the ALLOCATE case since it is only used
> for the non-1:1 dom0 case (which no platform actually uses today)
>
> Ian.
>

Hm, I might have lost the block which does this for INSERT as well in
this revision.

Tamas


* Re: [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access on ARM
  2014-09-18 19:02   ` Ian Campbell
@ 2014-09-22 18:48     ` Tamas K Lengyel
  2014-09-23 12:18       ` Ian Campbell
  0 siblings, 1 reply; 47+ messages in thread
From: Tamas K Lengyel @ 2014-09-22 18:48 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel


On Thu, Sep 18, 2014 at 9:02 PM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> > +static inline int test_and_set_bit(int nr, volatile void *addr)
> > +{
> > +        unsigned int mask = BIT_MASK(nr);
> > +        volatile unsigned int *p =
> > +                ((volatile unsigned int *)addr) + BIT_WORD(nr);
> > +        unsigned int old = *p;
> > +
> > +        *p = old | mask;
> > +        return (old & mask) != 0;
> > +}
>
> This doesn't need to be / is not intended to be atomic, right?
>

I took this function directly from xen/include/asm-arm/bitops.h. Since
the x86 assembly is missing "lock;", I assume this doesn't need to be
atomic, but I may be wrong. However, if the x86 assembly should actually
have "lock;", as it does in xen/include/asm-x86/bitops.h, then the ARM
side should be atomic too, AFAIU. Do we have an atomic test_and_set_bit
implementation for ARM lying around somewhere?

Tamas


* Re: [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access on ARM
  2014-09-22 18:48     ` Tamas K Lengyel
@ 2014-09-23 12:18       ` Ian Campbell
  2014-10-01 17:32         ` Aravindh Puthiyaparambil (aravindp)
  0 siblings, 1 reply; 47+ messages in thread
From: Ian Campbell @ 2014-09-23 12:18 UTC (permalink / raw)
  To: Tamas K Lengyel, Aravindh Puthiyaparambil
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel

On Mon, 2014-09-22 at 20:48 +0200, Tamas K Lengyel wrote:
> 
> 
> On Thu, Sep 18, 2014 at 9:02 PM, Ian Campbell
> <ian.campbell@citrix.com> wrote:
>         On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>         > +static inline int test_and_set_bit(int nr, volatile void
>         *addr)
>         > +{
>         > +        unsigned int mask = BIT_MASK(nr);
>         > +        volatile unsigned int *p =
>         > +                ((volatile unsigned int *)addr) +
>         BIT_WORD(nr);
>         > +        unsigned int old = *p;
>         > +
>         > +        *p = old | mask;
>         > +        return (old & mask) != 0;
>         > +}
>         
>         This doesn't need to be / is not intended to be atomic, right?
> 
> 
> This function I took from xen/include/asm-arm/bitops.h directly. Since
> the x86 assembly is missing "lock;" I assume this doesn't need to be
> atomic but I may be wrong.

Looks like you have taken __test_and_set_bit (the non-atomic version)
and renamed it. Plain test_and_set_bit is supposed to be atomic I think.

>  However, in case the x86 assembly should actually have "lock;"
> similarly to how it is in xen/include/asm-x86/bitops.h, then the ARM
> side should be atomic too AFAIU.

I don't know how xen-access.c uses them so I don't know if they are
supposed to be atomic or not, but the comment above the x86 version says
they are and then defines one which is not... No idea if it is the
comment or the code which is wrong though.

It seems this is used to define some sort of spinlock implementation.
The code isn't multithreaded, so I don't know why a lock is needed (or,
if it is, why a normal userspace pthread lock isn't good enough). I hope
this isn't sharing a lock with the hypervisor or something! Maybe all
that locking can just be ditched, or converted to a standard lock.
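
(If converted, a plain pthread mutex would look something like this --
the names here are made up, not what xen-access.c actually uses:)

    #include <pthread.h>

    static pthread_mutex_t ring_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* stands in for the hand-rolled spinlock around the event ring */
    static void ring_lock(void)   { pthread_mutex_lock(&ring_mutex); }
    static void ring_unlock(void) { pthread_mutex_unlock(&ring_mutex); }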

CCing Aravindh, who's been touching this most recently.
>  Do we have an atomic test_and_set_bit implementation for ARM laying
> around somewhere?

xen/arch/arm/arm32/lib has testsetbit.S, which uses bitops.S. It's a bit
non-trivial. IIRC gcc these days has some intrinsics which might be
usable. As above, I hope it won't be needed though...
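
For example, a minimal sketch with the gcc __atomic builtins (assuming a
gcc >= 4.7 toolchain is acceptable for the tools build; untested):

    static inline int test_and_set_bit(int nr, volatile void *addr)
    {
        unsigned int mask = BIT_MASK(nr);
        volatile unsigned int *p =
                ((volatile unsigned int *)addr) + BIT_WORD(nr);

        /* atomically OR the mask in and report the bit's old value */
        return (__atomic_fetch_or(p, mask, __ATOMIC_SEQ_CST) & mask) != 0;
    }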


* Re: [PATCH for-4.5 v6 17/17] tools/tests: Enable xen-access on ARM
  2014-09-23 12:18       ` Ian Campbell
@ 2014-10-01 17:32         ` Aravindh Puthiyaparambil (aravindp)
  0 siblings, 0 replies; 47+ messages in thread
From: Aravindh Puthiyaparambil (aravindp) @ 2014-10-01 17:32 UTC (permalink / raw)
  To: Ian Campbell, Tamas K Lengyel
  Cc: Tim Deegan, Julien Grall, Ian Jackson, xen-devel,
	Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
	Daniel De Graaf, Tamas K Lengyel

>> On Thu, Sep 18, 2014 at 9:02 PM, Ian Campbell
>> <ian.campbell@citrix.com> wrote:
>>         On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
>>         > +static inline int test_and_set_bit(int nr, volatile void
>>         *addr)
>>         > +{
>>         > +        unsigned int mask = BIT_MASK(nr);
>>         > +        volatile unsigned int *p =
>>         > +                ((volatile unsigned int *)addr) +
>>         BIT_WORD(nr);
>>         > +        unsigned int old = *p;
>>         > +
>>         > +        *p = old | mask;
>>         > +        return (old & mask) != 0;
>>         > +}
>>
>>         This doesn't need to be / is not intended to be atomic, right?
>>
>>
>> This function I took from xen/include/asm-arm/bitops.h directly. Since
>> the x86 assembly is missing "lock;" I assume this doesn't need to be
>> atomic but I may be wrong.
>
>Looks like you have taken __test_and_set_bit (the non-atomic version)
>and renamed it. Plain test_and_set_bit is supposed to be atomic I think.
>
>>  However, in case the x86 assembly should actually have "lock;"
>> similarly to how it is in xen/include/asm-x86/bitops.h, then the ARM
>> side should be atomic too AFAIU.
>
>I don't know how xen-access.c uses them so I don't know if they are
>supposed to be atomic or not, but the comment above the x86 version says
>they are and then defines one which is not... No idea if it is the
>comment or the code which is wrong though.
>
>It seems this is used to define some sort of lock spinlock
>implementation. The code isn't multithreaded so I don't know why a lock
>is needed (or if it is why a normal userspace pthread lock isn't good
>enough). I hope this isn't sharing a lock with the hypervisor or
>something! Maybe all that locking can just be ditched, or converted to a
>standard lock.
>
>CCing Aravindh who's been touching this most recently.

Sorry about the delay in replying. I was not involved when this code was
introduced; Joe Epstein was the original author, and I think it was
reviewed by Tim.

Tim, do you remember why this was done?

Thanks,
Aravindh

