* [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

The ARM virtualization extensions provide 2-stage paging, a mechanism
similar to Intel's EPT, which can be used to trace the memory accesses
performed by guest systems. This series sets up the necessary
infrastructure in the ARM code to deliver the event on R/W/X permission
traps. Finally, we turn on the compilation of mem_access and mem_event
on ARM and make the necessary changes on the tools side.

This version of the series has been fully tested and is functional on
both an Arndale board and Intel hardware.

This version is based on staging so as to skip the patches that have
already been merged.

This patch series is also available at:
https://github.com/tklengyel/xen/tree/arm_memaccess11
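
For readers new to the mem_access interface wired up here, the consumer
side looks roughly like the sketch below, modelled on
tools/tests/xen-access. The libxc signatures are assumptions based on
the 4.5-era API, and the ring/event-channel plumbing is elided; see
xen-access.c for the real thing.

    /* Minimal mem_access listener sketch (assumed 4.5-era libxc API). */
    #include <xenctrl.h>

    static int monitor_domain(xc_interface *xch, domid_t domid,
                              unsigned long nr_pages)
    {
        uint32_t evtchn_port;
        void *ring_page;

        /* Set up the mem_event "access" ring for the domain. */
        ring_page = xc_mem_access_enable(xch, domid, &evtchn_port);
        if ( !ring_page )
            return -1;

        /* Revoke write permission: any guest write now traps to Xen,
         * which posts a mem_event_request_t on the ring. */
        if ( xc_set_mem_access(xch, domid, XENMEM_access_rx, 0, nr_pages) )
            return -1;

        /* ... bind evtchn_port, then loop: pull requests off ring_page,
         * react, and post responses to unpause the faulting vCPU ... */
        return 0;
    }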

Julien Grall (1):
  xen/arm: Implement domain_get_maximum_gpfn

Tamas K Lengyel (8):
  xen/arm: p2m changes for mem_access support
  xen/arm: Add p2m_set_permission and p2m_shatter_page helpers.
  xen/arm: Data abort exception (R/W) mem_events.
  xen/arm: Allow hypervisor access to mem_access protected pages
  xen/arm: Instruction prefetch abort (X) mem_event handling
  xen/arm: Enable the compilation of mem_access and mem_event on ARM.
  tools/libxc: Allocate magic page for mem access on ARM
  tools/tests: Enable xen-access on ARM

 config/arm32.mk                     |   1 +
 config/arm64.mk                     |   1 +
 tools/libxc/xc_dom_arm.c            |   6 +-
 tools/tests/xen-access/Makefile     |   9 +-
 tools/tests/xen-access/xen-access.c |  79 +++--
 xen/arch/arm/guestcopy.c            | 120 +++++++-
 xen/arch/arm/mm.c                   |   2 +-
 xen/arch/arm/p2m.c                  | 558 +++++++++++++++++++++++++++++++-----
 xen/arch/arm/traps.c                |  57 +++-
 xen/include/asm-arm/domain.h        |   1 +
 xen/include/asm-arm/p2m.h           |  41 ++-
 xen/include/asm-arm/processor.h     |  13 +-
 xen/include/xsm/dummy.h             |  26 +-
 xen/include/xsm/xsm.h               |  29 +-
 xen/xsm/dummy.c                     |   7 +-
 xen/xsm/flask/hooks.c               |  33 ++-
 16 files changed, 833 insertions(+), 150 deletions(-)

-- 
2.1.0


* [PATCH for-4.5 v11 1/9] xen/arm: p2m changes for mem_access support
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

Add the necessary changes for the page table construction routines to
pass the default access information. We store the p2m_access_t info in
a radix tree, as the PTE lacks enough software-programmable bits.
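
Since p2m_access_t values are small integers, they can be packed
straight into the radix tree's pointer slots with Xen's
radix_tree_int_to_ptr()/radix_tree_ptr_to_int() helpers, which is what
the later patches in this series do. A condensed sketch of the lookup
side (hypothetical helper name; semantics follow the
p2m_get_mem_access() added later in the series):

    /* Sketch: fetch the stored access setting for a gfn. rwx settings
     * are never stored, so a missing entry means unrestricted. */
    static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m,
                                                 unsigned long gfn)
    {
        void *ptr;

        if ( !p2m->access_in_use )     /* tree known empty: skip lookup */
            return p2m_access_rwx;

        ptr = radix_tree_lookup(&p2m->mem_access_settings, gfn);
        if ( !ptr )
            return p2m_access_rwx;

        return radix_tree_ptr_to_int(ptr); /* unpack the stored enum */
    }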

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v11: - Move the inclusion of common/mem_event.h down the series.
v10: - Typo fix and drop reshuffling things that no longer need
      shuffling.
v8: - Drop lock inputs as common mem_access_check is postponed.
    - Resurrect the radix tree with an extra boolean access_in_use flag
      to indicate if the tree is empty to avoid lookups.
v7: - Remove radix tree init/destroy and move p2m_access_t store to page_info.
    - Add p2m_gpfn_lock/unlock functions.
    - Add bool_t lock input to p2m_lookup and apply_p2m_changes so the caller
      can specify if locking should be performed. This is needed in order to
      support mem_access_check from common.
v6: - Move mem_event header include to first patch that needs it.
v5: - #include grouping style-fix.
v4: - Move p2m_get_hostp2m definition here.
---
 xen/arch/arm/p2m.c              | 49 +++++++++++++++++++++++++++--------------
 xen/include/asm-arm/domain.h    |  1 +
 xen/include/asm-arm/p2m.h       | 25 ++++++++++++++++++++-
 xen/include/asm-arm/processor.h |  2 +-
 4 files changed, 59 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 70929fc..0f86088 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -228,7 +228,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
 }
 
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
-                               p2m_type_t t)
+                               p2m_type_t t, p2m_access_t a)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     /* sh, xn and write bit will be defined in the following switches
@@ -346,7 +346,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
          for ( i=0 ; i < LPAE_ENTRIES; i++ )
          {
              pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
-                                    MATTR_MEM, t);
+                                    MATTR_MEM, t, p2m->default_access);
 
              /*
               * First and second level super pages set p2m.table = 0, but
@@ -366,7 +366,8 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
 
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
+    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid,
+                           p2m->default_access);
 
     p2m_write_pte(entry, pte, flush_cache);
 
@@ -469,7 +470,8 @@ static int apply_one_level(struct domain *d,
                            paddr_t *maddr,
                            bool_t *flush,
                            int mattr,
-                           p2m_type_t t)
+                           p2m_type_t t,
+                           p2m_access_t a)
 {
     const paddr_t level_size = level_sizes[level];
     const paddr_t level_mask = level_masks[level];
@@ -498,7 +500,7 @@ static int apply_one_level(struct domain *d,
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
+                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
                 p2m_write_pte(entry, pte, flush_cache);
@@ -533,7 +535,7 @@ static int apply_one_level(struct domain *d,
              (level == 3 || !p2m_table(orig_pte)) )
         {
             /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
+            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
@@ -712,7 +714,9 @@ static int apply_p2m_changes(struct domain *d,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     uint32_t mask,
+                     p2m_type_t t,
+                     p2m_access_t a)
 {
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -805,7 +809,7 @@ static int apply_p2m_changes(struct domain *d,
                                   level, flush_pt, op,
                                   start_gpaddr, end_gpaddr,
                                   &addr, &maddr, &flush,
-                                  mattr, t);
+                                  mattr, t, a);
             if ( ret < 0 ) { rc = ret ; goto out; }
             count += ret;
             /* L3 had better have done something! We cannot descend any further */
@@ -863,7 +867,7 @@ out:
          */
         apply_p2m_changes(d, REMOVE,
                           start_gpaddr, addr + level_sizes[level], orig_maddr,
-                          mattr, p2m_invalid);
+                          mattr, 0, p2m_invalid, d->arch.p2m.default_access);
     }
 
     for ( level = P2M_ROOT_LEVEL; level < 4; level ++ )
@@ -882,7 +886,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, p2m_ram_rw);
+                             0, MATTR_MEM, 0, p2m_ram_rw,
+                             d->arch.p2m.default_access);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -894,7 +899,8 @@ int map_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_mmio_direct);
+                             MATTR_DEV, 0, p2m_mmio_direct,
+                             d->arch.p2m.default_access);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -906,7 +912,8 @@ int unmap_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_invalid);
+                             MATTR_DEV, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -918,7 +925,8 @@ int guest_physmap_add_entry(struct domain *d,
     return apply_p2m_changes(d, INSERT,
                              pfn_to_paddr(gpfn),
                              pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, t);
+                             pfn_to_paddr(mfn), MATTR_MEM, 0, t,
+                             d->arch.p2m.default_access);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -928,7 +936,8 @@ void guest_physmap_remove_page(struct domain *d,
     apply_p2m_changes(d, REMOVE,
                       pfn_to_paddr(gpfn),
                       pfn_to_paddr(gpfn + (1<<page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                      pfn_to_paddr(mfn), MATTR_MEM, 0, p2m_invalid,
+                      d->arch.p2m.default_access);
 }
 
 int arch_grant_map_page_identity(struct domain *d, unsigned long frame,
@@ -1058,6 +1067,8 @@ void p2m_teardown(struct domain *d)
 
     p2m_free_vmid(d);
 
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
     spin_unlock(&p2m->lock);
 }
 
@@ -1083,6 +1094,10 @@ int p2m_init(struct domain *d)
     p2m->max_mapped_gfn = 0;
     p2m->lowest_mapped_gfn = ULONG_MAX;
 
+    p2m->default_access = p2m_access_rwx;
+    p2m->access_in_use = false;
+    radix_tree_init(&p2m->mem_access_settings);
+
 err:
     spin_unlock(&p2m->lock);
 
@@ -1097,7 +1112,8 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, 0, p2m_invalid,
+                              d->arch.p2m.default_access);
 }
 
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
@@ -1111,7 +1127,8 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
                              pfn_to_paddr(INVALID_MFN),
-                             MATTR_MEM, p2m_invalid);
+                             MATTR_MEM, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 787e93c..3d69152 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -17,6 +17,7 @@ struct hvm_domain
 {
     uint64_t              params[HVM_NR_PARAMS];
     struct hvm_iommu      iommu;
+    bool_t                introspection_enabled;
 }  __cacheline_aligned;
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 10bf111..bda4837 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,7 +2,7 @@
 #define _XEN_P2M_H
 
 #include <xen/mm.h>
-
+#include <xen/radix-tree.h>
 #include <xen/p2m-common.h>
 
 #define paddr_bits PADDR_BITS
@@ -48,6 +48,18 @@ struct p2m_domain {
     /* If true, and an access fault comes in and there is no mem_event listener,
      * pause domain. Otherwise, remove access restrictions. */
     bool_t access_required;
+
+    /* Defines if mem_access is in use for the domain. */
+    bool_t access_in_use;
+
+    /* Default P2M access type for each page in the domain: new pages,
+     * swapped in pages, cleared pages, and pages that are ambiguously
+     * retyped get this access type. See definition of p2m_access_t. */
+    p2m_access_t default_access;
+
+    /* Radix tree to store the p2m_access_t settings as the PTEs don't have
+     * enough available bits to store this information. */
+    struct radix_tree_root mem_access_settings;
 };
 
 /* List of possible type for each page in the p2m entry.
@@ -221,6 +233,17 @@ int arch_grant_unmap_page_identity(struct domain *d, unsigned long frame);
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+/* mem_event and mem_access are supported on any ARM guest */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
+static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 07a421c..d74b6f4 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -435,7 +435,7 @@ union hsr {
     struct hsr_dabt {
         unsigned long dfsc:6;  /* Data Fault Status Code */
         unsigned long write:1; /* Write / not Read */
-        unsigned long s1ptw:1; /* */
+        unsigned long s1ptw:1; /* Stage 2 fault during stage 1 translation */
         unsigned long cache:1; /* Cache Maintenance */
         unsigned long eat:1;   /* External Abort Type */
 #ifdef CONFIG_ARM_32
-- 
2.1.0


* [PATCH for-4.5 v11 2/9] xen/arm: Implement domain_get_maximum_gpfn
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, Julien Grall, tim, stefano.stabellini, dgdegra

From: Julien Grall <julien.grall@linaro.org>

The function domain_get_maximum_gpfn returns the maximum gpfn ever
mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this
purpose.

We use this in xenaccess to prevent the user from attempting to set
page permissions on pages which don't exist for the domain, as a
non-arch-specific sanity check.
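
On the tools side this enables a check along the following lines before
issuing set-access requests (hypothetical fragment; assumes the 4.5-era
xc_domain_maximum_gpfn() wrapper around this hypercall):

    /* Refuse pfn ranges beyond the highest gpfn ever mapped. */
    static int check_pfn_range(xc_interface *xch, domid_t domid,
                               uint64_t first_pfn, uint32_t nr)
    {
        long max_gpfn = xc_domain_maximum_gpfn(xch, domid);

        if ( max_gpfn < 0 )
            return -1;
        return ( first_pfn + nr - 1 <= (uint64_t)max_gpfn ) ? 0 : -1;
    }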

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index c5b48ef..439cb01 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -971,7 +971,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    return d->arch.p2m.max_mapped_gfn;
 }
 
 void share_xen_page_with_guest(struct page_info *page,
-- 
2.1.0


* [PATCH for-4.5 v11 3/9] xen/arm: Add p2m_set_permission and p2m_shatter_page helpers.
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
---
v8: Determine level_shift in p2m_shatter_page instead of passing as argument.
---
 xen/arch/arm/p2m.c | 136 ++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 93 insertions(+), 43 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0f86088..7dec1da 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -227,6 +227,76 @@ int p2m_pod_decrease_reservation(struct domain *d,
     return -ENOSYS;
 }
 
+static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
+{
+    /* First apply type permissions */
+    switch ( t )
+    {
+    case p2m_ram_rw:
+        e->p2m.xn = 0;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_ram_ro:
+        e->p2m.xn = 0;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_iommu_map_rw:
+    case p2m_map_foreign:
+    case p2m_grant_map_rw:
+    case p2m_mmio_direct:
+        e->p2m.xn = 1;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_iommu_map_ro:
+    case p2m_grant_map_ro:
+    case p2m_invalid:
+        e->p2m.xn = 1;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_max_real_type:
+        BUG();
+        break;
+    }
+
+    /* Then restrict with access permissions */
+    switch ( a )
+    {
+    case p2m_access_rwx:
+        break;
+    case p2m_access_wx:
+        e->p2m.read = 0;
+        break;
+    case p2m_access_rw:
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_w:
+        e->p2m.read = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_rx:
+    case p2m_access_rx2rw:
+        e->p2m.write = 0;
+        break;
+    case p2m_access_x:
+        e->p2m.write = 0;
+        e->p2m.read = 0;
+        break;
+    case p2m_access_r:
+        e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_n:
+    case p2m_access_n2rwx:
+        e->p2m.read = e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    }
+}
+
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
                                p2m_type_t t, p2m_access_t a)
 {
@@ -258,37 +328,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
         break;
     }
 
-    switch (t)
-    {
-    case p2m_ram_rw:
-        e.p2m.xn = 0;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_ram_ro:
-        e.p2m.xn = 0;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_iommu_map_rw:
-    case p2m_map_foreign:
-    case p2m_grant_map_rw:
-    case p2m_mmio_direct:
-        e.p2m.xn = 1;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_iommu_map_ro:
-    case p2m_grant_map_ro:
-    case p2m_invalid:
-        e.p2m.xn = 1;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_max_real_type:
-        BUG();
-        break;
-    }
+    p2m_set_permission(&e, t, a);
 
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
@@ -452,6 +492,26 @@ static const paddr_t level_masks[] =
 static const paddr_t level_shifts[] =
     { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
 
+static int p2m_shatter_page(struct domain *d,
+                            lpae_t *entry,
+                            unsigned int level,
+                            bool_t flush_cache)
+{
+    const paddr_t level_shift = level_shifts[level];
+    int rc = p2m_create_table(d, entry,
+                              level_shift - PAGE_SHIFT, flush_cache);
+
+    if ( !rc )
+    {
+        struct p2m_domain *p2m = &d->arch.p2m;
+        p2m->stats.shattered[level]++;
+        p2m->stats.mappings[level]--;
+        p2m->stats.mappings[level+1] += LPAE_ENTRIES;
+    }
+
+    return rc;
+}
+
 /*
  * 0   == (P2M_ONE_DESCEND) continue to descend the tree
  * +ve == (P2M_ONE_PROGRESS_*) handled at this level, continue, flush,
@@ -584,14 +644,9 @@ static int apply_one_level(struct domain *d,
             if ( p2m_mapping(orig_pte) )
             {
                 *flush = true;
-                rc = p2m_create_table(d, entry,
-                                      level_shift - PAGE_SHIFT, flush_cache);
+                rc = p2m_shatter_page(d, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
-
-                p2m->stats.shattered[level]++;
-                p2m->stats.mappings[level]--;
-                p2m->stats.mappings[level+1] += LPAE_ENTRIES;
             } /* else: an existing table mapping -> descend */
 
             BUG_ON(!p2m_table(*entry));
@@ -626,15 +681,10 @@ static int apply_one_level(struct domain *d,
                  * and descend.
                  */
                 *flush = true;
-                rc = p2m_create_table(d, entry,
-                                      level_shift - PAGE_SHIFT, flush_cache);
+                rc = p2m_shatter_page(d, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
 
-                p2m->stats.shattered[level]++;
-                p2m->stats.mappings[level]--;
-                p2m->stats.mappings[level+1] += LPAE_ENTRIES;
-
                 return P2M_ONE_DESCEND;
             }
         }
-- 
2.1.0


* [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events.
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

This patch enables storing, setting, checking and delivering LPAE R/W
mem_events. As the LPAE PTEs lack enough available software-programmable
bits, we store the permissions in a radix tree. A custom boolean,
access_in_use, specifies whether the tree is in use, to avoid
unnecessary lookups on an empty tree.
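
On the listener side, each delivered request is answered with a
response that unpauses the faulting vCPU. A condensed, hypothetical
fragment (field names follow public/mem_event.h; ring handling as in
tools/tests/xen-access.c):

    #include <xenctrl.h>
    #include <xen/mem_event.h>

    /* React to one R/W violation: here we simply lift the restriction
     * so the guest can make progress, then echo the flags back so a
     * vCPU paused with MEM_EVENT_FLAG_VCPU_PAUSED is resumed. */
    static void handle_violation(xc_interface *xch, domid_t domid,
                                 const mem_event_request_t *req,
                                 mem_event_response_t *rsp)
    {
        xc_set_mem_access(xch, domid, XENMEM_access_rwx, req->gfn, 1);

        rsp->vcpu_id = req->vcpu_id;
        rsp->flags = req->flags;
        rsp->gfn = req->gfn;
    }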

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v11: - Move the inclusion of common/mem_event.h here, into p2m.h.
     - Flush the tlb in p2m_set_mem_access to cover both the preemption
       and successful finish cases.
v10: - Remove ASSERT from MEMACCESS case.
    - Flush the tlb in the MEMACCESS case as we progress.
    - Typos and style fixes.
v8: - Revert to arch specific p2m_mem_access_check.
    - Retire dabt_dfsc enum and use FSC_FLT defines.
    - Revert to Radix tree approach and use access_in_use flag to
      indicate if the tree is in use or not to avoid unnecessary lookups.
v7: - Removed p2m_shatter_page and p2m_set_permission into separate
      patch.
    - Replace Radix tree settings store with extended struct page_info
      approach. This way the trap handlers can use the MMU directly to
      locate the permission store instead of having to do a tree-lookup.
    - Add p2m_get_entry/set_entry compat functions which are required by
      the common mem_access_check function.
    - Typo fixes.
v6: - Add helper function p2m_shatter_page.
    - Only allocate 4k pages when mem_access is in use.
    - If no setting was found in radix tree but PTE exists,
      return rwx as permission.
    - Move the inclusion of various headers into this patch.
    - Make npfec a const.
v5: - Move p2m_set_entry's logic into apply_one_level via
      a new p2m_op, MEMACCESS.
v4: - Add p2m_mem_access_radix_set function to be called
      when inserting new PTE's and when updating existing entries.
    - Switch p2m_mem_access_check to return bool_t.
    - Use new struct npfec to pass violation info.
v3: - Add new function for updating the PTE entries, p2m_set_entry.
    - Use the new struct npfec to pass violation information.
    - Implement n2rwx, rx2rw and listener required routines.
v2: - Patch been split to ease the review process.
    - Add definitions of data abort data fetch status codes (enum dabt_dfsc)
      and only call p2m_mem_access_check for traps caused by permission violations.
    - Only call p2m_write_pte in p2m_lookup if the PTE permission actually changed.
    - Properly save settings in the Radix tree and pause the VCPU with
      mem_event_vcpu_pause.
---
 xen/arch/arm/p2m.c        | 373 ++++++++++++++++++++++++++++++++++++++++++++--
 xen/arch/arm/traps.c      |  26 +++-
 xen/include/asm-arm/p2m.h |  16 ++
 3 files changed, 401 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7dec1da..ecaa4e3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -5,6 +5,9 @@
 #include <xen/errno.h>
 #include <xen/domain_page.h>
 #include <xen/bitops.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <public/mem_event.h>
 #include <asm/flushtlb.h>
 #include <asm/gic.h>
 #include <asm/event.h>
@@ -414,12 +417,41 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
     return 0;
 }
 
+static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
+                                     p2m_access_t a)
+{
+    int rc;
+
+    if ( p2m_access_rwx == a )
+    {
+        if ( p2m->access_in_use )
+            radix_tree_delete(&p2m->mem_access_settings, pfn);
+
+        return 0;
+    }
+
+    rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
+                           radix_tree_int_to_ptr(a));
+    if ( rc == -EEXIST )
+    {
+        /* If a setting existed already, change it to the new one */
+        radix_tree_replace_slot(
+            radix_tree_lookup_slot(
+                &p2m->mem_access_settings, pfn),
+            radix_tree_int_to_ptr(a));
+        rc = 0;
+    }
+
+    return rc;
+}
+
 enum p2m_operation {
     INSERT,
     ALLOCATE,
     REMOVE,
     RELINQUISH,
     CACHEFLUSH,
+    MEMACCESS,
 };
 
 /* Put any references on the single 4K page referenced by pte.  TODO:
@@ -553,13 +585,22 @@ static int apply_one_level(struct domain *d,
         if ( p2m_valid(orig_pte) )
             return P2M_ONE_DESCEND;
 
-        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
+        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
+           /* We only create superpages when mem_access is not in use. */
+             (level == 3 || (level < 3 && !p2m->access_in_use)) )
         {
             struct page_info *page;
 
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
+                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+                if ( rc < 0 )
+                {
+                    free_domheap_page(page);
+                    return rc;
+                }
+
                 pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
@@ -580,8 +621,8 @@ static int apply_one_level(struct domain *d,
         /*
          * If we get here then we failed to allocate a sufficiently
          * large contiguous region for this level (which can't be
-         * L3). Create a page table and continue to descend so we try
-         * smaller allocations.
+         * L3) or mem_access is in use. Create a page table and
+         * continue to descend so we try smaller allocations.
          */
         rc = p2m_create_table(d, entry, 0, flush_cache);
         if ( rc < 0 )
@@ -591,9 +632,14 @@ static int apply_one_level(struct domain *d,
 
     case INSERT:
         if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
-           /* We do not handle replacing an existing table with a superpage */
-             (level == 3 || !p2m_table(orig_pte)) )
+           /* We do not handle replacing an existing table with a superpage
+            * or when mem_access is in use. */
+             (level == 3 || (!p2m_table(orig_pte) && !p2m->access_in_use)) )
         {
+            rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+            if ( rc < 0 )
+                return rc;
+
             /* New mapping is superpage aligned, make it */
             pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
@@ -709,6 +755,7 @@ static int apply_one_level(struct domain *d,
 
         memset(&pte, 0x00, sizeof(pte));
         p2m_write_pte(entry, pte, flush_cache);
+        p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), p2m_access_rwx);
 
         *addr += level_size;
         *maddr += level_size;
@@ -753,6 +800,46 @@ static int apply_one_level(struct domain *d,
             *addr += PAGE_SIZE;
             return P2M_ONE_PROGRESS_NOP;
         }
+
+    case MEMACCESS:
+        if ( level < 3 )
+        {
+            if ( !p2m_valid(orig_pte) )
+            {
+                *addr += level_size;
+                return P2M_ONE_PROGRESS_NOP;
+            }
+
+            /* Shatter large pages as we descend */
+            if ( p2m_mapping(orig_pte) )
+            {
+                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                if ( rc < 0 )
+                    return rc;
+            } /* else: an existing table mapping -> descend */
+
+            return P2M_ONE_DESCEND;
+        }
+        else
+        {
+            pte = orig_pte;
+
+            if ( !p2m_table(pte) )
+                pte.bits = 0;
+
+            if ( p2m_valid(pte) )
+            {
+                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+                if ( rc < 0 )
+                    return rc;
+
+                p2m_set_permission(&pte, pte.p2m.type, a);
+                p2m_write_pte(entry, pte, flush_cache);
+            }
+
+            *addr += level_size;
+            return P2M_ONE_PROGRESS;
+        }
     }
 
     BUG(); /* Should never get here */
@@ -776,6 +863,8 @@ static int apply_p2m_changes(struct domain *d,
     unsigned int cur_root_table = ~0;
     unsigned int cur_offset[4] = { ~0, };
     unsigned int count = 0;
+    unsigned long sgfn = paddr_to_pfn(start_gpaddr),
+                  egfn = paddr_to_pfn(end_gpaddr);
     bool_t flush = false;
     bool_t flush_pt;
 
@@ -821,6 +910,22 @@ static int apply_p2m_changes(struct domain *d,
             count = 0;
         }
 
+        /*
+         * Preempt setting mem_access permissions as required by XSA-89,
+         * if it's not the last iteration.
+         */
+        if ( op == MEMACCESS && count )
+        {
+            uint32_t progress = paddr_to_pfn(addr) - sgfn + 1;
+
+            if ( (egfn - sgfn) > progress && !(progress & mask)
+                 && hypercall_preempt_check() )
+            {
+                rc = progress;
+                goto out;
+            }
+        }
+
         if ( P2M_ROOT_PAGES > 1 )
         {
             int i;
@@ -886,18 +991,12 @@ static int apply_p2m_changes(struct domain *d,
 
     if ( flush )
     {
-        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
-        unsigned long egfn = paddr_to_pfn(end_gpaddr);
-
         flush_tlb_domain(d);
         iommu_iotlb_flush(d, sgfn, egfn - sgfn);
     }
 
     if ( op == ALLOCATE || op == INSERT )
     {
-        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
-        unsigned long egfn = paddr_to_pfn(end_gpaddr);
-
         p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
         p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
     }
@@ -1284,6 +1383,258 @@ void __init setup_virt_paging(void)
     smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
+{
+    int rc;
+    bool_t violation;
+    xenmem_access_t xma;
+    mem_event_request_t *req;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+
+    /* Mem_access is not in use. */
+    if ( !p2m->access_in_use )
+        return true;
+
+    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
+    if ( rc )
+        return true;
+
+    /* Now check for mem_access violation. */
+    switch ( xma )
+    {
+    case XENMEM_access_rwx:
+        violation = false;
+        break;
+    case XENMEM_access_rw:
+        violation = npfec.insn_fetch;
+        break;
+    case XENMEM_access_wx:
+        violation = npfec.read_access;
+        break;
+    case XENMEM_access_rx:
+    case XENMEM_access_rx2rw:
+        violation = npfec.write_access;
+        break;
+    case XENMEM_access_x:
+        violation = npfec.read_access || npfec.write_access;
+        break;
+    case XENMEM_access_w:
+        violation = npfec.read_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_r:
+        violation = npfec.write_access || npfec.insn_fetch;
+        break;
+    default:
+    case XENMEM_access_n:
+    case XENMEM_access_n2rwx:
+        violation = true;
+        break;
+    }
+
+    if ( !violation )
+        return true;
+
+    /* First, handle rx2rw and n2rwx conversion automatically. */
+    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
+    {
+        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                0, ~0, XENMEM_access_rw);
+        return false;
+    }
+    else if ( xma == XENMEM_access_n2rwx )
+    {
+        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                0, ~0, XENMEM_access_rwx);
+    }
+
+    /* Otherwise, check if there is a memory event listener, and send the message along */
+    if ( !mem_event_check_ring(&v->domain->mem_event->access) )
+    {
+        /* No listener */
+        if ( p2m->access_required )
+        {
+            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
+                                  "no mem_event listener VCPU %d, dom %d\n",
+                                  v->vcpu_id, v->domain->domain_id);
+            domain_crash(v->domain);
+        }
+        else
+        {
+            /* n2rwx was already handled */
+            if ( xma != XENMEM_access_n2rwx )
+            {
+                /* A listener is not required, so clear the access
+                 * restrictions. */
+                rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                        0, ~0, XENMEM_access_rwx);
+            }
+        }
+
+        /* No need to reinject */
+        return false;
+    }
+
+    req = xzalloc(mem_event_request_t);
+    if ( req )
+    {
+        req->reason = MEM_EVENT_REASON_VIOLATION;
+        if ( xma != XENMEM_access_n2rwx )
+            req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        req->gfn = gpa >> PAGE_SHIFT;
+        req->offset =  gpa & ((1 << PAGE_SHIFT) - 1);
+        req->gla = gla;
+        req->gla_valid = npfec.gla_valid;
+        req->access_r = npfec.read_access;
+        req->access_w = npfec.write_access;
+        req->access_x = npfec.insn_fetch;
+        if ( npfec_kind_in_gpt == npfec.kind )
+            req->fault_in_gpt = 1;
+        if ( npfec_kind_with_gla == npfec.kind )
+            req->fault_with_gla = 1;
+        req->vcpu_id = v->vcpu_id;
+
+        mem_access_send_req(v->domain, req);
+        xfree(req);
+    }
+
+    /* Pause the current VCPU */
+    if ( xma != XENMEM_access_n2rwx )
+        mem_event_vcpu_pause(v);
+
+    return false;
+}
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_access_t a;
+    long rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    switch ( access )
+    {
+    case 0 ... ARRAY_SIZE(memaccess) - 1:
+        a = memaccess[access];
+        break;
+    case XENMEM_access_default:
+        a = p2m->default_access;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    /*
+     * Flip access_in_use to true when a permission is set, so as to prevent
+     * allocating or inserting super-pages.
+     */
+    p2m->access_in_use = true;
+
+    /* If request to set default access. */
+    if ( pfn == ~0ul )
+    {
+        p2m->default_access = a;
+        return 0;
+    }
+
+    rc = apply_p2m_changes(d, MEMACCESS,
+                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
+                           0, MATTR_MEM, mask, 0, a);
+
+    flush_tlb_domain(d);
+    iommu_iotlb_flush(d, pfn+start, nr-start);
+
+    if ( rc < 0 )
+        return rc;
+    else if ( rc > 0 )
+        return start + rc;
+
+    return 0;
+}
+
+int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
+                       xenmem_access_t *access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    void *i;
+    unsigned int index;
+
+    static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
+            ACCESS(n),
+            ACCESS(r),
+            ACCESS(w),
+            ACCESS(rw),
+            ACCESS(x),
+            ACCESS(rx),
+            ACCESS(wx),
+            ACCESS(rwx),
+            ACCESS(rx2rw),
+            ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    /* If no setting was ever set, just return rwx. */
+    if ( !p2m->access_in_use )
+    {
+        *access = XENMEM_access_rwx;
+        return 0;
+    }
+
+    /* If request to get default access */
+    if ( gpfn == ~0ull )
+    {
+        *access = memaccess[p2m->default_access];
+        return 0;
+    }
+
+    spin_lock(&p2m->lock);
+    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
+    spin_unlock(&p2m->lock);
+
+    if ( !i )
+    {
+        /*
+         * No setting was found in the Radix tree. Check if the
+         * entry exists in the page-tables.
+         */
+        paddr_t maddr = p2m_lookup(d, gpfn << PAGE_SHIFT, NULL);
+        if ( INVALID_PADDR == maddr )
+            return -ESRCH;
+
+        /* If the entry exists then it's rwx. */
+        *access = XENMEM_access_rwx;
+    }
+    else
+    {
+        /* Setting was found in the Radix tree. */
+        index = radix_tree_ptr_to_int(i);
+        if ( index >= ARRAY_SIZE(memaccess) )
+            return -ERANGE;
+
+        *access = memaccess[index];
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index cda0523..3a60cae 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1869,11 +1869,31 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     info.gva = READ_SYSREG64(FAR_EL2);
 #endif
 
-    if (dabt.s1ptw)
+    rc = gva_to_ipa(info.gva, &info.gpa);
+    if ( -EFAULT == rc )
         goto bad_data_abort;
 
-    rc = gva_to_ipa(info.gva, &info.gpa);
-    if ( rc == -EFAULT )
+    switch ( dabt.dfsc & 0x3f )
+    {
+    case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
+    {
+        const struct npfec npfec = {
+            .read_access = 1,
+            .write_access = dabt.write,
+            .gla_valid = 1,
+            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+    }
+
+    if ( dabt.s1ptw )
         goto bad_data_abort;
 
     /* XXX: Decode the instruction if ISS is not valid */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index bda4837..35d09f9 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -3,6 +3,8 @@
 
 #include <xen/mm.h>
 #include <xen/radix-tree.h>
+#include <public/mem_event.h> /* for mem_event_response_t */
+#include <public/memory.h>
 #include <xen/p2m-common.h>
 
 #define paddr_bits PADDR_BITS
@@ -244,6 +246,20 @@ static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
     return 1;
 }
 
+/* Send mem event based on the access. Boolean return value indicates if trap
+ * needs to be injected into guest. */
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access);
+
+/* Get access type for a pfn
+ * If pfn == -1ul, gets the default access type */
+int p2m_get_mem_access(struct domain *d, unsigned long pfn,
+                       xenmem_access_t *access);
+
 #endif /* _XEN_P2M_H */
 
 /*
-- 
2.1.0


* [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

The guestcopy helpers use the MMU to verify that the given guest has
read/write access to a given page during hypercalls. As we may have
custom mem_access permissions set on these pages (for example, a
listener may have write-protected a page that the guest later passes as
a hypercall buffer), we fall back to a software-based permission and
type check in case the MMU-based approach fails, but only if
access_in_use is set.

These memory accesses are not forwarded to the mem_event listener.
Accesses performed by the hypervisor are currently not part of the
mem_access scheme. This is consistent with the behaviour on the x86
side.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
 xen/arch/arm/guestcopy.c | 120 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 117 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 0173597..b0727b1 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -6,6 +6,111 @@
 
 #include <asm/mm.h>
 #include <asm/guest_access.h>
+#include <asm/p2m.h>
+
+/*
+ * If mem_access is in use it might have been the reason why get_page_from_gva
+ * failed to fetch the page, as it uses the MMU for the permission checking.
+ * Only in these cases do we perform a software-based type check, and fetch
+ * the page if we indeed found a conflicting mem_access setting.
+ */
+static int check_type_get_page(vaddr_t gva, unsigned long flag,
+                               struct page_info** page)
+{
+    long rc;
+    paddr_t ipa;
+    unsigned long maddr;
+    xenmem_access_t xma;
+    p2m_type_t t;
+
+    rc = gva_to_ipa(gva, &ipa);
+    if ( rc < 0 )
+        return rc;
+
+    /*
+     * We do this first as this is faster in the default case when no
+     * permission is set on the page.
+     */
+    rc = p2m_get_mem_access(current->domain, paddr_to_pfn(ipa), &xma);
+    if ( rc < 0 )
+        return rc;
+
+    /* Let's check if mem_access limited the access. */
+    switch ( xma )
+    {
+    default:
+    case XENMEM_access_rwx:
+    case XENMEM_access_rw:
+        return -EFAULT;
+    case XENMEM_access_n2rwx:
+    case XENMEM_access_n:
+    case XENMEM_access_x:
+        break;
+    case XENMEM_access_wx:
+    case XENMEM_access_w:
+        if ( flag == GV2M_READ )
+            break;
+        else return -EFAULT;
+    case XENMEM_access_rx2rw:
+    case XENMEM_access_rx:
+    case XENMEM_access_r:
+        if ( flag == GV2M_WRITE )
+            break;
+        else return -EFAULT;
+    }
+
+    /*
+     * We had a mem_access permission limiting the access, but the page type
+     * could also be limiting, so we need to check that as well.
+     */
+    maddr = p2m_lookup(current->domain, ipa, &t);
+    if ( maddr == INVALID_PADDR )
+        return -EFAULT;
+
+    /*
+     * All page types are readable so we only have to check the
+     * type if writing.
+     */
+    if ( flag == GV2M_WRITE )
+    {
+        switch ( t )
+        {
+        case p2m_ram_rw:
+        case p2m_iommu_map_rw:
+        case p2m_map_foreign:
+        case p2m_grant_map_rw:
+        case p2m_mmio_direct:
+            /* Base type allows writing, we are good to get the page. */
+            break;
+        default:
+            return -EFAULT;
+        }
+    }
+
+    *page = mfn_to_page(maddr >> PAGE_SHIFT);
+    ASSERT(*page);
+
+    if ( unlikely(!get_page(*page, current->domain)) )
+    {
+        *page = NULL;
+        return -EFAULT;
+    }
+
+    return 0;
+}
+
+/*
+ * If mem_access is not in use, we have a fault. If mem_access is in use, do the
+ * software-based type checking.
+ */
+static inline
+int check_mem_access(vaddr_t gva, unsigned long flag, struct page_info **page)
+{
+    if( !current->domain->arch.p2m.access_in_use )
+        return -EFAULT;
+
+    return check_type_get_page(gva, flag, page);
+}
 
 static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
                                               unsigned len, int flush_dcache)
@@ -21,7 +126,10 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
 
         page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
-            return len;
+        {
+            if ( check_mem_access((vaddr_t) to, GV2M_WRITE, &page) < 0 )
+                return len;
+        }
 
         p = __map_domain_page(page);
         p += offset;
@@ -68,7 +176,10 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 
         page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
-            return len;
+        {
+            if ( check_mem_access((vaddr_t) to, GV2M_WRITE, &page) < 0 )
+                return len;
+        }
 
         p = __map_domain_page(page);
         p += offset;
@@ -100,7 +211,10 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
 
         page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
         if ( page == NULL )
-            return len;
+        {
+            if ( check_mem_access((vaddr_t) from, GV2M_READ, &page) < 0 )
+                return len;
+        }
 
         p = __map_domain_page(page);
         p += ((vaddr_t)from & (~PAGE_MASK));
-- 
2.1.0


* [PATCH for-4.5 v11 6/9] xen/arm: Instruction prefetch abort (X) mem_event handling
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

Add the missing structure definition for iabt and update the trap
handling mechanism to inject the exception only if the mem_access
checker decides to do so.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v10: - Minor comment fix for describing s1ptw.
v8: - Revert to arch specific p2m_mem_access_check.
    - Retire iabt_fsc enum and use FSC_FLT instead.
    - Complete the struct definition of hsr_iabt.
v7: - Use the new common mem_access_check.
v6: - Make npfec a const.
v4: - Don't mark instruction fetch violation as read violation.
    - Use new struct npfec to pass violation info.
v2: - Add definition for instruction abort instruction fetch status codes
       (enum iabt_ifsc) and only call p2m_mem_access_check for traps triggered
       for permission violations.
---
 xen/arch/arm/traps.c            | 31 +++++++++++++++++++++++++++++--
 xen/include/asm-arm/processor.h | 11 +++++++++++
 2 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 3a60cae..35135ce 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1845,8 +1845,35 @@ done:
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
-    register_t addr = READ_SYSREG(FAR_EL2);
-    inject_iabt_exception(regs, addr, hsr.len);
+    struct hsr_iabt iabt = hsr.iabt;
+    int rc;
+    paddr_t gpa;
+    register_t gva = READ_SYSREG(FAR_EL2);
+
+    rc = gva_to_ipa(gva, &gpa);
+    if ( -EFAULT == rc )
+        return;
+
+    switch ( iabt.ifsc & 0x3f )
+    {
+    case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
+    {
+        const struct npfec npfec = {
+            .insn_fetch = 1,
+            .gla_valid = 1,
+            .kind = iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(gpa, gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+    }
+
+    inject_iabt_exception(regs, gva, hsr.len);
 }
 
 static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index d74b6f4..da63ccd 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -432,6 +432,17 @@ union hsr {
     } sysreg; /* HSR_EC_SYSREG */
 #endif
 
+    struct hsr_iabt {
+        unsigned long ifsc:6;  /* Instruction fault status code */
+        unsigned long res0:1;
+        unsigned long s1ptw:1; /* Stage 2 fault during stage 1 translation */
+        unsigned long res1:1;
+        unsigned long eat:1;   /* External abort type */
+        unsigned long res2:15;
+        unsigned long len:1;   /* Instruction length */
+        unsigned long ec:6;    /* Exception Class */
+    } iabt; /* HSR_EC_INSTR_ABORT_* */
+
     struct hsr_dabt {
         unsigned long dfsc:6;  /* Data Fault Status Code */
         unsigned long write:1; /* Write / not Read */
-- 
2.1.0


* [PATCH for-4.5 v11 7/9] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

This patch sets up the infrastructure to support mem_access and
mem_event on ARM and turns on their compilation. We define the required
XSM functions.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
---
v8: All MEM_* flags have been converted to HAS_* and moved into config/*.mk

v3: Wrap mem_event related functions in XSM into #ifdef HAS_MEM_ACCESS
       blocks.
    Update XSM hooks in flask to properly wire it up on ARM.

v2: Add CONFIG_MEM_PAGING and CONFIG_MEM_SHARING definitions and
       use them instead of CONFIG_X86.
    Split domctl copy-back and p2m type definitions into separate
       patches and move this patch to the end of the series.
---
 config/arm32.mk         |  1 +
 config/arm64.mk         |  1 +
 xen/include/xsm/dummy.h | 26 ++++++++++++++------------
 xen/include/xsm/xsm.h   | 29 +++++++++++++++++------------
 xen/xsm/dummy.c         |  7 +++++--
 xen/xsm/flask/hooks.c   | 33 ++++++++++++++++++++-------------
 6 files changed, 58 insertions(+), 39 deletions(-)

diff --git a/config/arm32.mk b/config/arm32.mk
index 4f83a63..26b8abd 100644
--- a/config/arm32.mk
+++ b/config/arm32.mk
@@ -13,6 +13,7 @@ HAS_PL011 := y
 HAS_EXYNOS4210 := y
 HAS_OMAP := y
 HAS_NS16550 := y
+HAS_MEM_ACCESS := y
 
 # Use only if calling $(LD) directly.
 LDFLAGS_DIRECT += -EL
diff --git a/config/arm64.mk b/config/arm64.mk
index 4e57b3a..9c4c147 100644
--- a/config/arm64.mk
+++ b/config/arm64.mk
@@ -8,6 +8,7 @@ CFLAGS += #-marm -march= -mcpu= etc
 
 HAS_PL011 := y
 HAS_NS16550 := y
+HAS_MEM_ACCESS := y
 
 # Use only if calling $(LD) directly.
 LDFLAGS_DIRECT += -EL
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index df55e70..f20e89c 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -513,6 +513,20 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
+static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
+
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
 {
@@ -556,18 +570,6 @@ static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
-{
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 6c1c079..4ce089f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -141,6 +141,11 @@ struct xsm_operations {
     int (*hvm_param_nested) (struct domain *d);
     int (*get_vnumainfo) (struct domain *d);
 
+#ifdef HAS_MEM_ACCESS
+    int (*mem_event_control) (struct domain *d, int mode, int op);
+    int (*mem_event_op) (struct domain *d, int op);
+#endif
+
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -149,8 +154,6 @@ struct xsm_operations {
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*hvm_ioreq_server) (struct domain *d, int op);
-    int (*mem_event_control) (struct domain *d, int mode, int op);
-    int (*mem_event_op) (struct domain *d, int op);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -540,6 +543,18 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
     return xsm_ops->get_vnumainfo(d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+{
+    return xsm_ops->mem_event_control(d, mode, op);
+}
+
+static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+{
+    return xsm_ops->mem_event_op(d, op);
+}
+#endif
+
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
 {
@@ -576,16 +591,6 @@ static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int
     return xsm_ops->hvm_ioreq_server(d, op);
 }
 
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
-{
-    return xsm_ops->mem_event_control(d, mode, op);
-}
-
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
-{
-    return xsm_ops->mem_event_op(d, op);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 0826a8b..8eb3050 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -118,6 +118,11 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
+#ifdef HAS_MEM_ACCESS
+    set_to_dummy_if_null(ops, mem_event_control);
+    set_to_dummy_if_null(ops, mem_event_op);
+#endif
+
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
@@ -126,8 +131,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, hvm_ioreq_server);
-    set_to_dummy_if_null(ops, mem_event_control);
-    set_to_dummy_if_null(ops, mem_event_op);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, platform_op);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index df05566..8de5e49 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -577,6 +577,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_iomem_permission:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
+#ifdef HAS_MEM_ACCESS
+    case XEN_DOMCTL_mem_event_op:
+#endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
     case XEN_DOMCTL_shadow_op:
@@ -584,7 +587,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_bind_pt_irq:
     case XEN_DOMCTL_unbind_pt_irq:
     case XEN_DOMCTL_ioport_mapping:
-    case XEN_DOMCTL_mem_event_op:
     /* These have individual XSM hooks (drivers/passthrough/iommu.c) */
     case XEN_DOMCTL_get_device_group:
     case XEN_DOMCTL_test_assign_device:
@@ -1189,6 +1191,18 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
+#ifdef HAS_MEM_ACCESS
+static int flask_mem_event_control(struct domain *d, int mode, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_op(struct domain *d, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+#endif /* HAS_MEM_ACCESS */
+
 #ifdef CONFIG_X86
 static int flask_do_mca(void)
 {
@@ -1299,16 +1313,6 @@ static int flask_hvm_ioreq_server(struct domain *d, int op)
     return current_has_perm(d, SECCLASS_HVM, HVM__HVMCTL);
 }
 
-static int flask_mem_event_control(struct domain *d, int mode, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
-static int flask_mem_event_op(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1577,6 +1581,11 @@ static struct xsm_operations flask_ops = {
     .deassign_device = flask_deassign_device,
 #endif
 
+#ifdef HAS_MEM_ACCESS
+    .mem_event_control = flask_mem_event_control,
+    .mem_event_op = flask_mem_event_op,
+#endif
+
 #ifdef CONFIG_X86
     .do_mca = flask_do_mca,
     .shadow_control = flask_shadow_control,
@@ -1585,8 +1594,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
     .hvm_inject_msi = flask_hvm_inject_msi,
     .hvm_ioreq_server = flask_hvm_ioreq_server,
-    .mem_event_control = flask_mem_event_control,
-    .mem_event_op = flask_mem_event_op,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .platform_op = flask_platform_op,
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH for-4.5 v11 8/9] tools/libxc: Allocate magic page for mem access on ARM
  2014-09-29 11:36 [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (6 preceding siblings ...)
  2014-09-29 11:36 ` [PATCH for-4.5 v11 7/9] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-09-29 11:36 ` Tamas K Lengyel
  2014-09-29 11:36 ` [PATCH for-4.5 v11 9/9] tools/tests: Enable xen-access " Tamas K Lengyel
  2014-09-29 12:17 ` [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
  9 siblings, 0 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/libxc/xc_dom_arm.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index 9b31b1f..13e881e 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -26,9 +26,10 @@
 #include "xg_private.h"
 #include "xc_dom.h"
 
-#define NR_MAGIC_PAGES 2
+#define NR_MAGIC_PAGES 3
 #define CONSOLE_PFN_OFFSET 0
 #define XENSTORE_PFN_OFFSET 1
+#define MEMACCESS_PFN_OFFSET 2
 
 #define LPAE_SHIFT 9
 
@@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
 
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
+    xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
             dom->console_pfn);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
             dom->xenstore_pfn);
+    xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
+            base + MEMACCESS_PFN_OFFSET);
     /* allocated by toolstack */
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
             dom->console_evtchn);
-- 
2.1.0
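
For context, a mem_access listener on the tools side would look this page
up via the new HVM param and map it; a minimal sketch (error handling
omitted, and this is not code from the series):

    uint64_t ring_pfn;
    void *ring_page;

    xc_hvm_param_get(xch, domid, HVM_PARAM_ACCESS_RING_PFN, &ring_pfn);
    ring_page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                     PROT_READ | PROT_WRITE, ring_pfn);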

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH for-4.5 v11 9/9] tools/tests: Enable xen-access on ARM
  2014-09-29 11:36 [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (7 preceding siblings ...)
  2014-09-29 11:36 ` [PATCH for-4.5 v11 8/9] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
@ 2014-09-29 11:36 ` Tamas K Lengyel
  2014-09-29 14:16   ` Julien Grall
  2014-09-29 12:17 ` [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
  9 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 11:36 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, tim, stefano.stabellini, dgdegra,
	Tamas K Lengyel

Define the ARM specific test_and_set_bit functions and switch
to use maximum gpfn as the limit to setting permissions. Also,
move HAS_MEM_ACCESS definition into config.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v6: - Just use xc_domain_maximum_gpfn to get max_gpfn.
v5: - Use the new information returned by getdomaininfo, max_gpfn, to
      set access permissions. On ARM this will include the potential
      memory hole as well which the hypervisor just loops over.
v4: - Take into account multiple guest ram banks on ARM.
    - Move HAS_MEM_ACCESS definition into config/*.mk and only compile
      xen-access when it is defined.
    - Pass CONFIG_X86/CONFIG_ARM flags during compilation in xen-access
      Makefile.
---
 tools/tests/xen-access/Makefile     |  9 ++++-
 tools/tests/xen-access/xen-access.c | 79 ++++++++++++++++++++++++-------------
 2 files changed, 59 insertions(+), 29 deletions(-)

diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
index 65eef99..5056972 100644
--- a/tools/tests/xen-access/Makefile
+++ b/tools/tests/xen-access/Makefile
@@ -7,8 +7,13 @@ CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_xeninclude)
 
-TARGETS-y := 
-TARGETS-$(CONFIG_X86) += xen-access
+CFLAGS-y :=
+CFLAGS-$(CONFIG_X86) := -DCONFIG_X86
+CFLAGS-$(CONFIG_ARM) := -DCONFIG_ARM
+CFLAGS += $(CFLAGS-y)
+
+TARGETS-y :=
+TARGETS-$(HAS_MEM_ACCESS) := xen-access
 TARGETS := $(TARGETS-y)
 
 .PHONY: all
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 6cb382d..40a7143 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -41,22 +41,16 @@
 #include <xenctrl.h>
 #include <xen/mem_event.h>
 
-#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
-#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
-#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
-
-/* Spinlock and mem event definitions */
-
-#define SPIN_LOCK_UNLOCKED 0
+#ifdef CONFIG_X86
 
+#define START_PFN 0ULL
 #define ADDR (*(volatile long *) addr)
+
 /**
  * test_and_set_bit - Set a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
  */
 static inline int test_and_set_bit(int nr, volatile void *addr)
 {
@@ -69,6 +63,43 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
     return oldbit;
 }
 
+#elif CONFIG_ARM
+
+#include <xen/arch-arm.h>
+
+#define PAGE_SHIFT              12
+#define START_PFN               (GUEST_RAM0_BASE >> PAGE_SHIFT)
+#define BITS_PER_WORD           32
+#define BIT_MASK(nr)            (1UL << ((nr) % BITS_PER_WORD))
+#define BIT_WORD(nr)            ((nr) / BITS_PER_WORD)
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ */
+static inline int test_and_set_bit(int nr, volatile void *addr)
+{
+        unsigned int mask = BIT_MASK(nr);
+        volatile unsigned int *p =
+                ((volatile unsigned int *)addr) + BIT_WORD(nr);
+        unsigned int old = *p;
+
+        *p = old | mask;
+        return (old & mask) != 0;
+}
+
+#endif
+
+#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
+#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
+#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
+
+/* Spinlock and mem event definitions */
+
+#define SPIN_LOCK_UNLOCKED 0
+
 typedef int spinlock_t;
 
 static inline void spin_lock(spinlock_t *lock)
@@ -108,7 +139,7 @@ typedef struct mem_event {
 typedef struct xenaccess {
     xc_interface *xc_handle;
 
-    xc_domaininfo_t    *domain_info;
+    int max_gpfn;
 
     mem_event_t mem_event;
 } xenaccess_t;
@@ -212,7 +243,6 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     }
     xenaccess->xc_handle = NULL;
 
-    free(xenaccess->domain_info);
     free(xenaccess);
 
     return 0;
@@ -293,23 +323,17 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
                    (mem_event_sring_t *)xenaccess->mem_event.ring_page,
                    XC_PAGE_SIZE);
 
-    /* Get domaininfo */
-    xenaccess->domain_info = malloc(sizeof(xc_domaininfo_t));
-    if ( xenaccess->domain_info == NULL )
-    {
-        ERROR("Error allocating memory for domain info");
-        goto err;
-    }
+    /* Get max_gpfn */
+    xenaccess->max_gpfn = xc_domain_maximum_gpfn(xenaccess->xc_handle,
+                                                 xenaccess->mem_event.domain_id);
 
-    rc = xc_domain_getinfolist(xenaccess->xc_handle, domain_id, 1,
-                               xenaccess->domain_info);
-    if ( rc != 1 )
+    if ( xenaccess->max_gpfn < 0 )
     {
-        ERROR("Error getting domain info");
+        ERROR("Failed to get max gpfn");
         goto err;
     }
 
-    DPRINTF("max_pages = %"PRIx64"\n", xenaccess->domain_info->max_pages);
+    DPRINTF("max_gpfn = %"PRIx32"\n", xenaccess->max_gpfn);
 
     return xenaccess;
 
@@ -492,8 +516,9 @@ int main(int argc, char *argv[])
         goto exit;
     }
 
-    rc = xc_set_mem_access(xch, domain_id, default_access, 0,
-                           xenaccess->domain_info->max_pages);
+    rc = xc_set_mem_access(xch, domain_id, default_access, START_PFN,
+                           (xenaccess->max_gpfn - START_PFN) );
+
     if ( rc < 0 )
     {
         ERROR("Error %d setting all memory to access type %d\n", rc,
@@ -520,8 +545,8 @@ int main(int argc, char *argv[])
 
             /* Unregister for every event */
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
-            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
-                                   xenaccess->domain_info->max_pages);
+            rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
+                                   (xenaccess->max_gpfn - START_PFN) );
             rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
 
             shutting_down = 1;
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 11:36 [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
                   ` (8 preceding siblings ...)
  2014-09-29 11:36 ` [PATCH for-4.5 v11 9/9] tools/tests: Enable xen-access " Tamas K Lengyel
@ 2014-09-29 12:17 ` Tamas K Lengyel
  2014-09-29 13:37   ` Ian Campbell
  9 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 12:17 UTC (permalink / raw)
  To: konrad.wilk
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf


On Mon, Sep 29, 2014 at 1:36 PM, Tamas K Lengyel <tklengyel@sec.in.tum.de>
wrote:

> The ARM virtualization extension provides 2-stage paging, a similar
> mechanisms
> to Intel's EPT, which can be used to trace the memory accesses performed by
> the guest systems. This series sets up the necessary infrastructure in the
> ARM code
> to deliver the event on R/W/X traps. Finally, we turn on the compilation of
> mem_access and mem_event on ARM and perform the necessary changes to the
> tools side.
>

While the series is marked for-4.5, I certainly don't mind having these
remaining parts delayed till 4.6 as nothing depends on this feature being
in 4.5. It would make some security researchers' lives a lot easier if they
could install a stable Xen release with this feature already in place.
Beyond that I don't think there is an audience for it (yet). IMHO it's
close to being done, but if the general feeling is that it wasn't reviewed
enough and there is some hesitance, I'm OK with a couple more rounds of
reviews.

It has also been proposed that a proper overhead analysis be performed on
this series to show it does not add too much overhead for non-mem_access
users. What that entails is unclear to me, and IMHO it's not an easy task
considering all the corner-cases and use-cases that would need to be
covered to be comprehensive. It has been my goal during this series to
minimize the overhead added and to be on par with the x86 side, but I'm
afraid a more in-depth analysis is not something I can contribute. Of
course, if specific instances of avoidable overhead are pointed out in the
code, I'm happy to fix them.

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 1/9] xen/arm: p2m changes for mem_access support
  2014-09-29 11:36 ` [PATCH for-4.5 v11 1/9] xen/arm: p2m changes for mem_access support Tamas K Lengyel
@ 2014-09-29 12:26   ` Julien Grall
  2014-09-29 12:41     ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Julien Grall @ 2014-09-29 12:26 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: tim, dgdegra, wei.liu2, ian.campbell, stefano.stabellini

Hi Tamas,

On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
>  unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 787e93c..3d69152 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -17,6 +17,7 @@ struct hvm_domain
>  {
>      uint64_t              params[HVM_NR_PARAMS];
>      struct hvm_iommu      iommu;
> +    bool_t                introspection_enabled;

Sorry, I hadn't caught this before...

This field is used by the common code; can't it be moved into
xen/domain.h? Maybe protected by an #ifdef.
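
One possible shape for that movement, purely as a sketch (the exact header
and guard are illustrative, not something from this series):

    /* In a common header such as xen/sched.h: */
    struct domain {
        ...
    #ifdef HAS_MEM_ACCESS
        bool_t introspection_enabled;
    #endif
        ...
    };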

I'm fine if the code movement is done in a follow-up patch. So:

Reviewed-by: Julien Grall <julien.grall@linaro.org>

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-29 11:36 ` [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-09-29 12:35   ` Julien Grall
  2014-09-29 12:47     ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Julien Grall @ 2014-09-29 12:35 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: tim, dgdegra, wei.liu2, ian.campbell, stefano.stabellini

Hi Tamas,

On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
> @@ -776,6 +863,8 @@ static int apply_p2m_changes(struct domain *d,
>      unsigned int cur_root_table = ~0;
>      unsigned int cur_offset[4] = { ~0, };
>      unsigned int count = 0;
> +    unsigned long sgfn = paddr_to_pfn(start_gpaddr),
> +                  egfn = paddr_to_pfn(end_gpaddr);

NIT: const unsigned long

[..]

> +/* Set access type for a region of pfns.
> + * If start_pfn == -1ul, sets the default access type */
> +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access)
> +{


[..]

> +
> +    rc = apply_p2m_changes(d, MEMACCESS,
> +                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
> +                           0, MATTR_MEM, mask, 0, a);
> +
> +    flush_tlb_domain(d);
> +    iommu_iotlb_flush(d, pfn+start, nr-start);

With your solution, when rc == 0 (i.e the call memaccess has been fully
applied), you will have one more TLB flush: one in apply_p2m_changes,
the other one here...

As this code already exists in apply_p2m_changes but in the wrong place,
why didn't you move it later?

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 1/9] xen/arm: p2m changes for mem_access support
  2014-09-29 12:26   ` Julien Grall
@ 2014-09-29 12:41     ` Tamas K Lengyel
  0 siblings, 0 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 12:41 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 29, 2014 at 2:26 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hi Tamas,
>
> On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
> >  unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
> > diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> > index 787e93c..3d69152 100644
> > --- a/xen/include/asm-arm/domain.h
> > +++ b/xen/include/asm-arm/domain.h
> > @@ -17,6 +17,7 @@ struct hvm_domain
> >  {
> >      uint64_t              params[HVM_NR_PARAMS];
> >      struct hvm_iommu      iommu;
> > +    bool_t                introspection_enabled;
>
> Sorry, I hadn't caught this before...
>
> This field is used by the common code; can't it be moved into
> xen/domain.h? Maybe protected by an #ifdef.
>

Do you mean struct domain in xen/sched.h? That could be done, I suppose. I
was debating whether, rather than including this field in the ARM struct,
to just have an empty p2m inline function for ARM that is called from the
common code. IMHO that would be preferable if including this field as it
is now is no good.
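
For illustration, the stub alternative would look something like this (the
function name is hypothetical, not from the series):

    /* asm-arm/p2m.h: a no-op so common code can call it unconditionally. */
    static inline void p2m_enable_introspection(struct domain *d)
    {
        /* Nothing to do on ARM. */
    }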

Tamas


>
> I'm fine, if the code movement is done in a follow-up patch. So:
>
> Reviewed-by: Julien Grall <julien.grall@linaro.org>
>
> Regards,
>
> --
> Julien Grall
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-29 12:35   ` Julien Grall
@ 2014-09-29 12:47     ` Tamas K Lengyel
  2014-09-29 12:52       ` Julien Grall
  0 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 12:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 29, 2014 at 2:35 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hi Tamas,
>
> On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
> > @@ -776,6 +863,8 @@ static int apply_p2m_changes(struct domain *d,
> >      unsigned int cur_root_table = ~0;
> >      unsigned int cur_offset[4] = { ~0, };
> >      unsigned int count = 0;
> > +    unsigned long sgfn = paddr_to_pfn(start_gpaddr),
> > +                  egfn = paddr_to_pfn(end_gpaddr);
>
> NIT: const unsigned long
>

Ack.


>
> [..]
>
> > +/* Set access type for a region of pfns.
> > + * If start_pfn == -1ul, sets the default access type */
> > +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t
> nr,
> > +                        uint32_t start, uint32_t mask, xenmem_access_t
> access)
> > +{
>
>
> [..]
>
> > +
> > +    rc = apply_p2m_changes(d, MEMACCESS,
> > +                           pfn_to_paddr(pfn+start),
> pfn_to_paddr(pfn+nr),
> > +                           0, MATTR_MEM, mask, 0, a);
> > +
> > +    flush_tlb_domain(d);
> > +    iommu_iotlb_flush(d, pfn+start, nr-start);
>
> With your solution, when rc == 0 (i.e the call memaccess has been fully
> applied), you will have one more TLB flush: one in apply_p2m_changes,
> the other one here...
>

No, I don't flush it in apply_p2m_changes, *flush is not set to true in
MEMACCESS in this version.


>
> As this code already exists in apply_p2m_changes but in the wrong place,
> why didn't you move it later?
>

The problem is the preemption case, which just goes to out. I found it
cleaner to just flush the TLB here for both cases instead of having the
preemption case go to a flush: label and then to out. If that's preferred,
I'm OK with that approach too.
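
In code form, the label variant being discussed would look roughly like
this (control flow only; the names and elided parts are illustrative):

    static int apply_p2m_changes(...)
    {
        ...
        if ( preempted )
        {
            rc = progress;    /* let the caller know where to resume */
            goto flush;       /* still flush what was already changed */
        }
        ...
    flush:
        if ( flush )
            flush_tlb_domain(d);
    out:
        ...
        return rc;
    }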

Tamas


>
> Regards,
>
> --
> Julien Grall
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-29 12:47     ` Tamas K Lengyel
@ 2014-09-29 12:52       ` Julien Grall
  2014-09-29 12:53         ` Julien Grall
  0 siblings, 1 reply; 31+ messages in thread
From: Julien Grall @ 2014-09-29 12:52 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel

On 09/29/2014 01:47 PM, Tamas K Lengyel wrote:
>     > +/* Set access type for a region of pfns.
>     > + * If start_pfn == -1ul, sets the default access type */
>     > +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
>     > +                        uint32_t start, uint32_t mask, xenmem_access_t access)
>     > +{
> 
> 
>     [..]
> 
>     > +
>     > +    rc = apply_p2m_changes(d, MEMACCESS,
>     > +                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
>     > +                           0, MATTR_MEM, mask, 0, a);
>     > +
>     > +    flush_tlb_domain(d);
>     > +    iommu_iotlb_flush(d, pfn+start, nr-start);
> 
>     With your solution, when rc == 0 (i.e the call memaccess has been fully
>     applied), you will have one more TLB flush: one in apply_p2m_changes,
>     the other one here...
> 
> 
> No, I don't flush it in apply_p2m_changes, *flush is not set to true in
> MEMACCESS in this version.

Oh right, sorry, I hadn't noticed it.

>     As this code already exists in apply_p2m_changes but in the wrong place,
>     why didn't you move it later?
> 
> 
> The problem is the preemption case, which just goes to out. I found
> it cleaner to just flush the TLB here for both cases instead of having
> the preemption case go to a flush: label and then to out. If that's
> preferred, I'm OK with that approach too.

What is the issue with doing:

out:
   if ( flush )
   {
      TLB flush
   }

   if ( rc < 0 && ( op == INSERT ....

   ...

   return rc;

I would prefer if you have to duplicate the flush here and at the same
time you will fix other flush issue in the different path in case of error.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events.
  2014-09-29 12:52       ` Julien Grall
@ 2014-09-29 12:53         ` Julien Grall
  0 siblings, 0 replies; 31+ messages in thread
From: Julien Grall @ 2014-09-29 12:53 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel

On 09/29/2014 01:52 PM, Julien Grall wrote:
> On 09/29/2014 01:47 PM, Tamas K Lengyel wrote:
>>     > +/* Set access type for a region of pfns.
>>     > + * If start_pfn == -1ul, sets the default access type */
>>     > +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
>>     > +                        uint32_t start, uint32_t mask, xenmem_access_t access)
>>     > +{
>>
>>
>>     [..]
>>
>>     > +
>>     > +    rc = apply_p2m_changes(d, MEMACCESS,
>>     > +                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
>>     > +                           0, MATTR_MEM, mask, 0, a);
>>     > +
>>     > +    flush_tlb_domain(d);
>>     > +    iommu_iotlb_flush(d, pfn+start, nr-start);
>>
>>     With your solution, when rc == 0 (i.e the call memaccess has been fully
>>     applied), you will have one more TLB flush: one in apply_p2m_changes,
>>     the other one here...
>>
>>
>> No, I don't flush it in apply_p2m_changes, *flush is not set to true in
>> MEMACCESS in this version.
> 
> Oh right, sorry, I hadn't noticed it.
> 
>>     As this code already exists in apply_p2m_changes but in the wrong place,
>>     why didn't you move it later?
>>
>>
>> The problem is the preemption case, which just goes to out. I found
>> it cleaner to just flush the TLB here for both cases instead of having
>> the preemption case go to a flush: label and then to out. If that's
>> preferred, I'm OK with that approach too.
> 
> What is the issue with doing:
> 
> out:
>    if ( flush )
>    {
>       TLB flush
>    }
> 
>    if ( rc < 0 && ( op == INSERT ....
> 
>    ...
> 
>    return rc;
> 
> I would prefer if you have to duplicate the flush here and at the same

I meant avoid of course.

> time you will fix other flush issue in the different path in case of error.
> 
> Regards,
> 


-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 12:17 ` [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
@ 2014-09-29 13:37   ` Ian Campbell
  2014-09-29 14:21     ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Ian Campbell @ 2014-09-29 13:37 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Tim Deegan, xen-devel, Stefano Stabellini, Daniel De Graaf

On Mon, 2014-09-29 at 14:17 +0200, Tamas K Lengyel wrote:
> On Mon, Sep 29, 2014 at 1:36 PM, Tamas K Lengyel
> <tklengyel@sec.in.tum.de> wrote:
>         The ARM virtualization extension provides 2-stage paging, a
>         similar mechanisms
>         to Intel's EPT, which can be used to trace the memory accesses
>         performed by
>         the guest systems. This series sets up the necessary
>         infrastructure in the ARM code
>         to deliver the event on R/W/X traps. Finally, we turn on the
>         compilation of
>         mem_access and mem_event on ARM and perform the necessary
>         changes to the tools side.
> 
> 
> While the series is marked for-4.5, I certainly don't mind having
> these remaining parts delayed till 4.6 as nothing depends on this
> feature being in 4.5. It would make some security researchers' lives a
> lot easier if they could install a stable Xen release with this
> feature already in place. Beyond that I don't think there is an
> audience for it (yet). IMHO it's close to being done, but if the
> general feeling is that it wasn't reviewed enough and there is some
> hesitance, I'm OK with a couple more rounds of reviews.

I think we've got most (all?) of the generic/x86 code refactoring in and
we could consider taking some of the obvious ARM refactoring (e.g. Add
"p2m_set_permission and p2m_shatter_page helpers.") but that the bulk of
the functionality is now 4.6 material.

I'm sorry to be reaching this conclusion after previously being fairly
confident but the change to the copy to/from guest in the previous round
made me take a step back and realise that I had gotten caught up in all
the excitement/rush to get things in. Looking at it with a more level
head now it is touching some pretty core code, with potentially unknown
performance implications and we are quite a way past the feature freeze
already.

Having got the major bits of refactoring in should make this series far
easier to rebase once the 4.6 dev cycle opens and/or for folks who want
to make use of this stuff to apply locally.

Sorry again for not reaching this conclusion sooner.

> It has also been proposed that a proper overhead analysis be
> performed on this series to show it does not add too much overhead
> for non-mem_access users. What that entails is unclear to me, and IMHO
> it's not an easy task considering all the corner-cases and use-cases
> that would need to be covered to be comprehensive. It has been my goal
> during this series to minimize the overhead added and to be on par
> with the x86 side, but I'm afraid a more in-depth analysis is not
> something I can contribute. Of course, if specific instances of
> avoidable overhead are pointed out in the code, I'm happy to fix them.

What we would need is some evidence that there is no regression when
xenaccess is not in use, for at least some common benchmarks, e.g.
hackbench and kernbench. Not asking for every corner case etc, just some
basic stuff. Stefano, do you have any thoughts on other small/simple
benchmarks?

Ian.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages
  2014-09-29 11:36 ` [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages Tamas K Lengyel
@ 2014-09-29 14:12   ` Julien Grall
  2014-09-29 14:44     ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Julien Grall @ 2014-09-29 14:12 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: tim, dgdegra, wei.liu2, ian.campbell, stefano.stabellini

Hi Tamas,

On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
> The guestcopy helpers use the MMU to verify that the given guest has read/write
> access to a given page during hypercalls. As we may have custom mem_access
> permissions set on these pages, we do a software-based type checking in case
> the MMU based approach failed, but only if access_in_use is set.
> 
> These memory accesses are not forwarded to the mem_event listener. Accesses
> performed by the hypervisor are currently not part of the mem_access scheme.
> This is consistent behaviour with the x86 side as well.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> ---
>  xen/arch/arm/guestcopy.c | 120 +++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 117 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 0173597..b0727b1 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -6,6 +6,111 @@
>  
>  #include <asm/mm.h>
>  #include <asm/guest_access.h>
> +#include <asm/p2m.h>
> +
> +/*
> + * If mem_access is in use it might have been the reason why get_page_from_gva
> + * failed to fetch the page, as it uses the MMU for the permission checking.
> + * Only in these cases we do a software-based type check and fetch the page if
> + * we indeed found a conflicting mem_access setting.
> + */
> +static int check_type_get_page(vaddr_t gva, unsigned long flag,
> +                               struct page_info** page)
> +{
> +    long rc;
> +    paddr_t ipa;
> +    unsigned long maddr;
> +    xenmem_access_t xma;
> +    p2m_type_t t;
> +
> +    rc = gva_to_ipa(gva, &ipa);

I think you have to extend gva_to_ipa to take the flag as a parameter.
Otherwise you may end up writing to a read-only page from the guest's POV.
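
That would mean something along these lines (a hypothetical extension; the
current helper only takes the gva and an output ipa):

    int gva_to_ipa(vaddr_t va, paddr_t *paddr, unsigned int flags);

    rc = gva_to_ipa(gva, &ipa, flag);   /* flag: GV2M_READ or GV2M_WRITE */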


> +    if ( rc < 0 )
> +        return rc;
> +
> +    /*
> +     * We do this first as this is faster in the default case when no
> +     * permission is set on the page.
> +     */
> +    rc = p2m_get_mem_access(current->domain, paddr_to_pfn(ipa), &xma);
> +    if ( rc < 0 )
> +        return rc;
> +
> +    /* Let's check if mem_access limited the access. */
> +    switch ( xma )
> +    {
> +    default:
> +    case XENMEM_access_rwx:

access_rwx is used to say there is no permission, right? If so, why
don't you continue to check permission?

> +    case XENMEM_access_rw:
> +        return -EFAULT;
> +    case XENMEM_access_n2rwx:
> +    case XENMEM_access_n:
> +    case XENMEM_access_x:
> +        break;
> +    case XENMEM_access_wx:
> +    case XENMEM_access_w:
> +        if ( flag == GV2M_READ )
> +            break;
> +        else return -EFAULT;
> +    case XENMEM_access_rx2rw:
> +    case XENMEM_access_rx:
> +    case XENMEM_access_r:
> +        if ( flag == GV2M_WRITE )
> +            break;
> +        else return -EFAULT;
> +    }
> +
> +    /*
> +     * We had a mem_access permission limiting the access, but the page type
> +     * could also be limiting, so we need to check that as well.
> +     */

With your current solution, if a hypercall is trying to read/write to a
restricted hypercall page, it will fail.

I thought the goal was to skip the mem_access stuff even if the access is
restricted?

> +    maddr = p2m_lookup(current->domain, ipa, &t);
> +    if ( maddr == INVALID_PADDR )
> +        return -EFAULT;
> +
> +    /*
> +     * All page types are readable so we only have to check the
> +     * type if writing.
> +     */
> +    if ( flag == GV2M_WRITE )
> +    {
> +        switch ( t )
> +        {
> +        case p2m_ram_rw:
> +        case p2m_iommu_map_rw:
> +        case p2m_map_foreign:
> +        case p2m_grant_map_rw:
> +        case p2m_mmio_direct:
> +            /* Base type allows writing, we are good to get the page. */
> +            break;
> +        default:
> +            return -EFAULT;
> +        }
> +    }
> +
> +    *page = mfn_to_page(maddr >> PAGE_SHIFT);

You can use maddr_to_page here.

> +    ASSERT(*page);

mfn_to_page only returns a valid pointer if the MFN is valid (see
mfn_valid).
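
Concretely, that would mean adding a validity check before the page
lookup, e.g. (sketch):

    if ( !mfn_valid(maddr >> PAGE_SHIFT) )
        return -EFAULT;

    *page = mfn_to_page(maddr >> PAGE_SHIFT);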

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 6/9] xen/arm: Instruction prefetch abort (X) mem_event handling
  2014-09-29 11:36 ` [PATCH for-4.5 v11 6/9] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
@ 2014-09-29 14:13   ` Julien Grall
  0 siblings, 0 replies; 31+ messages in thread
From: Julien Grall @ 2014-09-29 14:13 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: tim, dgdegra, wei.liu2, ian.campbell, stefano.stabellini

Hi Tamas,

On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
> Add missing structure definition for iabt and update the trap handling
> mechanism to only inject the exception if the mem_access checker
> decides to do so.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by:  Julien Grall <julien.grall@linaro.org>

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 9/9] tools/tests: Enable xen-access on ARM
  2014-09-29 11:36 ` [PATCH for-4.5 v11 9/9] tools/tests: Enable xen-access " Tamas K Lengyel
@ 2014-09-29 14:16   ` Julien Grall
  0 siblings, 0 replies; 31+ messages in thread
From: Julien Grall @ 2014-09-29 14:16 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: tim, dgdegra, wei.liu2, ian.campbell, stefano.stabellini

Hello Tamas,

On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
> Define the ARM specific test_and_set_bit functions and switch
> to use maximum gpfn as the limit to setting permissions. Also,
> move HAS_MEM_ACCESS definition into config.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> ---
> v6: - Just use xc_domain_maximum_gpfn to get max_gpfn.
> v5: - Use the new information returned by getdomaininfo, max_gpfn, to
>       set access permissions. On ARM this will include the potential
>       memory hole as well which the hypervisor just loops over.
> v4: - Take into account multiple guest ram banks on ARM.
>     - Move HAS_MEM_ACCESS definition into config/*.mk and only compile
>       xen-access when it is defined.
>     - Pass CONFIG_X86/CONFIG_ARM flags during compilation in xen-access
>       Makefile.
> ---
>  tools/tests/xen-access/Makefile     |  9 ++++-
>  tools/tests/xen-access/xen-access.c | 79 ++++++++++++++++++++++++-------------
>  2 files changed, 59 insertions(+), 29 deletions(-)
> 
> diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
> index 65eef99..5056972 100644
> --- a/tools/tests/xen-access/Makefile
> +++ b/tools/tests/xen-access/Makefile
> @@ -7,8 +7,13 @@ CFLAGS += $(CFLAGS_libxenctrl)
>  CFLAGS += $(CFLAGS_libxenguest)
>  CFLAGS += $(CFLAGS_xeninclude)
>  
> -TARGETS-y := 
> -TARGETS-$(CONFIG_X86) += xen-access
> +CFLAGS-y :=
> +CFLAGS-$(CONFIG_X86) := -DCONFIG_X86
> +CFLAGS-$(CONFIG_ARM) := -DCONFIG_ARM

Hmmm ... why do you need those flags?

Can't you directly use __aarch64__, __arm__, __i386__ and __x86_64__?

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 13:37   ` Ian Campbell
@ 2014-09-29 14:21     ` Tamas K Lengyel
  2014-09-29 15:07       ` Ian Campbell
  2014-09-30 11:02       ` Stefano Stabellini
  0 siblings, 2 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 14:21 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Tim Deegan, xen-devel, Stefano Stabellini, Daniel De Graaf


On Mon, Sep 29, 2014 at 3:37 PM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Mon, 2014-09-29 at 14:17 +0200, Tamas K Lengyel wrote:
> > On Mon, Sep 29, 2014 at 1:36 PM, Tamas K Lengyel
> > <tklengyel@sec.in.tum.de> wrote:
> >         The ARM virtualization extension provides 2-stage paging, a
> >         similar mechanisms
> >         to Intel's EPT, which can be used to trace the memory accesses
> >         performed by
> >         the guest systems. This series sets up the necessary
> >         infrastructure in the ARM code
> >         to deliver the event on R/W/X traps. Finally, we turn on the
> >         compilation of
> >         mem_access and mem_event on ARM and perform the necessary
> >         changes to the tools side.
> >
> >
> > While the series is marked for-4.5, I certainly don't mind having
> > these remaining parts delayed till 4.6 as nothing depends on this
> > feature being in 4.5. It would make some security researchers' lives a
> > lot easier if they could install a stable Xen release with this
> > feature already in place. Beyond that I don't think there is an
> > audience for it (yet). IMHO it's close to being done, but if the
> > general feeling is that it wasn't reviewed enough and there is some
> > hesitance, I'm OK with a couple more rounds of reviews.
>
> I think we've got most (all?) of the generic/x86 code refactoring in and
> we could consider taking some of the obvious ARM refactoring (e.g. Add
> "p2m_set_permission and p2m_shatter_page helpers.")


That would be most welcome so I don't have to carry those patches.


> but that the bulk of
> the functionality is now 4.6 material.
>
> I'm sorry to be reaching this conclusion after previously being fairly
> confident but the change to the copy to/from guest in the previous round
> made me take a step back and realise that I had gotten caught up in all
> the excitement/rush to get things in. Looking at it with a more level
> head now it is touching some pretty core code, with potentially unknown
> performance implications and we are quite a way past the feature freeze
> already.
>

The patch you reference, which was newly added in the previous round, has
been refactored in this round to avoid adding overhead. If your feeling is
that there might be some other similar cases and you want to delay so you
have more time to look at it just to be sure, that's perfectly
understandable, but IMHO in this version there is no indication that we
are adding any unreasonable overhead.


>
> Having got the major bits of refactoring in should make this series far
> easier to rebase once the 4.6 dev cycle opens and/or for folks who want
> to make use of this stuff to apply locally.
>

Certainly and that is fair.


>
> Sorry again for not reaching this conclusion sooner.
>
> > It has also been proposed that a proper overhead analysis be
> > performed on this series to show it does not add too much overhead
> > for non-mem_access users. What that entails is unclear to me, and IMHO
> > it's not an easy task considering all the corner-cases and use-cases
> > that would need to be covered to be comprehensive. It has been my goal
> > during this series to minimize the overhead added and to be on par
> > with the x86 side, but I'm afraid a more in-depth analysis is not
> > something I can contribute. Of course, if specific instances of
> > avoidable overhead are pointed out in the code, I'm happy to fix them.
>
> What we would need is some evidence that there is no regression when
> xenaccess is not in use, for at least some common benchmarks, e.g.
> hackbench and kernbench. Not asking for every corner case etc, just
> some basic stuff. Stefano, do you have any thoughts on other
> small/simple benchmarks?
>
> Ian.
>

I don't see how those benchmarks would be meaningful for this series.
During normal operation, the only overhead for the domain would be in the
trap handlers checking the boolean flag for whether mem_access is in use
when a permission fault happens in the second-stage translation, which I
have never observed happening during my tests. So those benchmarks don't
really exercise any paths that mem_access touches.
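
The check being described amounts to a single flag test on the stage-2
permission fault path, along these lines (a sketch; access_in_use is the
flag referenced elsewhere in the series, the other names are illustrative):

    /* Stage-2 permission fault handler: */
    if ( !p2m->access_in_use )
        goto inject_abort;            /* mem_access unused: no extra work */

    /* Only now consult the per-page mem_access settings. */
    rc = p2m_mem_access_check(gpa, hsr);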

Tamas

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages
  2014-09-29 14:12   ` Julien Grall
@ 2014-09-29 14:44     ` Tamas K Lengyel
  2014-09-29 14:50       ` Julien Grall
  0 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 14:44 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 29, 2014 at 4:12 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hi Tamas,
>
> On 09/29/2014 12:36 PM, Tamas K Lengyel wrote:
> > The guestcopy helpers use the MMU to verify that the given guest has
> read/write
> > access to a given page during hypercalls. As we may have custom
> mem_access
> > permissions set on these pages, we do a software-based type checking in
> case
> > the MMU based approach failed, but only if access_in_use is set.
> >
> > These memory accesses are not forwarded to the mem_event listener.
> Accesses
> > performed by the hypervisor are currently not part of the mem_access
> scheme.
> > This is consistent behaviour with the x86 side as well.
> >
> > Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> > ---
> >  xen/arch/arm/guestcopy.c | 120
> +++++++++++++++++++++++++++++++++++++++++++++--
> >  1 file changed, 117 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> > index 0173597..b0727b1 100644
> > --- a/xen/arch/arm/guestcopy.c
> > +++ b/xen/arch/arm/guestcopy.c
> > @@ -6,6 +6,111 @@
> >
> >  #include <asm/mm.h>
> >  #include <asm/guest_access.h>
> > +#include <asm/p2m.h>
> > +
> > +/*
> > + * If mem_access is in use it might have been the reason why
> get_page_from_gva
> > + * failed to fetch the page, as it uses the MMU for the permission
> checking.
> > + * Only in these cases we do a software-based type check and fetch the
> page if
> > + * we indeed found a conflicting mem_access setting.
> > + */
> > +static int check_type_get_page(vaddr_t gva, unsigned long flag,
> > +                               struct page_info** page)
> > +{
> > +    long rc;
> > +    paddr_t ipa;
> > +    unsigned long maddr;
> > +    xenmem_access_t xma;
> > +    p2m_type_t t;
> > +
> > +    rc = gva_to_ipa(gva, &ipa);
>
> I think you have to extend gva_to_ipa to take the flag as a parameter.
> Otherwise you may end up writing to a read-only page from the guest's POV.
>
>
> > +    if ( rc < 0 )
> > +        return rc;
> > +
> > +    /*
> > +     * We do this first as this is faster in the default case when no
> > +     * permission is set on the page.
> > +     */
> > +    rc = p2m_get_mem_access(current->domain, paddr_to_pfn(ipa), &xma);
> > +    if ( rc < 0 )
> > +        return rc;
> > +
> > +    /* Let's check if mem_access limited the access. */
> > +    switch ( xma )
> > +    {
> > +    default:
> > +    case XENMEM_access_rwx:
>
> access_rwx is used to say there is no permission, right? If so, why
> don't you continue to check permission?
>

So things are backward here. There has already been an MMU fault
(get_page_from_gva failed) and we are trying to determine the cause of that
fault. If the mem_access permission is rwx or rw, that means mem_access had
nothing to do with the fault, so we go back to the original path.
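
In other words, the order of checks in the copy helpers is roughly (a
sketch; the exact spelling of the flag and arguments differ in the patch):

    page = get_page_from_gva(current->domain, gva, flag);  /* MMU check */
    if ( page == NULL && access_in_use )
        /* The MMU said no: see if a mem_access setting caused the fault. */
        rc = check_type_get_page(gva, flag, &page);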


>
> > +    case XENMEM_access_rw:
> > +        return -EFAULT;
> > +    case XENMEM_access_n2rwx:
> > +    case XENMEM_access_n:
> > +    case XENMEM_access_x:
> > +        break;
> > +    case XENMEM_access_wx:
> > +    case XENMEM_access_w:
> > +        if ( flag == GV2M_READ )
> > +            break;
> > +        else return -EFAULT;
> > +    case XENMEM_access_rx2rw:
> > +    case XENMEM_access_rx:
> > +    case XENMEM_access_r:
> > +        if ( flag == GV2M_WRITE )
> > +            break;
> > +        else return -EFAULT;
> > +    }
> > +
> > +    /*
> > +     * We had a mem_access permission limiting the access, but the page
> type
> > +     * could also be limiting, so we need to check that as well.
> > +     */
>
> With your current solution, if a hypercall is trying to read/write to a
> restricted hypercall page, it will fail.
>
> I thought the goal was to skip the mem_access stuff even if the access is
> restricted?
>

It's the other way around. If the hypercall is trying to read/write to a
mem_access protected page, it will be allowed to go through (provided that
the base-type also allows the operation).


>
> > +    maddr = p2m_lookup(current->domain, ipa, &t);
> > +    if ( maddr == INVALID_PADDR )
> > +        return -EFAULT;
> > +
> > +    /*
> > +     * All page types are readable so we only have to check the
> > +     * type if writing.
> > +     */
> > +    if ( flag == GV2M_WRITE )
> > +    {
> > +        switch ( t )
> > +        {
> > +        case p2m_ram_rw:
> > +        case p2m_iommu_map_rw:
> > +        case p2m_map_foreign:
> > +        case p2m_grant_map_rw:
> > +        case p2m_mmio_direct:
> > +            /* Base type allows writing, we are good to get the page. */
> > +            break;
> > +        default:
> > +            return -EFAULT;
> > +        }
> > +    }
> > +
> > +    *page = mfn_to_page(maddr >> PAGE_SHIFT);
>
> You can use maddr_to_page here.
>

Ack.


>
> > +    ASSERT(*page);
>
> mfn_to_page only returns a valid pointer if the MFN is valid (see
> mfn_valid).
>

Hm, I copied this from get_page_from_gva but I'll remove it.

Thanks!
Tamas


>
> Regards,
>
> --
> Julien Grall
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages
  2014-09-29 14:44     ` Tamas K Lengyel
@ 2014-09-29 14:50       ` Julien Grall
  2014-09-29 14:57         ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Julien Grall @ 2014-09-29 14:50 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel

On 09/29/2014 03:44 PM, Tamas K Lengyel wrote:
>     > +    if ( rc < 0 )
>     > +        return rc;
>     > +
>     > +    /*
>     > +     * We do this first as this is faster in the default case when no
>     > +     * permission is set on the page.
>     > +     */
>     > +    rc = p2m_get_mem_access(current->domain, paddr_to_pfn(ipa), &xma);
>     > +    if ( rc < 0 )
>     > +        return rc;
>     > +
>     > +    /* Let's check if mem_access limited the access. */
>     > +    switch ( xma )
>     > +    {
>     > +    default:
>     > +    case XENMEM_access_rwx:
> 
>     access_rwx is used to say there is no permission, right? If so, why
>     don't you continue to check permission?
> 
> 
> So things are backward here. There has already been an MMU fault
> (get_page_from_gva failed) and we are trying to determine the cause of
> that fault. If the mem_access permission is rwx or rw, that means
> mem_access had nothing to do with the fault, so we go back to the
> original path.

This is very confusing. As we should never go there, I would add an
ASSERT to catch a buggy implementation.

>     > +    ASSERT(*page);
> 
>     mfn_to_page only returns a valid pointer if the MFN is valid (see
>     mfn_valid).
> 
> 
> Hm, I copied this from get_page_from_gva but I'll remove it.

I meant, you forgot to copy the if ( !mfn_valid(mfn) ) return -EFAULT;

But the ASSERT is pointless as the function may return an invalid
pointer which is not NULL.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages
  2014-09-29 14:50       ` Julien Grall
@ 2014-09-29 14:57         ` Tamas K Lengyel
  2014-09-29 15:07           ` Julien Grall
  0 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 14:57 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel


On Mon, Sep 29, 2014 at 4:50 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> On 09/29/2014 03:44 PM, Tamas K Lengyel wrote:
> >     > +    if ( rc < 0 )
> >     > +        return rc;
> >     > +
> >     > +    /*
> >     > +     * We do this first as this is faster in the default case
> when no
> >     > +     * permission is set on the page.
> >     > +     */
> >     > +    rc = p2m_get_mem_access(current->domain, paddr_to_pfn(ipa),
> &xma);
> >     > +    if ( rc < 0 )
> >     > +        return rc;
> >     > +
> >     > +    /* Let's check if mem_access limited the access. */
> >     > +    switch ( xma )
> >     > +    {
> >     > +    default:
> >     > +    case XENMEM_access_rwx:
> >
> >     access_rwx is used to say there is no permission, right? If so, why
> >     don't you continue to check permission?
> >
> >
> > So things are backward here. There has already been a MMU fault
> > (get_page_from_gva failed) and we are trying to determine the cause of
> > that fault. If the mem_access permission is rwx or rw, that means it had
> > nothing to do with it so we go back to the original path.
>
> This is very confusing. As we should never go there, I would add an
> ASSERT to catch a buggy implementation.
>

Not sure what you mean. That's the best approach to really avoid adding any
overhead in those cases where mem_access is not in use. We don't even have
to check if it's in use if get_page_from_gva just works (which I assume is
the vast majority of cases).


>
> >     > +    ASSERT(*page);
> >
> >     mfn_to_page only returns a valid pointer if the MFN is valid (see
> >     mfn_valid).
> >
> >
> > Hm, I copied this from get_page_from_gva but I'll remove it.
>
> I meant, you forgot to copy the if ( !mfn_valid(mfn) ) return -EFAULT;
>
> But the ASSERT is pointless as the function may return an invalid
> pointer which is not NULL.
>

Ah I see. OK.


>
> Regards,
>
> --
> Julien Grall
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 14:21     ` Tamas K Lengyel
@ 2014-09-29 15:07       ` Ian Campbell
  2014-09-29 15:17         ` Tamas K Lengyel
  2014-09-30 11:02       ` Stefano Stabellini
  1 sibling, 1 reply; 31+ messages in thread
From: Ian Campbell @ 2014-09-29 15:07 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Tim Deegan, xen-devel, Stefano Stabellini, Daniel De Graaf

On Mon, 2014-09-29 at 16:21 +0200, Tamas K Lengyel wrote:

> The patch you referenced in the previous round as newly added has been
> refactored in this round to avoid adding overhead. If it's your
> feeling that there might be some other similar cases and you want to
> delay so you have more time to look at it just to be sure, that's
> perfectly understandable, but IMHO in this version there is no
> indication that we are adding any unreasonable overhead.

As I say, I think we need to step back and take our time over this. I
think 4.6 is the right target for this stuff; otherwise we are rushing
and risk slipping in something which has an unexpected impact.


> I don't see how those benchmarks would be meaningful for this series.
> During normal operations, the only overhead for the domain would be in
> the trap handlers checking the boolean flag for whether mem_access is in
> use when a permission fault happens in the second-stage translation,
> which I have never observed happening during my tests. So those
> benchmarks don't really exercise any paths that mem_access touches.

It touches the p2m update code, which is a hot path. It also previously
touched the copy to/from guest paths, which are super hot; if you aren't
doing that anymore then great, but if you are then there is still a
potential for regressions.

But in any case the benchmarks will serve to highlight *unexpected*
regressions and serve as confirmation of what you expect.

For example I think they would have pretty clearly shown
performance due to the copy to/from user changes in your previous
iteration.

Ian.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages
  2014-09-29 14:57         ` Tamas K Lengyel
@ 2014-09-29 15:07           ` Julien Grall
  0 siblings, 0 replies; 31+ messages in thread
From: Julien Grall @ 2014-09-29 15:07 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf, Tamas K Lengyel

On 09/29/2014 03:57 PM, Tamas K Lengyel wrote:
> 
> 
> On Mon, Sep 29, 2014 at 4:50 PM, Julien Grall <julien.grall@linaro.org
> <mailto:julien.grall@linaro.org>> wrote:
> 
>     On 09/29/2014 03:44 PM, Tamas K Lengyel wrote:
>     >     > +    if ( rc < 0 )
>     >     > +        return rc;
>     >     > +
>     >     > +    /*
>     >     > +     * We do this first as this is faster in the default case when no
>     >     > +     * permission is set on the page.
>     >     > +     */
>     >     > +    rc = p2m_get_mem_access(current->domain, paddr_to_pfn(ipa), &xma);
>     >     > +    if ( rc < 0 )
>     >     > +        return rc;
>     >     > +
>     >     > +    /* Let's check if mem_access limited the access. */
>     >     > +    switch ( xma )
>     >     > +    {
>     >     > +    default:
>     >     > +    case XENMEM_access_rwx:
>     >
>     >     access_rwx is used to say there is no permission, right? If so, why
>     >     don't you continue to check permission?
>     >
>     >
>     > So things are backward here. There has already been an MMU fault
>     > (get_page_from_gva failed) and we are trying to determine the cause of
>     > that fault. If the mem_access permission is rwx or rw, that means it had
>     > nothing to do with it so we go back to the original path.
> 
>     This is very confusing. As we should never go there, I would add an
>     ASSERT to catch a buggy implementation.
> 
> 
> Not sure what you mean. That's the best approach to really avoid adding
> any overhead to those cases where mem_access is not in use. We don't
> even have to check if it's in use if get_page_from_gva just works (which
> I assume is the vast majority of the cases).

Hmmm... I was wrong on this part from the beginning. Sorry for the noise.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 15:07       ` Ian Campbell
@ 2014-09-29 15:17         ` Tamas K Lengyel
  2014-09-29 15:21           ` Ian Campbell
  0 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 15:17 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Tim Deegan, xen-devel, Stefano Stabellini, Daniel De Graaf


On Mon, Sep 29, 2014 at 5:07 PM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Mon, 2014-09-29 at 16:21 +0200, Tamas K Lengyel wrote:
>
> > The patch you referenced in the previous round as newly added has been
> > refactored in this round to avoid adding overhead. If it's your
> > feeling that there might be some other similar cases and you want to
> > delay so you have more time to look at it just to be sure, that's
> > perfectly understandable, but IMHO in this version there is no
> > indication that we are adding any unreasonable overhead.
>
> As I say, I think we need to step back and take our time over this. I
> think 4.6 is the right target for this stuff; otherwise we are rushing
> and risk slipping in something which has an unexpected impact.
>

Ack.


>
>
> > I don't see how those benchmarks would be meaningful for this series.
> > During normal operations, the only overhead for the domain would be in
> > the trap handlers checking the boolean flag for whether mem_access is
> > in use when a permission fault happens in the second-stage translation,
> > which I have never observed happening during my tests. So those
> > benchmarks don't really exercise any paths that mem_access touches.
>
> It touches the p2m update code, which is a hot path. It also previously
> touched the copy to/from guest paths, which are super hot; if you aren't
> doing that anymore then great, but if you are then there is still a
> potential for regressions.
>
> But in any case the benchmarks will serve to highlight *unexpected*
> regressions and serve as confirmation of what you expect.
>

OK.


>
> For example I think they would have pretty clearly shown poor
> performance due to the copy to/from user changes in your previous
> iteration.
>
> Ian.
>

You are probably right, but aren't the copy to/from helpers only used if
the guest is issuing hypercalls? I mean, what would the hypervisor copy
in/out of the guest while kernbench is running? That doesn't
really sound right to me.

Tamas

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 15:17         ` Tamas K Lengyel
@ 2014-09-29 15:21           ` Ian Campbell
  2014-09-29 15:29             ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Ian Campbell @ 2014-09-29 15:21 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Tim Deegan, xen-devel, Stefano Stabellini, Daniel De Graaf

On Mon, 2014-09-29 at 17:17 +0200, Tamas K Lengyel wrote:

>         
>         For example I think they would have pretty clearly shown poor
>         performance due to the copy to/from user changes in your
>         previous iteration.
>         
>         Ian.
> 
> 
> You are probably right, but aren't the copy to/from helpers only used
> if the guest is issuing hypercalls? I mean, what would the hypervisor
> copy in/out of the guest while kernbench is running? That
> doesn't really sound right to me.

Guests (and dom0) issue hypercalls as part of the PV ring protocols
(kicking evtchns, mapping and unmapping grant references, etc.) and for a
variety of other things while they are running workloads. For almost all
of those operations some sort of argument struct normally needs to be
copied into and/or out of guest memory (often both).
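
The usual shape is something like this (a sketch only -- the hypercall,
the struct and do_something() are made-up names, but the
copy_from_guest()/copy_to_guest() pattern is the real one):

    long do_example_op(XEN_GUEST_HANDLE_PARAM(xen_example_op_t) arg)
    {
        struct xen_example_op op;

        /* Copy the argument struct in from guest memory... */
        if ( copy_from_guest(&op, arg, 1) )
            return -EFAULT;

        /* ...act on it... */
        op.result = do_something(op.value);

        /* ...and copy the result back out to guest memory. */
        if ( copy_to_guest(arg, &op, 1) )
            return -EFAULT;

        return 0;
    }

So any overhead in the guest-copy helpers is paid on more or less every
hypercall a busy guest makes.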

Ian.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 15:21           ` Ian Campbell
@ 2014-09-29 15:29             ` Tamas K Lengyel
  0 siblings, 0 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 15:29 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Tim Deegan, xen-devel, Stefano Stabellini, Daniel De Graaf


On Mon, Sep 29, 2014 at 5:21 PM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Mon, 2014-09-29 at 17:17 +0200, Tamas K Lengyel wrote:
>
> >
> > >         For example I think they would have pretty clearly shown poor
> > >         performance due to the copy to/from user changes in your
> > >         previous iteration.
> >
> >         Ian.
> >
> >
> > You are probably right, but aren't the copy to/from helpers only used
> > if the guest is issuing hypercalls? I mean, what would the hypervisor
> > copy in/out of the guest while kernbench is running? That
> > doesn't really sound right to me.
>
> Guests (and dom0) issue hypercalls as part of the PV ring protocols
> (kicking evtchns, mapping and unmapping grant references, etc.) and for a
> variety of other things while they are running workloads. For almost all
> of those operations some sort of argument struct normally needs to be
> copied into and/or out of guest memory (often both).
>
> Ian.
>

Ah, OK, that makes sense.

I'll resubmit the refactoring patches from this series that I think could
still go into 4.5 shortly.

Thanks!
Tamas

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
  2014-09-29 14:21     ` Tamas K Lengyel
  2014-09-29 15:07       ` Ian Campbell
@ 2014-09-30 11:02       ` Stefano Stabellini
  1 sibling, 0 replies; 31+ messages in thread
From: Stefano Stabellini @ 2014-09-30 11:02 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Tim Deegan, xen-devel,
	Stefano Stabellini, Daniel De Graaf

Please use plain text for emails.

On Mon, 29 Sep 2014, Tamas K Lengyel wrote:
> On Mon, Sep 29, 2014 at 3:37 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>       > It has also been proposed that a proper analysis of overhead be
>       > performed on this series to show it does not add too much overhead
>       > to non-mem_access users. What that entails is unclear to me and IMHO
>       > it's not an easy task considering all the corner-cases and use-cases
>       > that would need to be covered to be comprehensive. It has been my goal
>       > during this series to minimize the overhead added and to be on-par
>       > with the x86 side, but I'm afraid a more in-depth analysis is not
>       > something I can contribute. Of course, if specific instances of
>       > overhead added are pointed out in the code that can be avoided, I'm
>       > happy to do so.
> 
>       What we would need is some evidence that there is no regression when
>       xenaccess is not in use for at least some common benchmarks, e.g.
>       hackbench and kernbench. Not asking for every corner case
>       etc., just some basic stuff. Stefano, do you have any thoughts on
>       other small/simple benchmarks?
> 
>       Ian.
> 
> 
> I don't see how those benchmarks would be meaningful for this series. During normal operations, the only overhead for the domain would be in
> the trap handlers checking the boolean flag for whether mem_access is in use when a permission fault happens in the second-stage translation, which
> I have never observed happening during my tests. So those benchmarks don't really exercise any paths that mem_access touches.

That is why you should be pretty confident that the benchmarks won't be
a problem for you :-)

FYI it is pretty common to ask for benchmarks for series that change
code on the hot path.

I would suggest running kernbench
(http://ck.kolivas.org/apps/kernbench/kernbench-0.50/kernbench) 3 times
on a VM without your series, 3 times on a VM with your series without
using mem_access, and 3 times on a VM with your series and using
mem_access (the last run is not actually required but would be useful
to know). Then send out the results to the list together with your
configuration (hardware, amount of memory and physical cpus, amount of
memory and number of vcpus assigned to the VM, software versions, etc),
maybe on your 0/9 patch. If you don't own any off-the-shelf hardware and
you cannot disclose benchmark figures for the hardware you have, then
please send out the results in terms of overhead: something like "my
series makes kernbench 1% slower without using mem_access and 10% slower
using mem_access".  I would strongly prefer to have the full results
though.
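
For example (made-up numbers): if baseline kernbench elapsed time is 300s
and it is 303s with your series applied but mem_access unused, that is
(303 - 300) / 300 = 1% overhead; the same arithmetic gives the
mem_access-enabled figure.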

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2014-09-30 11:02 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-29 11:36 [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
2014-09-29 11:36 ` [PATCH for-4.5 v11 1/9] xen/arm: p2m changes for mem_access support Tamas K Lengyel
2014-09-29 12:26   ` Julien Grall
2014-09-29 12:41     ` Tamas K Lengyel
2014-09-29 11:36 ` [PATCH for-4.5 v11 2/9] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
2014-09-29 11:36 ` [PATCH for-4.5 v11 3/9] xen/arm: Add p2m_set_permission and p2m_shatter_page helpers Tamas K Lengyel
2014-09-29 11:36 ` [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
2014-09-29 12:35   ` Julien Grall
2014-09-29 12:47     ` Tamas K Lengyel
2014-09-29 12:52       ` Julien Grall
2014-09-29 12:53         ` Julien Grall
2014-09-29 11:36 ` [PATCH for-4.5 v11 5/9] xen/arm: Allow hypervisor access to mem_access protected pages Tamas K Lengyel
2014-09-29 14:12   ` Julien Grall
2014-09-29 14:44     ` Tamas K Lengyel
2014-09-29 14:50       ` Julien Grall
2014-09-29 14:57         ` Tamas K Lengyel
2014-09-29 15:07           ` Julien Grall
2014-09-29 11:36 ` [PATCH for-4.5 v11 6/9] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
2014-09-29 14:13   ` Julien Grall
2014-09-29 11:36 ` [PATCH for-4.5 v11 7/9] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
2014-09-29 11:36 ` [PATCH for-4.5 v11 8/9] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
2014-09-29 11:36 ` [PATCH for-4.5 v11 9/9] tools/tests: Enable xen-access " Tamas K Lengyel
2014-09-29 14:16   ` Julien Grall
2014-09-29 12:17 ` [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM Tamas K Lengyel
2014-09-29 13:37   ` Ian Campbell
2014-09-29 14:21     ` Tamas K Lengyel
2014-09-29 15:07       ` Ian Campbell
2014-09-29 15:17         ` Tamas K Lengyel
2014-09-29 15:21           ` Ian Campbell
2014-09-29 15:29             ` Tamas K Lengyel
2014-09-30 11:02       ` Stefano Stabellini
