* [PATCH V14 0/7] Mem_access for ARM
@ 2015-03-26 22:05 Tamas K Lengyel
  2015-03-26 22:05 ` [PATCH V14 1/7] xen/arm: groundwork for mem_access support on ARM Tamas K Lengyel
                   ` (7 more replies)
  0 siblings, 8 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir,
	Tamas K Lengyel

The ARM virtualization extension provides 2-stage paging, a mechanism
similar to Intel's EPT, which can be used to trace the memory accesses
performed by guest systems. This series sets up the necessary
infrastructure in the ARM code to deliver mem_access events on R/W/X
traps. Finally, we turn on the compilation of mem_access and mem_event
on ARM and make the necessary changes on the tools side.

This version of the series is based on master, and each patch in the series
has been compile-tested on both ARM and x86.

This PATCH series is also available at:
https://github.com/tklengyel/xen/tree/arm_memaccess14

Julien Grall (1):
  xen/arm: Implement domain_get_maximum_gpfn

Tamas K Lengyel (6):
  xen/arm: groundwork for mem_access support on ARM
  xen/arm: Allow hypervisor access to mem_access protected pages
  xen/arm: Data abort exception (R/W) mem_access events
  xen/arm: Instruction prefetch abort (X) mem_access event handling
  xen/arm: Enable mem_access on ARM
  tools/libxc: Allocate magic page for mem access on ARM

 config/arm32.mk                  |   1 +
 config/arm64.mk                  |   1 +
 tools/libxc/xc_dom_arm.c         |   6 +-
 xen/arch/arm/mm.c                |   2 +-
 xen/arch/arm/p2m.c               | 564 ++++++++++++++++++++++++++++++++++++---
 xen/arch/arm/traps.c             |  79 +++++-
 xen/include/asm-arm/arm32/page.h |   7 +-
 xen/include/asm-arm/arm64/page.h |   7 +-
 xen/include/asm-arm/domain.h     |   1 +
 xen/include/asm-arm/p2m.h        |  32 ++-
 xen/include/asm-arm/page.h       |   4 +-
 xen/include/asm-arm/processor.h  |  13 +-
 xen/include/asm-x86/p2m.h        |  10 -
 xen/include/xen/p2m-common.h     |  10 +
 14 files changed, 668 insertions(+), 69 deletions(-)

-- 
2.1.4


* [PATCH V14 1/7] xen/arm: groundwork for mem_access support on ARM
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
@ 2015-03-26 22:05 ` Tamas K Lengyel
  2015-04-15 13:39   ` Ian Campbell
  2015-03-26 22:05 ` [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir,
	Tamas K Lengyel

Add the necessary changes for the page table construction routines to pass
the default access information and the hypercall continuation mask. Also
define the functions and data fields that will be used later by mem_access.

The p2m_access_t info will be stored in a radix tree, as the PTE lacks
enough software-programmable bits; thus in this patch we add the
radix-tree construction/destruction portions. The tree itself will be
used later by mem_access.
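
For reference, a minimal sketch of how the tree is expected to be consulted
once mem_access is wired up later in the series (the helper names here are
illustrative only; the real store/lookup pattern lands with the
p2m_mem_access_radix_set()/p2m_get_mem_access() code in the data abort
patch):

    /* Sketch: pack a p2m_access_t into a radix-tree slot keyed by gfn.
     * rwx is the implicit default, so it needs no entry at all. */
    static int access_store(struct p2m_domain *p2m, unsigned long gfn,
                            p2m_access_t a)
    {
        if ( a == p2m_access_rwx )
        {
            radix_tree_delete(&p2m->mem_access_settings, gfn);
            return 0;
        }
        return radix_tree_insert(&p2m->mem_access_settings, gfn,
                                 radix_tree_int_to_ptr(a));
    }

    static p2m_access_t access_fetch(struct p2m_domain *p2m,
                                     unsigned long gfn)
    {
        void *v = radix_tree_lookup(&p2m->mem_access_settings, gfn);

        /* No entry means no restriction was ever set on this page. */
        return v ? radix_tree_ptr_to_int(v) : p2m_access_rwx;
    }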

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v14: - Update patch message to be more descriptive.
v13: - Rename access_in_use to mem_access_enabled.
     - Define p2m_get_mem_access function prototype but
        return -ENOSYS for now.
v11: - Move including common/mem_event.h down the series.
v10: - Typo fix and drop reshuffling things that no longer need
      shuffling.
v8: - Drop lock inputs as common mem_access_check is postponed.
    - Resurrect the radix tree with an extra boolean access_in_use flag
      to indicate if the tree is empty to avoid lookups.
v7: - Remove radix tree init/destroy and move p2m_access_t store to page_info.
    - Add p2m_gpfn_lock/unlock functions.
    - Add bool_t lock input to p2m_lookup and apply_p2m_changes so the caller
      can specify if locking should be performed. This is needed in order to
      support mem_access_check from common.
v6: - Move mem_event header include to first patch that needs it.
v5: - #include grouping style-fix.
v4: - Move p2m_get_hostp2m definition here.
---
 xen/arch/arm/p2m.c              | 52 +++++++++++++++++++++++++++--------------
 xen/include/asm-arm/domain.h    |  1 +
 xen/include/asm-arm/p2m.h       | 35 ++++++++++++++++++++++++++-
 xen/include/asm-arm/processor.h |  2 +-
 4 files changed, 70 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8809f5a..137e5a0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -305,7 +305,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
 }
 
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
-                               p2m_type_t t)
+                               p2m_type_t t, p2m_access_t a)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     /* sh, xn and write bit will be defined in the following switches
@@ -335,8 +335,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
         break;
     }
 
-    /* We pass p2m_access_rwx as a placeholder for now. */
-    p2m_set_permission(&e, t, p2m_access_rwx);
+    p2m_set_permission(&e, t, a);
 
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
@@ -394,7 +393,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
          for ( i=0 ; i < LPAE_ENTRIES; i++ )
          {
              pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
-                                    MATTR_MEM, t);
+                                    MATTR_MEM, t, p2m->default_access);
 
              /*
               * First and second level super pages set p2m.table = 0, but
@@ -414,7 +413,8 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
 
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
+    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid,
+                           p2m->default_access);
 
     p2m_write_pte(entry, pte, flush_cache);
 
@@ -537,7 +537,8 @@ static int apply_one_level(struct domain *d,
                            paddr_t *maddr,
                            bool_t *flush,
                            int mattr,
-                           p2m_type_t t)
+                           p2m_type_t t,
+                           p2m_access_t a)
 {
     const paddr_t level_size = level_sizes[level];
     const paddr_t level_mask = level_masks[level];
@@ -566,7 +567,7 @@ static int apply_one_level(struct domain *d,
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
+                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
                 p2m_write_pte(entry, pte, flush_cache);
@@ -601,7 +602,7 @@ static int apply_one_level(struct domain *d,
              (level == 3 || !p2m_table(orig_pte)) )
         {
             /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
+            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
@@ -770,7 +771,9 @@ static int apply_p2m_changes(struct domain *d,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     uint32_t mask,
+                     p2m_type_t t,
+                     p2m_access_t a)
 {
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -863,7 +866,7 @@ static int apply_p2m_changes(struct domain *d,
                                   level, flush_pt, op,
                                   start_gpaddr, end_gpaddr,
                                   &addr, &maddr, &flush,
-                                  mattr, t);
+                                  mattr, t, a);
             if ( ret < 0 ) { rc = ret ; goto out; }
             count += ret;
             /* L3 had better have done something! We cannot descend any further */
@@ -921,7 +924,7 @@ out:
          */
         apply_p2m_changes(d, REMOVE,
                           start_gpaddr, addr + level_sizes[level], orig_maddr,
-                          mattr, p2m_invalid);
+                          mattr, 0, p2m_invalid, d->arch.p2m.default_access);
     }
 
     for ( level = P2M_ROOT_LEVEL; level < 4; level ++ )
@@ -940,7 +943,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, p2m_ram_rw);
+                             0, MATTR_MEM, 0, p2m_ram_rw,
+                             d->arch.p2m.default_access);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -952,7 +956,8 @@ int map_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_mmio_direct);
+                             MATTR_DEV, 0, p2m_mmio_direct,
+                             d->arch.p2m.default_access);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -964,7 +969,8 @@ int unmap_mmio_regions(struct domain *d,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
-                             MATTR_DEV, p2m_invalid);
+                             MATTR_DEV, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -976,7 +982,8 @@ int guest_physmap_add_entry(struct domain *d,
     return apply_p2m_changes(d, INSERT,
                              pfn_to_paddr(gpfn),
                              pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, t);
+                             pfn_to_paddr(mfn), MATTR_MEM, 0, t,
+                             d->arch.p2m.default_access);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -986,7 +993,8 @@ void guest_physmap_remove_page(struct domain *d,
     apply_p2m_changes(d, REMOVE,
                       pfn_to_paddr(gpfn),
                       pfn_to_paddr(gpfn + (1<<page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                      pfn_to_paddr(mfn), MATTR_MEM, 0, p2m_invalid,
+                      d->arch.p2m.default_access);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -1090,6 +1098,8 @@ void p2m_teardown(struct domain *d)
 
     p2m_free_vmid(d);
 
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
     spin_unlock(&p2m->lock);
 }
 
@@ -1115,6 +1125,10 @@ int p2m_init(struct domain *d)
     p2m->max_mapped_gfn = 0;
     p2m->lowest_mapped_gfn = ULONG_MAX;
 
+    p2m->default_access = p2m_access_rwx;
+    p2m->mem_access_enabled = false;
+    radix_tree_init(&p2m->mem_access_settings);
+
 err:
     spin_unlock(&p2m->lock);
 
@@ -1129,7 +1143,8 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, 0, p2m_invalid,
+                              d->arch.p2m.default_access);
 }
 
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
@@ -1143,7 +1158,8 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
                              pfn_to_paddr(INVALID_MFN),
-                             MATTR_MEM, p2m_invalid);
+                             MATTR_MEM, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9e0419e..f1a087e 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -17,6 +17,7 @@ struct hvm_domain
 {
     uint64_t              params[HVM_NR_PARAMS];
     struct hvm_iommu      iommu;
+    bool_t                introspection_enabled;
 }  __cacheline_aligned;
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index da36504..7583d9b 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,8 +2,9 @@
 #define _XEN_P2M_H
 
 #include <xen/mm.h>
-
+#include <xen/radix-tree.h>
 #include <xen/p2m-common.h>
+#include <public/memory.h>
 
 #define paddr_bits PADDR_BITS
 
@@ -48,6 +49,18 @@ struct p2m_domain {
     /* If true, and an access fault comes in and there is no mem_event listener,
      * pause domain. Otherwise, remove access restrictions. */
     bool_t access_required;
+
+    /* Defines if mem_access is in use for the domain. */
+    bool_t mem_access_enabled;
+
+    /* Default P2M access type for each page in the domain: new pages,
+     * swapped in pages, cleared pages, and pages that are ambiguously
+     * retyped get this access type. See definition of p2m_access_t. */
+    p2m_access_t default_access;
+
+    /* Radix tree to store the p2m_access_t settings as the pte's don't have
+     * enough available bits to store this information. */
+    struct radix_tree_root mem_access_settings;
 };
 
 /* List of possible type for each page in the p2m entry.
@@ -217,6 +230,26 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+/* mem_event and mem_access are supported on any ARM guest */
+static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
+static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
+{
+    return 1;
+}
+
+/* Get access type for a pfn
+ * If pfn == -1ul, gets the default access type */
+static inline
+int p2m_get_mem_access(struct domain *d, unsigned long pfn,
+                       xenmem_access_t *access)
+{
+    return -ENOSYS;
+}
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index fcd26fb..cf7ab7c 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -441,7 +441,7 @@ union hsr {
     struct hsr_dabt {
         unsigned long dfsc:6;  /* Data Fault Status Code */
         unsigned long write:1; /* Write / not Read */
-        unsigned long s1ptw:1; /* */
+        unsigned long s1ptw:1; /* Stage 2 fault during stage 1 translation */
         unsigned long cache:1; /* Cache Maintenance */
         unsigned long eat:1;   /* External Abort Type */
 #ifdef CONFIG_ARM_32
-- 
2.1.4


* [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
  2015-03-26 22:05 ` [PATCH V14 1/7] xen/arm: groundwork for mem_access support on ARM Tamas K Lengyel
@ 2015-03-26 22:05 ` Tamas K Lengyel
  2015-04-08 13:23   ` Julien Grall
  2015-03-26 22:05 ` [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages Tamas K Lengyel
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir

From: Julien Grall <julien.grall@linaro.org>

The function domain_get_maximum_gpfn returns the maximum gpfn ever
mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.

We use this in xenaccess to prevent the user from attempting to set page
permissions on pages which don't exist for the domain, as a
non-arch-specific sanity check.
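
As a rough illustration of the tools-side check this enables (a hedged
sketch, not xenaccess's actual code; the libxc calls shown are assumed to
match the wrappers of this era):

    /* Sketch: refuse out-of-range pfns before asking Xen to change
     * access permissions.  On ARM, xc_domain_maximum_gpfn() now
     * reports d->arch.p2m.max_mapped_gfn instead of -ENOSYS. */
    long max_gpfn = xc_domain_maximum_gpfn(xch, domain_id);

    if ( max_gpfn < 0 || pfn > (unsigned long)max_gpfn )
        return -1;   /* page doesn't exist for this domain */

    return xc_set_mem_access(xch, domain_id, XENMEM_access_rw, pfn, 1);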

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7d4ba0c..ca0aa69 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -985,7 +985,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    return d->arch.p2m.max_mapped_gfn;
 }
 
 void share_xen_page_with_guest(struct page_info *page,
-- 
2.1.4


* [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
  2015-03-26 22:05 ` [PATCH V14 1/7] xen/arm: groundwork for mem_access support on ARM Tamas K Lengyel
  2015-03-26 22:05 ` [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
@ 2015-03-26 22:05 ` Tamas K Lengyel
  2015-04-08 14:33   ` Julien Grall
  2015-04-15 13:48   ` Ian Campbell
  2015-03-26 22:05 ` [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events Tamas K Lengyel
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir,
	Tamas K Lengyel

The hypervisor may use the MMU to verify that a given guest has read/write
access to a given page during hypercalls. As we may have custom mem_access
permissions set on these pages, we fall back to a software-based type check
when the MMU-based approach fails, but only if mem_access_enabled is set.

These memory accesses are not forwarded to the mem_event listener: accesses
performed by the hypervisor are currently not part of the mem_access
scheme. This behaviour is consistent with the x86 side.
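
The resulting order of checks can be summarised as follows (a simplified
sketch with a hypothetical helper name standing in for the MMU-based path;
the actual code is in the hunk below):

    /* Sketch: inside get_page_from_gva() after this patch. */
    page = mmu_lookup_and_get_page(va, flags);   /* hardware check first */
    if ( !page && p2m->mem_access_enabled )
        /* The MMU said no -- was that only because of a mem_access
         * restriction?  If so, fetch the page via the software path. */
        page = p2m_mem_access_check_and_get_page(va, flags);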

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v14: - Move software-based lookup into p2m, add comments and clean it up a bit.
     - Only page type allowed is rw.
     - Extend gva_to_ipa to take a flags input for lookup validation.
v12: - Check for mfn_valid as well.
---
 xen/arch/arm/p2m.c               | 101 +++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c             |   2 +-
 xen/include/asm-arm/arm32/page.h |   7 ++-
 xen/include/asm-arm/arm64/page.h |   7 ++-
 xen/include/asm-arm/page.h       |   4 +-
 5 files changed, 114 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 137e5a0..692e0c7 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1168,6 +1168,103 @@ unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
     return p >> PAGE_SHIFT;
 }
 
+/*
+ * If mem_access is in use it might have been the reason why get_page_from_gva
+ * failed to fetch the page, as it uses the MMU for the permission checking.
+ * Only in these cases do we perform a software-based type check and fetch
+ * the page if we indeed found a conflicting mem_access setting.
+ */
+static struct page_info*
+p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
+{
+    long rc;
+    paddr_t ipa;
+    unsigned long maddr;
+    unsigned long mfn;
+    xenmem_access_t xma;
+    p2m_type_t t;
+    struct page_info *page = NULL;
+
+    rc = gva_to_ipa(gva, &ipa, flag);
+    if ( rc < 0 )
+        goto err;
+
+    /*
+     * We do this first as this is faster in the default case when no
+     * permission is set on the page.
+     */
+    rc = p2m_get_mem_access(current->domain, paddr_to_pfn(ipa), &xma);
+    if ( rc < 0 )
+        goto err;
+
+    /* Let's check if mem_access limited the access. */
+    switch ( xma )
+    {
+    default:
+    case XENMEM_access_rwx:
+    case XENMEM_access_rw:
+        /*
+         * If mem_access contains no rw perm restrictions at all then the original
+         * fault was correct.
+         */
+        goto err;
+    case XENMEM_access_n2rwx:
+    case XENMEM_access_n:
+    case XENMEM_access_x:
+        /*
+         * If no r/w is permitted by mem_access, this was a fault caused by mem_access.
+         */
+        break;
+    case XENMEM_access_wx:
+    case XENMEM_access_w:
+        /*
+         * If this was a read then it was because of mem_access, but if it was
+         * a write then the original get_page_from_gva fault was correct.
+         */
+        if ( flag == GV2M_READ )
+            break;
+        else
+            goto err;
+    case XENMEM_access_rx2rw:
+    case XENMEM_access_rx:
+    case XENMEM_access_r:
+        /*
+         * If this was a write then it was because of mem_access, but if it was
+         * a read then the original get_page_from_gva fault was correct.
+         */
+        if ( flag == GV2M_WRITE )
+            break;
+        else
+            goto err;
+    }
+
+    /*
+     * We had a mem_access permission limiting the access, but the page type
+     * could also be limiting, so we need to check that as well.
+     */
+    maddr = p2m_lookup(current->domain, ipa, &t);
+    if ( maddr == INVALID_PADDR )
+        goto err;
+
+    mfn = maddr >> PAGE_SHIFT;
+    if ( !mfn_valid(mfn) )
+        goto err;
+
+    /*
+     * Base type doesn't allow r/w
+     */
+    if ( t != p2m_ram_rw )
+        goto err;
+
+    page = mfn_to_page(mfn);
+
+    if ( unlikely(!get_page(page, current->domain)) )
+        page = NULL;
+
+err:
+    return page;
+}
+
 struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
                                     unsigned long flags)
 {
@@ -1209,6 +1306,10 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
 
 err:
     spin_unlock(&p2m->lock);
+
+    if ( !page && p2m->mem_access_enabled )
+        page = p2m_mem_access_check_and_get_page(va, flags);
+
     return page;
 }
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ad046e8..5d90609 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1988,7 +1988,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     if (dabt.s1ptw)
         goto bad_data_abort;
 
-    rc = gva_to_ipa(info.gva, &info.gpa);
+    rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
     if ( rc == -EFAULT )
         goto bad_data_abort;
 
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index a07e217..bccdbfc 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -103,11 +103,14 @@ static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
     WRITE_CP64(tmp, PAR);
     return par;
 }
-static inline uint64_t gva_to_ipa_par(vaddr_t va)
+static inline uint64_t gva_to_ipa_par(vaddr_t va, unsigned int flags)
 {
     uint64_t par, tmp;
     tmp = READ_CP64(PAR);
-    WRITE_CP32(va, ATS1CPR);
+    if ( (flags & GV2M_WRITE) == GV2M_WRITE )
+        WRITE_CP32(va, ATS1CPW);
+    else
+        WRITE_CP32(va, ATS1CPR);
     isb(); /* Ensure result is available. */
     par = READ_CP64(PAR);
     WRITE_CP64(tmp, PAR);
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index e7a761d..29a32cf 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -98,11 +98,14 @@ static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
     return par;
 }
 
-static inline uint64_t gva_to_ipa_par(vaddr_t va)
+static inline uint64_t gva_to_ipa_par(vaddr_t va, unsigned int flags)
 {
     uint64_t par, tmp = READ_SYSREG64(PAR_EL1);
 
-    asm volatile ("at s1e1r, %0;" : : "r" (va));
+    if ( (flags & GV2M_WRITE) == GV2M_WRITE )
+        asm volatile ("at s1e1w, %0;" : : "r" (va));
+    else
+        asm volatile ("at s1e1r, %0;" : : "r" (va));
     isb();
     par = READ_SYSREG64(PAR_EL1);
     WRITE_SYSREG64(tmp, PAR_EL1);
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 3e7b0ae..b31e161 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -423,9 +423,9 @@ static inline uint64_t va_to_par(vaddr_t va)
     return par;
 }
 
-static inline int gva_to_ipa(vaddr_t va, paddr_t *paddr)
+static inline int gva_to_ipa(vaddr_t va, paddr_t *paddr, unsigned int flags)
 {
-    uint64_t par = gva_to_ipa_par(va);
+    uint64_t par = gva_to_ipa_par(va, flags);
     if ( par & PAR_F )
         return -EFAULT;
     *paddr = (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK);
-- 
2.1.4


* [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2015-03-26 22:05 ` [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages Tamas K Lengyel
@ 2015-03-26 22:05 ` Tamas K Lengyel
  2015-04-08 15:26   ` Julien Grall
  2015-04-15 13:53   ` Ian Campbell
  2015-03-26 22:05 ` [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling Tamas K Lengyel
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir,
	Tamas K Lengyel

This patch enables storing, setting, checking and delivering LPAE R/W
mem_events. As the LPAE PTEs lack enough available software-programmable
bits, we store the permissions in a radix tree. The tree is only consulted
if mem_access_enabled is turned on.
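
For context, the semantics this gives the interface (a hedged usage sketch
from the tools side; pfn == ~0 addresses the default access, mirroring the
p2m_set_mem_access()/p2m_get_mem_access() code below):

    /* Sketch: make the whole domain r-x by default, then forbid all
     * access to one page of interest. */
    xc_set_mem_access(xch, domid, XENMEM_access_rx, ~0ull, 0); /* default */
    xc_set_mem_access(xch, domid, XENMEM_access_n, pfn, 1);    /* one page */

    xenmem_access_t access;
    xc_get_mem_access(xch, domid, pfn, &access);  /* -> XENMEM_access_n */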

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v14: - Make radix_set function easier to read when mem_access not in use.
     - Make dabt trap handler read HPFAR_EL2 when valid
     - Distinguish between read/write access based on dabt.write
     - Combine preemption in apply_p2m_changes
     - Combine shared p2m functions into common/p2m-common.h
v12: - Move the flush in apply_p2m_changes to the label out, so it is
       used in the memaccess preemption case as well.
v11: - Move including common/mem_event.h in here in p2m.h.
     - Flush the tlb in p2m_set_mem_access to cover both the preemption
       and successful finish cases.
v10: - Remove ASSERT from MEMACCESS case.
    - Flush the tlb in the MEMACCESS case as we progress.
    - Typos and style fixes.
v8: - Revert to arch specific p2m_mem_access_check.
    - Retire dabt_dfsc enum and use FSC_FLT defines.
    - Revert to Radix tree approach and use access_in_use flag to
      indicate if the tree is in use or not to avoid unnecessary lookups.
v7: - Moved p2m_shatter_page and p2m_set_permission into a separate
      patch.
    - Replace Radix tree settings store with extended struct page_info
      approach. This way the trap handlers can use the MMU directly to
      locate the permission store instead of having to do a tree-lookup.
    - Add p2m_get_entry/set_entry compat functions which are required by
      the common mem_access_check function.
    - Typo fixes.
v6: - Add helper function p2m_shatter_page.
    - Only allocate 4k pages when mem_access is in use.
    - If no setting was found in radix tree but PTE exists,
      return rwx as permission.
    - Move the inclusion of various headers into this patch.
    - Make npfec a const.
v5: - Move p2m_set_entry's logic into apply_one_level via
      a new p2m_op, MEMACCESS.
v4: - Add p2m_mem_access_radix_set function to be called
      when inserting new PTE's and when updating existing entries.
    - Switch p2m_mem_access_check to return bool_t.
    - Use new struct npfec to pass violation info.
v3: - Add new function for updating the PTE entries, p2m_set_entry.
    - Use the new struct npfec to pass violation information.
    - Implement n2rwx, rx2rw and listener required routines.
v2: - Patch been split to ease the review process.
    - Add definitions of data abort data fetch status codes (enum dabt_dfsc)
      and only call p2m_mem_access_check for traps caused by permission violations.
    - Only call p2m_write_pte in p2m_lookup if the PTE permission actually changed.
    - Properly save settings in the Radix tree and pause the VCPU with
      mem_event_vcpu_pause.
---
 xen/arch/arm/p2m.c           | 411 ++++++++++++++++++++++++++++++++++++++++---
 xen/arch/arm/traps.c         |  33 +++-
 xen/include/asm-arm/p2m.h    |  13 +-
 xen/include/asm-x86/p2m.h    |  10 --
 xen/include/xen/p2m-common.h |  10 ++
 5 files changed, 430 insertions(+), 47 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 692e0c7..6bad376 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -5,6 +5,9 @@
 #include <xen/errno.h>
 #include <xen/domain_page.h>
 #include <xen/bitops.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <public/mem_event.h>
 #include <asm/flushtlb.h>
 #include <asm/gic.h>
 #include <asm/event.h>
@@ -421,12 +424,42 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
     return 0;
 }
 
+static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
+                                    p2m_access_t a)
+{
+    int rc;
+
+    if ( !p2m->mem_access_enabled )
+        return 0;
+
+    if ( p2m_access_rwx == a )
+    {
+        radix_tree_delete(&p2m->mem_access_settings, pfn);
+        return 0;
+    }
+
+    rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
+                           radix_tree_int_to_ptr(a));
+    if ( rc == -EEXIST )
+    {
+        /* If a setting existed already, change it to the new one */
+        radix_tree_replace_slot(
+            radix_tree_lookup_slot(
+                &p2m->mem_access_settings, pfn),
+            radix_tree_int_to_ptr(a));
+        rc = 0;
+    }
+
+    return rc;
+}
+
 enum p2m_operation {
     INSERT,
     ALLOCATE,
     REMOVE,
     RELINQUISH,
     CACHEFLUSH,
+    MEMACCESS,
 };
 
 /* Put any references on the single 4K page referenced by pte.  TODO:
@@ -560,13 +593,22 @@ static int apply_one_level(struct domain *d,
         if ( p2m_valid(orig_pte) )
             return P2M_ONE_DESCEND;
 
-        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
+        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
+           /* We only create superpages when mem_access is not in use. */
+             (level == 3 || (level < 3 && !p2m->mem_access_enabled)) )
         {
             struct page_info *page;
 
             page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
             if ( page )
             {
+                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+                if ( rc < 0 )
+                {
+                    free_domheap_page(page);
+                    return rc;
+                }
+
                 pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
                 if ( level < 3 )
                     pte.p2m.table = 0;
@@ -587,8 +629,8 @@ static int apply_one_level(struct domain *d,
         /*
          * If we get here then we failed to allocate a sufficiently
          * large contiguous region for this level (which can't be
-         * L3). Create a page table and continue to descend so we try
-         * smaller allocations.
+         * L3) or mem_access is in use. Create a page table and
+         * continue to descend so we try smaller allocations.
          */
         rc = p2m_create_table(d, entry, 0, flush_cache);
         if ( rc < 0 )
@@ -598,9 +640,14 @@ static int apply_one_level(struct domain *d,
 
     case INSERT:
         if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
-           /* We do not handle replacing an existing table with a superpage */
-             (level == 3 || !p2m_table(orig_pte)) )
+           /* We do not handle replacing an existing table with a superpage
+            * or when mem_access is in use. */
+             (level == 3 || (!p2m_table(orig_pte) && !p2m->mem_access_enabled)) )
         {
+            rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+            if ( rc < 0 )
+                return rc;
+
             /* New mapping is superpage aligned, make it */
             pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
             if ( level < 3 )
@@ -716,6 +763,7 @@ static int apply_one_level(struct domain *d,
 
         memset(&pte, 0x00, sizeof(pte));
         p2m_write_pte(entry, pte, flush_cache);
+        p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), p2m_access_rwx);
 
         *addr += level_size;
         *maddr += level_size;
@@ -760,6 +808,44 @@ static int apply_one_level(struct domain *d,
             *addr += PAGE_SIZE;
             return P2M_ONE_PROGRESS_NOP;
         }
+
+    case MEMACCESS:
+        if ( level < 3 )
+        {
+            if ( !p2m_valid(orig_pte) )
+            {
+                *addr += level_size;
+                return P2M_ONE_PROGRESS_NOP;
+            }
+
+            /* Shatter large pages as we descend */
+            if ( p2m_mapping(orig_pte) )
+            {
+                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                if ( rc < 0 )
+                    return rc;
+            } /* else: an existing table mapping -> descend */
+
+            return P2M_ONE_DESCEND;
+        }
+        else
+        {
+            pte = orig_pte;
+
+            if ( p2m_valid(pte) )
+            {
+                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
+                if ( rc < 0 )
+                    return rc;
+
+                p2m_set_permission(&pte, pte.p2m.type, a);
+                p2m_write_pte(entry, pte, flush_cache);
+            }
+
+            *addr += level_size;
+            *flush = true;
+            return P2M_ONE_PROGRESS;
+        }
     }
 
     BUG(); /* Should never get here */
@@ -783,6 +869,9 @@ static int apply_p2m_changes(struct domain *d,
     unsigned int cur_root_table = ~0;
     unsigned int cur_offset[4] = { ~0, ~0, ~0, ~0 };
     unsigned int count = 0;
+    const unsigned long sgfn = paddr_to_pfn(start_gpaddr),
+                        egfn = paddr_to_pfn(end_gpaddr);
+    const unsigned int preempt_count_limit = (op == MEMACCESS) ? 1 : 0x2000;
     bool_t flush = false;
     bool_t flush_pt;
 
@@ -810,21 +899,49 @@ static int apply_p2m_changes(struct domain *d,
         };
 
         /*
-         * Arbitrarily, preempt every 512 operations or 8192 nops.
-         * 512*P2M_ONE_PROGRESS == 8192*P2M_ONE_PROGRESS_NOP == 0x2000
-         *
-         * count is initialised to 0 above, so we are guaranteed to
-         * always make at least one pass.
+         * Check if current iteration should be possibly preempted.
+         * Since count is initialised to 0 above we are guaranteed to
+         * always make at least one pass as long as preempt_count_limit is
+         * initialized with a value >= 1.
          */
-
-        if ( op == RELINQUISH && count >= 0x2000 )
+        if ( count >= preempt_count_limit && hypercall_preempt_check() )
         {
-            if ( hypercall_preempt_check() )
+            switch ( op )
             {
+            case RELINQUISH:
+                /*
+                 * Arbitrarily, preempt every 512 operations or 8192 nops.
+                 * 512*P2M_ONE_PROGRESS == 8192*P2M_ONE_PROGRESS_NOP == 0x2000
+                 * This is set in preempt_count_limit.
+                 *
+                 */
                 p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -ERESTART;
                 goto out;
+
+            case MEMACCESS:
+            {
+                /*
+                 * Preempt setting mem_access permissions as required by XSA-89,
+                 * if it's not the last iteration.
+                 */
+                uint32_t progress = paddr_to_pfn(addr) - sgfn + 1;
+
+                if ( (egfn - sgfn) > progress && !(progress & mask) )
+                {
+                    rc = progress;
+                    goto tlbflush;
+                }
+                break;
             }
+
+            default:
+                break;
+            };
+
+            /*
+             * Reset current iteration counter.
+             */
             count = 0;
         }
 
@@ -891,27 +1008,23 @@ static int apply_p2m_changes(struct domain *d,
         }
     }
 
-    if ( flush )
-    {
-        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
-        unsigned long egfn = paddr_to_pfn(end_gpaddr);
-
-        flush_tlb_domain(d);
-        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
-    }
-
     if ( op == ALLOCATE || op == INSERT )
     {
-        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
-        unsigned long egfn = paddr_to_pfn(end_gpaddr);
-
         p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
         p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
 
+tlbflush:
+    if ( flush )
+    {
+        flush_tlb_domain(d);
+        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+    }
+
 out:
+
     if ( rc < 0 && ( op == INSERT || op == ALLOCATE ) &&
          addr != start_gpaddr )
     {
@@ -1382,6 +1495,254 @@ void __init setup_virt_paging(void)
     smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
+{
+    int rc;
+    bool_t violation;
+    xenmem_access_t xma;
+    mem_event_request_t *req;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+
+    /* Mem_access is not in use. */
+    if ( !p2m->mem_access_enabled )
+        return true;
+
+    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
+    if ( rc )
+        return true;
+
+    /* Now check for mem_access violation. */
+    switch ( xma )
+    {
+    case XENMEM_access_rwx:
+        violation = false;
+        break;
+    case XENMEM_access_rw:
+        violation = npfec.insn_fetch;
+        break;
+    case XENMEM_access_wx:
+        violation = npfec.read_access;
+        break;
+    case XENMEM_access_rx:
+    case XENMEM_access_rx2rw:
+        violation = npfec.write_access;
+        break;
+    case XENMEM_access_x:
+        violation = npfec.read_access || npfec.write_access;
+        break;
+    case XENMEM_access_w:
+        violation = npfec.read_access || npfec.insn_fetch;
+        break;
+    case XENMEM_access_r:
+        violation = npfec.write_access || npfec.insn_fetch;
+        break;
+    default:
+    case XENMEM_access_n:
+    case XENMEM_access_n2rwx:
+        violation = true;
+        break;
+    }
+
+    if ( !violation )
+        return true;
+
+    /* First, handle rx2rw and n2rwx conversion automatically. */
+    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
+    {
+        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                0, ~0, XENMEM_access_rw);
+        return false;
+    }
+    else if ( xma == XENMEM_access_n2rwx )
+    {
+        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                0, ~0, XENMEM_access_rwx);
+    }
+
+    /* Otherwise, check if there is a memory event listener, and send the message along */
+    if ( !mem_event_check_ring(&v->domain->mem_event->access) )
+    {
+        /* No listener */
+        if ( p2m->access_required )
+        {
+            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
+                                  "no mem_event listener VCPU %d, dom %d\n",
+                                  v->vcpu_id, v->domain->domain_id);
+            domain_crash(v->domain);
+        }
+        else
+        {
+            /* n2rwx was already handled */
+            if ( xma != XENMEM_access_n2rwx )
+            {
+                /* A listener is not required, so clear the access
+                 * restrictions. */
+                rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
+                                        0, ~0, XENMEM_access_rwx);
+            }
+        }
+
+        /* No need to reinject */
+        return false;
+    }
+
+    req = xzalloc(mem_event_request_t);
+    if ( req )
+    {
+        req->reason = MEM_EVENT_REASON_VIOLATION;
+        if ( xma != XENMEM_access_n2rwx )
+            req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        req->gfn = gpa >> PAGE_SHIFT;
+        req->offset =  gpa & ((1 << PAGE_SHIFT) - 1);
+        req->gla = gla;
+        req->gla_valid = npfec.gla_valid;
+        req->access_r = npfec.read_access;
+        req->access_w = npfec.write_access;
+        req->access_x = npfec.insn_fetch;
+        if ( npfec_kind_in_gpt == npfec.kind )
+            req->fault_in_gpt = 1;
+        if ( npfec_kind_with_gla == npfec.kind )
+            req->fault_with_gla = 1;
+        req->vcpu_id = v->vcpu_id;
+
+        mem_access_send_req(v->domain, req);
+        xfree(req);
+    }
+
+    /* Pause the current VCPU */
+    if ( xma != XENMEM_access_n2rwx )
+        mem_event_vcpu_pause(v);
+
+    return false;
+}
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_access_t a;
+    long rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    switch ( access )
+    {
+    case 0 ... ARRAY_SIZE(memaccess) - 1:
+        a = memaccess[access];
+        break;
+    case XENMEM_access_default:
+        a = p2m->default_access;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    /*
+     * Flip mem_access_enabled to true when a permission is set, to prevent
+     * allocating or inserting super-pages.
+     */
+    p2m->mem_access_enabled = true;
+
+    /* If request to set default access. */
+    if ( pfn == ~0ul )
+    {
+        p2m->default_access = a;
+        return 0;
+    }
+
+    rc = apply_p2m_changes(d, MEMACCESS,
+                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
+                           0, MATTR_MEM, mask, 0, a);
+    if ( rc < 0 )
+        return rc;
+    else if ( rc > 0 )
+        return start + rc;
+
+    return 0;
+}
+
+int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
+                       xenmem_access_t *access)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    void *i;
+    unsigned int index;
+
+    static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
+            ACCESS(n),
+            ACCESS(r),
+            ACCESS(w),
+            ACCESS(rw),
+            ACCESS(x),
+            ACCESS(rx),
+            ACCESS(wx),
+            ACCESS(rwx),
+            ACCESS(rx2rw),
+            ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    /* If no setting was ever set, just return rwx. */
+    if ( !p2m->mem_access_enabled )
+    {
+        *access = XENMEM_access_rwx;
+        return 0;
+    }
+
+    /* If request to get default access */
+    if ( gpfn == ~0ull )
+    {
+        *access = memaccess[p2m->default_access];
+        return 0;
+    }
+
+    spin_lock(&p2m->lock);
+    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
+    spin_unlock(&p2m->lock);
+
+    if ( !i )
+    {
+        /*
+         * No setting was found in the Radix tree. Check if the
+         * entry exists in the page-tables.
+         */
+        paddr_t maddr = p2m_lookup(d, gpfn << PAGE_SHIFT, NULL);
+        if ( INVALID_PADDR == maddr )
+            return -ESRCH;
+
+        /* If the entry exists then it's rwx. */
+        *access = XENMEM_access_rwx;
+    }
+    else
+    {
+        /* Setting was found in the Radix tree. */
+        index = radix_tree_ptr_to_int(i);
+        if ( index >= ARRAY_SIZE(memaccess) )
+            return -ERANGE;
+
+        *access = memaccess[index];
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 5d90609..e02c848 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1985,11 +1985,36 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     info.gva = READ_SYSREG64(FAR_EL2);
 #endif
 
-    if (dabt.s1ptw)
-        goto bad_data_abort;
+    if ( dabt.s1ptw )
+        info.gpa = READ_SYSREG(HPFAR_EL2);
+    else
+    {
+        rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
+        if ( rc == -EFAULT )
+            goto bad_data_abort;
+    }
+
+    switch ( dabt.dfsc & 0x3f )
+    {
+    case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
+    {
+        const struct npfec npfec = {
+            .read_access = !dabt.write,
+            .write_access = dabt.write,
+            .gla_valid = 1,
+            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+    }
 
-    rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
-    if ( rc == -EFAULT )
+    if ( dabt.s1ptw )
         goto bad_data_abort;
 
     /* XXX: Decode the instruction if ISS is not valid */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 7583d9b..1797830 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -3,6 +3,8 @@
 
 #include <xen/mm.h>
 #include <xen/radix-tree.h>
+#include <public/mem_event.h> /* for mem_event_response_t */
+#include <public/memory.h>
 #include <xen/p2m-common.h>
 #include <public/memory.h>
 
@@ -241,14 +243,9 @@ static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
     return 1;
 }
 
-/* Get access type for a pfn
- * If pfn == -1ul, gets the default access type */
-static inline
-int p2m_get_mem_access(struct domain *d, unsigned long pfn,
-                       xenmem_access_t *access)
-{
-    return -ENOSYS;
-}
+/* Send mem event based on the access. Boolean return value indicates if trap
+ * needs to be injected into guest. */
+bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
 
 #endif /* _XEN_P2M_H */
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index e93c551..0d80820 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -597,16 +597,6 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
                             mem_event_request_t **req_ptr);
 
-/* Set access type for a region of pfns.
- * If start_pfn == -1ul, sets the default access type */
-long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
-                        uint32_t start, uint32_t mask, xenmem_access_t access);
-
-/* Get access type for a pfn
- * If pfn == -1ul, gets the default access type */
-int p2m_get_mem_access(struct domain *d, unsigned long pfn,
-                       xenmem_access_t *access);
-
 /* Check for emulation and mark vcpu for skipping one instruction
  * upon rescheduling if required. */
 void p2m_mem_event_emulate_check(struct vcpu *v,
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 29f3628..56d1afe 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -44,4 +44,14 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        unsigned long mfn);
 
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
+                        uint32_t start, uint32_t mask, xenmem_access_t access);
+
+/* Get access type for a pfn
+ * If pfn == -1ul, gets the default access type */
+int p2m_get_mem_access(struct domain *d, unsigned long pfn,
+                       xenmem_access_t *access);
+
 #endif /* _XEN_P2M_COMMON_H */
-- 
2.1.4


* [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2015-03-26 22:05 ` [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events Tamas K Lengyel
@ 2015-03-26 22:05 ` Tamas K Lengyel
  2015-03-26 23:21   ` Julien Grall
  2015-03-27 15:52   ` Ian Campbell
  2015-03-26 22:05 ` [PATCH V14 6/7] xen/arm: Enable mem_access on ARM Tamas K Lengyel
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir,
	Tamas K Lengyel

Add missing structure definition for iabt and update the trap handling
mechanism to only inject the exception if the mem_access checker
decides to do so.

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by:  Julien Grall <julien.grall@linaro.org>
---
v14: - Query HPFAR_EL2 when valid in iabt handler
     - Flush TLB before doing GVA->IPA translation in iabt handler
v10: - Minor comment fix for describing s1ptw.
v8: - Revert to arch specific p2m_mem_access_check.
    - Retire iabt_fsc enum and use FSC_FLT instead.
    - Complete the struct definition of hsr_iabt.
v7: - Use the new common mem_access_check.
v6: - Make npfec a const.
v4: - Don't mark instruction fetch violation as read violation.
    - Use new struct npfec to pass violation info.
v2: - Add definition for instruction abort instruction fetch status codes
       (enum iabt_ifsc) and only call p2m_mem_access_check for traps triggered
       for permission violations.
---
 xen/arch/arm/traps.c            | 46 +++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/processor.h | 11 ++++++++++
 2 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index e02c848..be90c9e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -40,6 +40,7 @@
 #include <asm/psci.h>
 #include <asm/mmio.h>
 #include <asm/cpufeature.h>
+#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "vtimer.h"
@@ -1961,8 +1962,49 @@ done:
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
-    register_t addr = READ_SYSREG(FAR_EL2);
-    inject_iabt_exception(regs, addr, hsr.len);
+    struct hsr_iabt iabt = hsr.iabt;
+    int rc;
+    paddr_t gpa;
+    register_t gva = READ_SYSREG(FAR_EL2);
+
+    if ( iabt.s1ptw )
+        gpa = READ_SYSREG(HPFAR_EL2);
+    else
+    {
+        /*
+         * Flush the TLB to make sure the DTLB is clear before
+         * doing GVA->IPA translation. If we got here because of
+         * an entry only present in the ITLB, this translation may
+         * still be inaccurate.
+         */
+        flush_tlb_domain(current->domain);
+
+        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
+        if ( rc == -EFAULT )
+            goto bad_insn_abort;
+    }
+
+    switch ( iabt.ifsc & 0x3f )
+    {
+    case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
+    {
+        const struct npfec npfec = {
+            .insn_fetch = 1,
+            .gla_valid = 1,
+            .kind = iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(gpa, gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+    }
+    break;
+    }
+
+bad_insn_abort:
+    inject_iabt_exception(regs, gva, hsr.len);
 }
 
 static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index cf7ab7c..db07fdd 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -438,6 +438,17 @@ union hsr {
     } sysreg; /* HSR_EC_SYSREG */
 #endif
 
+    struct hsr_iabt {
+        unsigned long ifsc:6;  /* Instruction fault status code */
+        unsigned long res0:1;
+        unsigned long s1ptw:1; /* Stage 2 fault during stage 1 translation */
+        unsigned long res1:1;
+        unsigned long eat:1;   /* External abort type */
+        unsigned long res2:15;
+        unsigned long len:1;   /* Instruction length */
+        unsigned long ec:6;    /* Exception Class */
+    } iabt; /* HSR_EC_INSTR_ABORT_* */
+
     struct hsr_dabt {
         unsigned long dfsc:6;  /* Data Fault Status Code */
         unsigned long write:1; /* Write / not Read */
-- 
2.1.4


* [PATCH V14 6/7] xen/arm: Enable mem_access on ARM
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2015-03-26 22:05 ` [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling Tamas K Lengyel
@ 2015-03-26 22:05 ` Tamas K Lengyel
  2015-03-26 22:05 ` [PATCH V14 7/7] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
  2015-04-15 13:36 ` [PATCH V14 0/7] Mem_access for ARM Ian Campbell
  7 siblings, 0 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir,
	Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
---
 config/arm32.mk | 1 +
 config/arm64.mk | 1 +
 2 files changed, 2 insertions(+)

diff --git a/config/arm32.mk b/config/arm32.mk
index 268ca9c..cd97e42 100644
--- a/config/arm32.mk
+++ b/config/arm32.mk
@@ -14,6 +14,7 @@ HAS_EXYNOS4210 := y
 HAS_OMAP := y
 HAS_SCIF := y
 HAS_NS16550 := y
+HAS_MEM_ACCESS := y
 
 # Use only if calling $(LD) directly.
 LDFLAGS_DIRECT += -EL
diff --git a/config/arm64.mk b/config/arm64.mk
index df6ad0a..e24c1d1 100644
--- a/config/arm64.mk
+++ b/config/arm64.mk
@@ -9,6 +9,7 @@ CFLAGS += #-marm -march= -mcpu= etc
 HAS_PL011 := y
 HAS_CADENCE_UART := y
 HAS_NS16550 := y
+HAS_MEM_ACCESS := y
 
 # Use only if calling $(LD) directly.
 LDFLAGS_DIRECT += -EL
-- 
2.1.4


* [PATCH V14 7/7] tools/libxc: Allocate magic page for mem access on ARM
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2015-03-26 22:05 ` [PATCH V14 6/7] xen/arm: Enable mem_access on ARM Tamas K Lengyel
@ 2015-03-26 22:05 ` Tamas K Lengyel
  2015-04-15 13:36 ` [PATCH V14 0/7] Mem_access for ARM Ian Campbell
  7 siblings, 0 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-26 22:05 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir,
	Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxc/xc_dom_arm.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index c7feca7..aaf835c 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -26,9 +26,10 @@
 #include "xg_private.h"
 #include "xc_dom.h"
 
-#define NR_MAGIC_PAGES 2
+#define NR_MAGIC_PAGES 3
 #define CONSOLE_PFN_OFFSET 0
 #define XENSTORE_PFN_OFFSET 1
+#define MEMACCESS_PFN_OFFSET 2
 
 #define LPAE_SHIFT 9
 
@@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
 
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
+    xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
             dom->console_pfn);
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
             dom->xenstore_pfn);
+    xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
+            base + MEMACCESS_PFN_OFFSET);
     /* allocated by toolstack */
     xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
             dom->console_evtchn);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-26 22:05 ` [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling Tamas K Lengyel
@ 2015-03-26 23:21   ` Julien Grall
  2015-03-27  8:32     ` Tamas K Lengyel
  2015-03-27 15:52   ` Ian Campbell
  1 sibling, 1 reply; 36+ messages in thread
From: Julien Grall @ 2015-03-26 23:21 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson, tim,
	stefano.stabellini, jbeulich, keir

Hi Tamas,

On 26/03/2015 22:05, Tamas K Lengyel wrote:
> Add missing structure definition for iabt and update the trap handling
> mechanism to only inject the exception if the mem_access checker
> decides to do so.
>
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Reviewed-by:  Julien Grall <julien.grall@linaro.org>

IMHO, the Acked-by and Reviewed-by should not have been carried in this 
patch because of the change you made in v14 (mainly the TLB).

[..]

> +        /*
> +         * Flush the TLB to make sure the DTLB is clear before
> +         * doing GVA->IPA translation. If we got here because of
> +         * an entry only present in the ITLB, this translation may
> +         * still be inaccurate.
> +         */
> +        flush_tlb_domain(current->domain);

flush_tlb_domain is very expensive: it flushes the TLBs on every CPU, 
while you may only need a flush on the current CPU.

Although, on ARMv8, there is no way to flush only the DTLB or the ITLB 
for aarch64; you have to do both at the same time. So the problem you 
are describing can't happen. After reading ID_MMFR2_EL1, I understand 
that a unified TLB is strongly advised on ARMv8, so any DTLB/ITLB flush 
would flush the unified TLB for an aarch32 guest.

This is different for ARMv7, where it may be possible to have a split 
TLB. The ID_MMFR2 register indicates whether the platform implements a 
unified TLB or a Harvard TLB. In the case of the former, any DTLB/ITLB 
flush will be performed on the unified TLB (see B4.2.2 in DDI0406C.b).

So it would be possible to avoid the flush in most cases.
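
Concretely, such a guard could look like this (a minimal sketch, not
existing Xen code; it assumes the UniTLB field sits in ID_MMFR2 bits
[19:16] per DDI0406C.b and uses the arm64 register name):

    static bool_t has_unified_tlb(void)
    {
        /* UniTLB, ID_MMFR2[19:16]: non-zero means the implementation
         * provides a unified TLB, so no DTLB/ITLB split can occur. */
        return ((READ_SYSREG32(ID_MMFR2_EL1) >> 16) & 0xf) != 0;
    }

    /* ... in the abort handler, flush only when a split (Harvard)
     * TLB could still hold a stale data-side entry: */
    if ( !has_unified_tlb() )
        flush_tlb_domain(current->domain);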

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-26 23:21   ` Julien Grall
@ 2015-03-27  8:32     ` Tamas K Lengyel
  2015-03-27 12:21       ` Julien Grall
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-27  8:32 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Ian Jackson, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser


On Fri, Mar 27, 2015 at 12:21 AM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hi Tamas,
>
> On 26/03/2015 22:05, Tamas K Lengyel wrote:
>
>> Add missing structure definition for iabt and update the trap handling
>> mechanism to only inject the exception if the mem_access checker
>> decides to do so.
>>
>> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>> Reviewed-by:  Julien Grall <julien.grall@linaro.org>
>>
>
> IHMO, the Acked-by and Reviewed-by should not have been carried in this
> patch because of the change you made in v14 (mainly the TLB).
>

Yea, you are probably right.


>
> [..]
>
>  +        /*
>> +         * Flush the TLB to make sure the DTLB is clear before
>> +         * doing GVA->IPA translation. If we got here because of
>> +         * an entry only present in the ITLB, this translation may
>> +         * still be inaccurate.
>> +         */
>> +        flush_tlb_domain(current->domain);
>>
>
> flush_tlb_domain is very expensive: it flushes the TLBs on every CPU,
> while you may only need a flush on the current CPU.
>

Ack, there just isn't a function at the moment to do tlbflush only for a
given cpu.  While I understand the argument behind the performance impact
of the flush, any user of the mem_access system would IMHO prefer accuracy
over performance. As I said before, this path seldom ever triggers without
mem_access triggering it.


>
> Although, on ARMv8, there is no way to flush only the DTLB or the ITLB for
> aarch64; you have to do both at the same time. So the problem you are
> describing can't happen. After reading ID_MMFR2_EL1, I understand that
> a unified TLB is strongly advised on ARMv8, so any DTLB/ITLB flush would
> flush the unified TLB for an aarch32 guest.
>

The problem I'm describing doesn't depend on having separate DTLB/ITLB
flush operations available, albeit those would make life a lot easier for
a potential malicious in-guest kernel. There were no separate flush
operations on x86 either and split-TLB poisoning was still a thing. The
introduction of the sTLB is what made it less usable for malicious purposes.


>
> This is different for ARMv7, where it may be possible to have a split
> TLB. The ID_MMFR2 register indicates whether the platform implements a
> unified TLB or a Harvard TLB. In the case of the former, any DTLB/ITLB
> flush will be performed on the unified TLB (see B4.2.2 in DDI0406C.b).
>
> So it would be possible to avoid the flush in most cases.
>

Ack, if there is no separate ITLB/DTLB on the hardware and we can tell,
then this problem doesn't apply to those devices.


>
> Regards,
>
> --
> Julien Grall
>

Thanks,
Tamas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-27  8:32     ` Tamas K Lengyel
@ 2015-03-27 12:21       ` Julien Grall
  0 siblings, 0 replies; 36+ messages in thread
From: Julien Grall @ 2015-03-27 12:21 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Ian Jackson, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser

Hi Tamas,

On 27/03/15 08:32, Tamas K Lengyel wrote:
>         +        /*
>         +         * Flush the TLB to make sure the DTLB is clear before
>         +         * doing GVA->IPA translation. If we got here because of
>         +         * an entry only present in the ITLB, this translation may
>         +         * still be inaccurate.
>         +         */
>         +        flush_tlb_domain(current->domain);
> 
> 
>     flush TLB domain is very expensive, it flushes TLBs on every CPU.
>     While you may only need a flush on the current CPU.
> 
> 
> Ack, there just isn't a function at the moment to do tlbflush only for a
> given cpu.

There is one function to flush the TLB only on the current CPU. See
flush_tlb_local.

flush_tlb_domain is a helper to flush the TLB of a domain other than
the current one.
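
Applied to the hunk under discussion, the swap would look roughly like
this (a sketch of the suggestion, not the committed code):

    else
    {
        /*
         * A local flush suffices: only this CPU's stale ITLB/DTLB
         * entries can skew the software GVA->IPA walk below.
         */
        flush_tlb_local();

        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
        if ( rc == -EFAULT )
            goto bad_insn_abort;
    }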

> While I understand the argument behind the performance
> impact of the flush, any user of the mem_access system would IMHO prefer
> accuracy over performance. As I said before, this path seldom ever
> triggers without mem_access triggering it.

Without memaccess, we always inject a prefetch abort with the virtual
address. So we don't care about the TLB flush here.

> 
> 
>     Although, on ARMv8, there is no way to flush only the DTLB or the
>     ITLB for aarch64; you have to do both at the same time. So the
>     problem you are describing can't happen. After reading
>     ID_MMFR2_EL1, I understand that a unified TLB is strongly advised
>     on ARMv8, so any DTLB/ITLB flush would flush the unified TLB for an
>     aarch32 guest.
> 
> 
> The problem I'm describing doesn't depend on having separate DTLB/ITLB
> flush operations available, albeit those would make life a lot easier
> for a potential malicious in-guest kernel. There were no separate flush
> operations on x86 either and split-TLB poisoning was still a thing. The
> introduction of the sTLB is what made it less usable for malicious purposes.

What I was trying to say is that, given ID_MMFR2_EL1, I think ARMv8 only
supports a unified TLB. But I might be wrong...

Regards,


-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-26 22:05 ` [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling Tamas K Lengyel
  2015-03-26 23:21   ` Julien Grall
@ 2015-03-27 15:52   ` Ian Campbell
  2015-03-27 22:18     ` Tamas K Lengyel
  1 sibling, 1 reply; 36+ messages in thread
From: Ian Campbell @ 2015-03-27 15:52 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, stefano.stabellini, ian.jackson, julien.grall, tim,
	xen-devel, stefano.stabellini, jbeulich, keir

On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> Add missing structure definition for iabt and update the trap handling
> mechanism to only inject the exception if the mem_access checker
> decides to do so.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Reviewed-by:  Julien Grall <julien.grall@linaro.org>
> ---
> v14: - Query HPFAR_EL2 when valid in iabt handler
>      - Flush TLB before doing GVA->IPA translation in iabt handler
> v10: - Minor comment fix for describing s1ptw.
> v8: - Revert to arch specific p2m_mem_access_check.
>     - Retire iabt_fsc enum and use FSC_FLT instead.
>     - Complete the struct definition of hsr_iabt.
> v7: - Use the new common mem_access_check.
> v6: - Make npfec a const.
> v4: - Don't mark instruction fetch violation as read violation.
>     - Use new struct npfec to pass violation info.
> v2: - Add definition for instruction abort instruction fetch status codes
>        (enum iabt_ifsc) and only call p2m_mem_access_check for traps triggered
>        for permission violations.
> ---
>  xen/arch/arm/traps.c            | 46 +++++++++++++++++++++++++++++++++++++++--
>  xen/include/asm-arm/processor.h | 11 ++++++++++
>  2 files changed, 55 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index e02c848..be90c9e 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -40,6 +40,7 @@
>  #include <asm/psci.h>
>  #include <asm/mmio.h>
>  #include <asm/cpufeature.h>
> +#include <asm/flushtlb.h>
>  
>  #include "decode.h"
>  #include "vtimer.h"
> @@ -1961,8 +1962,49 @@ done:
>  static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>                                        union hsr hsr)
>  {
> -    register_t addr = READ_SYSREG(FAR_EL2);
> -    inject_iabt_exception(regs, addr, hsr.len);
> +    struct hsr_iabt iabt = hsr.iabt;
> +    int rc;
> +    paddr_t gpa;
> +    register_t gva = READ_SYSREG(FAR_EL2);
> +
> +    if ( iabt.s1ptw )
> +        gpa = READ_SYSREG(HPFAR_EL2);
> +    else

Can you not avoid the else case entirely by extending the if to cover
the other cases where HPFAR is explicitly valid? I can't be bothered to
go look right now but IIRC it included at least stage 2 access
permissions related failures, which would cover more xenaccess
scenarios, no?

> +    {
> +        /*
> +         * Flush the TLB to make sure the DTLB is clear before
> +         * doing GVA->IPA translation. If we got here because of
> +         * an entry only present in the ITLB, this translation may
> +         * still be inaccurate.
> +         */
> +        flush_tlb_domain(current->domain);
> +
> +        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
> +        if ( rc == -EFAULT )
> +            goto bad_insn_abort;
> +    }
> +
> +    switch ( iabt.ifsc & 0x3f )
> +    {
> +    case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
> +    {
> +        const struct npfec npfec = {
> +            .insn_fetch = 1,
> +            .gla_valid = 1,
> +            .kind = iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +        };
> +
> +        rc = p2m_mem_access_check(gpa, gva, npfec);
> +
> +        /* Trap was triggered by mem_access, work here is done */
> +        if ( !rc )
> +            return;
> +    }
> +    break;
> +    }
> +
> +bad_insn_abort:
> +    inject_iabt_exception(regs, gva, hsr.len);
>  }
>  
>  static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index cf7ab7c..db07fdd 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -438,6 +438,17 @@ union hsr {
>      } sysreg; /* HSR_EC_SYSREG */
>  #endif
>  
> +    struct hsr_iabt {
> +        unsigned long ifsc:6;  /* Instruction fault status code */
> +        unsigned long res0:1;
> +        unsigned long s1ptw:1; /* Stage 2 fault during stage 1 translation */
> +        unsigned long res1:1;
> +        unsigned long eat:1;   /* External abort type */
> +        unsigned long res2:15;
> +        unsigned long len:1;   /* Instruction length */
> +        unsigned long ec:6;    /* Exception Class */
> +    } iabt; /* HSR_EC_INSTR_ABORT_* */
> +
>      struct hsr_dabt {
>          unsigned long dfsc:6;  /* Data Fault Status Code */
>          unsigned long write:1; /* Write / not Read */

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-27 15:52   ` Ian Campbell
@ 2015-03-27 22:18     ` Tamas K Lengyel
  2015-03-30  9:41       ` Ian Campbell
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-27 22:18 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser


On Fri, Mar 27, 2015 at 3:52 PM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> > Add missing structure definition for iabt and update the trap handling
> > mechanism to only inject the exception if the mem_access checker
> > decides to do so.
> >
> > Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > Reviewed-by:  Julien Grall <julien.grall@linaro.org>
> > ---
> > v14: - Query HPFAR_EL2 when valid in iabt handler
> >      - Flush TLB before doing GVA->IPA translation in iabt handler
> > v10: - Minor comment fix for describing s1ptw.
> > v8: - Revert to arch specific p2m_mem_access_check.
> >     - Retire iabt_fsc enum and use FSC_FLT instead.
> >     - Complete the struct definition of hsr_iabt.
> > v7: - Use the new common mem_access_check.
> > v6: - Make npfec a const.
> > v4: - Don't mark instruction fetch violation as read violation.
> >     - Use new struct npfec to pass violation info.
> > v2: - Add definition for instruction abort instruction fetch status codes
> >        (enum iabt_ifsc) and only call p2m_mem_access_check for traps
> >        triggered for permission violations.
> > ---
> >  xen/arch/arm/traps.c            | 46 +++++++++++++++++++++++++++++++++++++++--
> >  xen/include/asm-arm/processor.h | 11 ++++++++++
> >  2 files changed, 55 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index e02c848..be90c9e 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -40,6 +40,7 @@
> >  #include <asm/psci.h>
> >  #include <asm/mmio.h>
> >  #include <asm/cpufeature.h>
> > +#include <asm/flushtlb.h>
> >
> >  #include "decode.h"
> >  #include "vtimer.h"
> > @@ -1961,8 +1962,49 @@ done:
> >  static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
> >                                        union hsr hsr)
> >  {
> > -    register_t addr = READ_SYSREG(FAR_EL2);
> > -    inject_iabt_exception(regs, addr, hsr.len);
> > +    struct hsr_iabt iabt = hsr.iabt;
> > +    int rc;
> > +    paddr_t gpa;
> > +    register_t gva = READ_SYSREG(FAR_EL2);
> > +
> > +    if ( iabt.s1ptw )
> > +        gpa = READ_SYSREG(HPFAR_EL2);
> > +    else
>
> Can you not avoid the else case entirely by extending the if to cover
> the other cases where HPFAR is explicitly valid? I can't be bothered to
> go look right now but IIRC it included at least stage 2 access
> permissions related failures, which would cover more xenaccess
> scenarios, no?
>

Depending on the fault cause, we might. For permission faults, HPFAR is
only valid during s1ptw. Given that the only check we do is for
permission faults, and that's the only thing that cares about the IPA
anyway, we can put this entire block into the switch itself once the
fault is already determined to be a permission fault.
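
Roughly like this (a sketch of that restructuring, reusing the names
from the patch above; the TLB flush before gva_to_ipa() is elided):

    switch ( iabt.ifsc & 0x3f )
    {
    case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
    {
        /* HPFAR_EL2 is architecturally valid here only on s1ptw. */
        if ( iabt.s1ptw )
            gpa = READ_SYSREG(HPFAR_EL2);
        else if ( gva_to_ipa(gva, &gpa, GV2M_READ) == -EFAULT )
            goto bad_insn_abort;

        /* ... build npfec and call p2m_mem_access_check(gpa, gva, npfec) ... */
        break;
    }
    }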


>
> > +    {
> > +        /*
> > +         * Flush the TLB to make sure the DTLB is clear before
> > +         * doing GVA->IPA translation. If we got here because of
> > +         * an entry only present in the ITLB, this translation may
> > +         * still be inaccurate.
> > +         */
> > +        flush_tlb_domain(current->domain);
> > +
> > +        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
> > +        if ( rc == -EFAULT )
> > +            goto bad_insn_abort;
> > +    }
> > +
> > +    switch ( iabt.ifsc & 0x3f )
> > +    {
> > +    case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
> > +    {
> > +        const struct npfec npfec = {
> > +            .insn_fetch = 1,
> > +            .gla_valid = 1,
> > +            .kind = iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> > +        };
> > +
> > +        rc = p2m_mem_access_check(gpa, gva, npfec);
> > +
> > +        /* Trap was triggered by mem_access, work here is done */
> > +        if ( !rc )
> > +            return;
> > +    }
> > +    break;
> > +    }
> > +
> > +bad_insn_abort:
> > +    inject_iabt_exception(regs, gva, hsr.len);
> >  }
> >
> >  static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
> > diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> > index cf7ab7c..db07fdd 100644
> > --- a/xen/include/asm-arm/processor.h
> > +++ b/xen/include/asm-arm/processor.h
> > @@ -438,6 +438,17 @@ union hsr {
> >      } sysreg; /* HSR_EC_SYSREG */
> >  #endif
> >
> > +    struct hsr_iabt {
> > +        unsigned long ifsc:6;  /* Instruction fault status code */
> > +        unsigned long res0:1;
> > +        unsigned long s1ptw:1; /* Stage 2 fault during stage 1 translation */
> > +        unsigned long res1:1;
> > +        unsigned long eat:1;   /* External abort type */
> > +        unsigned long res2:15;
> > +        unsigned long len:1;   /* Instruction length */
> > +        unsigned long ec:6;    /* Exception Class */
> > +    } iabt; /* HSR_EC_INSTR_ABORT_* */
> > +
> >      struct hsr_dabt {
> >          unsigned long dfsc:6;  /* Data Fault Status Code */
> >          unsigned long write:1; /* Write / not Read */
>
>
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-27 22:18     ` Tamas K Lengyel
@ 2015-03-30  9:41       ` Ian Campbell
  2015-03-30 15:14         ` Tamas K Lengyel
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Campbell @ 2015-03-30  9:41 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser

On Fri, 2015-03-27 at 22:18 +0000, Tamas K Lengyel wrote:

>         
>         >                                        union hsr hsr)
>         >  {
>         > -    register_t addr = READ_SYSREG(FAR_EL2);
>         > -    inject_iabt_exception(regs, addr, hsr.len);
>         > +    struct hsr_iabt iabt = hsr.iabt;
>         > +    int rc;
>         > +    paddr_t gpa;
>         > +    register_t gva = READ_SYSREG(FAR_EL2);
>         > +
>         > +    if ( iabt.s1ptw )
>         > +        gpa = READ_SYSREG(HPFAR_EL2);
>         > +    else
>         
>         
>         Can you not avoid the else case entirely by extending the if to cover
>         the other cases where HPFAR is explicitly valid? I can't be bothered to
>         go look right now but IIRC it included at least stage 2 access
>         permissions related failures, which would cover more xenaccess
>         scenarios, no?
> 
> 
> Depending on the fault cause, we might. For permission faults, HPFAR
> is only valid during s1ptw.. Given that the only check we do is for
> permission faults and that's the only thing that cares about the API
> anyway, we can put this entire block into the switch itself once the
> fault check is already determined to be a permission fault.

According to the ARMv8 ARM, HPFAR is valid for any of these:
      * A Translation or Access Flag fault on a stage 2 translation.
      * A stage 2 Address Size fault.
      * A fault on the stage 2 translation of an address accessed in a
        stage 1 translation table walk.

I think what you are (correctly) saying is that it omits "a permission
fault on a stage 2 translation", which is one of the cases which can
occur under xenaccess. Which is a shame :-(
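
In code form those rules would amount to a predicate like this (purely
illustrative: the helper name is made up and the stage 2 Address Size
fault case is elided):

    static bool_t hpfar_is_valid(bool_t s1ptw, uint8_t fsc)
    {
        /* Valid on stage 2 Translation or Access Flag faults, or on
         * any stage 2 fault taken during a stage 1 walk. The omitted
         * case is a stage 2 permission fault with s1ptw clear. */
        return s1ptw ||
               (fsc >= FSC_FLT_TRANS  && fsc <= FSC_FLT_TRANS  + 3) ||
               (fsc >= FSC_FLT_ACCESS && fsc <= FSC_FLT_ACCESS + 3);
    }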

Ian.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-30  9:41       ` Ian Campbell
@ 2015-03-30 15:14         ` Tamas K Lengyel
  2015-03-30 15:24           ` Ian Campbell
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-30 15:14 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser


On Mon, Mar 30, 2015 at 11:41 AM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Fri, 2015-03-27 at 22:18 +0000, Tamas K Lengyel wrote:
>
> >
> >         >                                        union hsr hsr)
> >         >  {
> >         > -    register_t addr = READ_SYSREG(FAR_EL2);
> >         > -    inject_iabt_exception(regs, addr, hsr.len);
> >         > +    struct hsr_iabt iabt = hsr.iabt;
> >         > +    int rc;
> >         > +    paddr_t gpa;
> >         > +    register_t gva = READ_SYSREG(FAR_EL2);
> >         > +
> >         > +    if ( iabt.s1ptw )
> >         > +        gpa = READ_SYSREG(HPFAR_EL2);
> >         > +    else
> >
> >
> >         Can you not avoid the else case entirely by extending the if
> >         to cover the other cases where HPFAR is explicitly valid? I
> >         can't be bothered to go look right now but IIRC it included at
> >         least stage 2 access permissions related failures, which would
> >         cover more xenaccess scenarios, no?
> >
> >
> > Depending on the fault cause, we might. For permission faults, HPFAR
> > is only valid during s1ptw.. Given that the only check we do is for
> > permission faults and that's the only thing that cares about the API
> > anyway, we can put this entire block into the switch itself once the
> > fault check is already determined to be a permission fault.
>
> According to ARMv8 ARM HPFAR is valid for any of these:
>       * A Translation or Access Flag fault on a stage 2 translation.
>       * A stage 2 Address Size fault.
>       * A fault on the stage 2 translation of an address accessed in a
>         stage 1 translation table walk.
>
> I think what you are (correctly) saying is that it omits "a permission
> fault on a stage 2 translation", which is one of the cases which can
> occur under xenaccess. Which is shame :-(
>
> Ian.
>

Hm, what I recall is that it is valid for permission faults as well, but
only and exclusively during s1ptw.

Tamas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-30 15:14         ` Tamas K Lengyel
@ 2015-03-30 15:24           ` Ian Campbell
  2015-03-30 15:28             ` Tamas K Lengyel
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Campbell @ 2015-03-30 15:24 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser

On Mon, 2015-03-30 at 17:14 +0200, Tamas K Lengyel wrote:

>         According to the ARMv8 ARM, HPFAR is valid for any of these:
>               * A Translation or Access Flag fault on a stage 2 translation.
>               * A stage 2 Address Size fault.
>               * A fault on the stage 2 translation of an address accessed in a
>                 stage 1 translation table walk.
>         
>         I think what you are (correctly) saying is that it omits "a permission
>         fault on a stage 2 translation", which is one of the cases which can
>         occur under xenaccess. Which is a shame :-(
> 
> Hm, what I recall is that it is valid for permission faults as well,
> but only and exclusively during s1ptw.

Right, which is the third bullet I quoted. The issue I was referring to
was the lack of permission faults being mentioned in the first bullet (!
s1ptw case).

Ian.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling
  2015-03-30 15:24           ` Ian Campbell
@ 2015-03-30 15:28             ` Tamas K Lengyel
  0 siblings, 0 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-03-30 15:28 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser


[-- Attachment #1.1: Type: text/plain, Size: 1050 bytes --]

On Mon, Mar 30, 2015 at 5:24 PM, Ian Campbell <ian.campbell@citrix.com>
wrote:

> On Mon, 2015-03-30 at 17:14 +0200, Tamas K Lengyel wrote:
>
> >         According to the ARMv8 ARM, HPFAR is valid for any of these:
> >               * A Translation or Access Flag fault on a stage 2
> >                 translation.
> >               * A stage 2 Address Size fault.
> >               * A fault on the stage 2 translation of an address
> >                 accessed in a stage 1 translation table walk.
> >
> >         I think what you are (correctly) saying is that it omits "a
> >         permission fault on a stage 2 translation", which is one of
> >         the cases which can occur under xenaccess. Which is a shame :-(
> >
> > Hm, what I recall is that it is valid for permission faults as well,
> > but only and exclusively during s1ptw.
>
> Right, which is the third bullet I quoted. The issue I was referring to
> was the lack of permission faults being mentioned in the first bullet (!
> s1ptw case).
>
> Ian.
>

Yep, that's the core of the problem.

Tamas

[-- Attachment #1.2: Type: text/html, Size: 1655 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn
  2015-03-26 22:05 ` [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
@ 2015-04-08 13:23   ` Julien Grall
  2015-04-08 13:38     ` Tamas K Lengyel
  0 siblings, 1 reply; 36+ messages in thread
From: Julien Grall @ 2015-04-08 13:23 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir

Hi Tamas,

On 26/03/15 22:05, Tamas K Lengyel wrote:
> From: Julien Grall <julien.grall@linaro.org>
> 
> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> 
> We use this in xenaccess to avoid the user attempting to set page
> permissions on pages which don't exist for the domain, as a non-arch specific
> sanity check.

FWIW, xc_domain_maximum_gpfn is buggy for ARM64, as a PFN can be encoded
with up to 36 bits, which doesn't fit in the return value (an int is used).
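
A standalone illustration of the truncation (hypothetical GPFN value):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t max_gpfn = (1ULL << 36) - 1; /* a 36-bit ARM64 PFN */
        int rc = (int)max_gpfn;               /* what an int return hands back */

        /* Typically prints "gpfn 0xfffffffff becomes -1", i.e. the
         * caller sees what looks like an error code. */
        printf("gpfn %#llx becomes %d\n", (unsigned long long)max_gpfn, rc);
        return 0;
    }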

Although, I don't see any call to xc_domain_maximum_gpfn in memaccess
(tools/tests/memaccess). So is this patch still useful?

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn
  2015-04-08 13:23   ` Julien Grall
@ 2015-04-08 13:38     ` Tamas K Lengyel
  2015-04-08 13:42       ` Julien Grall
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-04-08 13:38 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Julien Grall, Ian Jackson, Xen-devel, Stefano Stabellini,
	Keir Fraser


The patch that will use xc_domain_maximum_gpfn is not included right now in
this series as my other series significantly cleans up the existing
xen-access test code. Before that's merged, there is no point in carrying
that patch here as it will be in conflict anyway. I'm intending to send
that once my cleanup series is merged (or append it to this series again if
it's still open).

Tamas

On Wed, Apr 8, 2015 at 3:23 PM, Julien Grall <julien.grall.oss@gmail.com>
wrote:

> Hi Tamas,
>
> On 26/03/15 22:05, Tamas K Lengyel wrote:
> > From: Julien Grall <julien.grall@linaro.org>
> >
> > The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> > mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this
> > purpose.
> >
> > We use this in xenaccess to avoid the user attempting to set page
> > permissions on pages which don't exist for the domain, as a non-arch
> > specific sanity check.
>
> FWIW, xc_domain_maximum_gpfn is buggy for ARM64, as a PFN can be encoded
> with up to 36 bits, which doesn't fit in the return value (an int is used).
>
> Although, I don't see any call to xc_domain_maximum_gpfn in memaccess
> (tools/tests/memaccess). So is this patch still useful?
>
> Regards,
>
> --
> Julien Grall
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn
  2015-04-08 13:38     ` Tamas K Lengyel
@ 2015-04-08 13:42       ` Julien Grall
  2015-04-08 13:47         ` Tamas K Lengyel
  0 siblings, 1 reply; 36+ messages in thread
From: Julien Grall @ 2015-04-08 13:42 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Julien Grall, Ian Jackson, Xen-devel, Stefano Stabellini,
	Keir Fraser

On 08/04/15 14:38, Tamas K Lengyel wrote:
> The patch that will use xc_domain_maximum_gpfn is not included right now
> in this series as my other series significantly cleans up the existing
> xen-access test code. Before that's merged, there is no point in
> carrying that patch here as it will be in conflict anyway. I'm intending
> to send that once my cleanup series is merged (or append it to this
> series again if it's still open).

I was going to say that xen-access is not modified to support ARM. But I
guess the xc_domain_maximum_gpfn will be used in order to fix it?

Although, this hypercall is really buggy on the ARM platform. It would
be better to add a parameter in order to get the maximum pfn, rather
than using the return value, which will be truncated in many different
ways later.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn
  2015-04-08 13:42       ` Julien Grall
@ 2015-04-08 13:47         ` Tamas K Lengyel
  2015-04-08 13:49           ` Julien Grall
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-04-08 13:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Julien Grall, Ian Jackson, Xen-devel, Stefano Stabellini,
	Keir Fraser


On Wed, Apr 8, 2015 at 3:42 PM, Julien Grall <julien.grall.oss@gmail.com>
wrote:

> On 08/04/15 14:38, Tamas K Lengyel wrote:
> > The patch that will use xc_domain_maximum_gpfn is not included right now
> > in this series as my other series significantly cleans up the existing
> > xen-access test code. Before that's merged, there is no point in
> > carrying that patch here as it will be in conflict anyway. I'm intending
> > to send that once my cleanup series is merged (or append it to this
> > series again if it's still open).
>
> I was going to say that xen-access is not modified to support ARM. But I
> guess the xc_domain_maximum_gpfn will be used in order to fix it?
>
> Although, this hypercall is really buggy on the ARM platform. It would
> be better to add a parameter in order to get the maximum pfn, rather
> than using the return value, which will be truncated in many different
> ways later.
>
> Regards,
>
> --
> Julien Grall
>

I think updating libxc to not truncate the value would be the preferred way
to go forward (changing return type to uint64_t). I already added those
changes to this patch and it's only a handful of instances.

Tamas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn
  2015-04-08 13:47         ` Tamas K Lengyel
@ 2015-04-08 13:49           ` Julien Grall
  0 siblings, 0 replies; 36+ messages in thread
From: Julien Grall @ 2015-04-08 13:49 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Julien Grall, Ian Jackson, Xen-devel, Stefano Stabellini,
	Keir Fraser

On 08/04/15 14:47, Tamas K Lengyel wrote:
> I think updating libxc to not truncate the value would be the preferred
> way to go forward (changing return type to uint64_t). I already added
> those changes to this patch and it's only a handful of instances.

It won't work if the toolstack is running in a 32-bit domain on a 64-bit
platform.
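
An explicit out-parameter, as suggested above, sidesteps the width of
the return path entirely; something along these lines (hypothetical
signature, not an actual libxc API):

    /* Return the maximum GPFN through a 64-bit out-parameter so that
     * a 32-bit toolstack on a 64-bit host is not limited by the width
     * of the hypercall return value. */
    int xc_domain_maximum_gpfn2(xc_interface *xch, uint32_t domid,
                                uint64_t *max_gpfn);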

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-03-26 22:05 ` [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages Tamas K Lengyel
@ 2015-04-08 14:33   ` Julien Grall
  2015-04-08 15:57     ` Tamas K Lengyel
  2015-04-15 13:48   ` Ian Campbell
  1 sibling, 1 reply; 36+ messages in thread
From: Julien Grall @ 2015-04-08 14:33 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir

Hi Tamas,

One minor question.

On 26/03/15 22:05, Tamas K Lengyel wrote:
> +    /*
> +     * We had a mem_access permission limiting the access, but the page type
> +     * could also be limiting, so we need to check that as well.
> +     */
> +    maddr = p2m_lookup(current->domain, ipa, &t);
> +    if ( maddr == INVALID_PADDR )
> +        goto err;
> +
> +    mfn = maddr >> PAGE_SHIFT;
> +    if ( !mfn_valid(mfn) )
> +        goto err;
> +
> +    /*
> +     * Base type doesn't allow r/w
> +     */
> +    if ( t != p2m_ram_rw )
> +        goto err;
> +
> +    page = mfn_to_page(mfn);
> +
> +    if ( unlikely(!get_page(page, current->domain)) )
> +        page = NULL;

I just found that this code is very similar to get_page_from_gfn. Would
it make sense to use it?
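
For comparison, the lookup above would then collapse to roughly this
(a sketch, assuming the usual get_page_from_gfn() signature):

    p2m_type_t t;
    struct page_info *page;

    /* Looks up the gfn, takes a reference and returns the p2m type. */
    page = get_page_from_gfn(current->domain, paddr_to_pfn(ipa), &t,
                             P2M_ALLOC);
    if ( !page || t != p2m_ram_rw )
        goto err;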

> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index ad046e8..5d90609 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1988,7 +1988,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      if (dabt.s1ptw)
>          goto bad_data_abort;
>  
> -    rc = gva_to_ipa(info.gva, &info.gpa);
> +    rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);

The correct value would be to use dabt.write in order to give either
GV2M_READ or GV2M_WRITE, although you keep compatibility and it's out
of scope for this patch.

I will send a follow-up once your series is pushed.
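
That follow-up would be a one-line change, roughly:

    /* Request write permission from the walker only when the abort
     * really was a write, mirroring what the guest attempted. */
    rc = gva_to_ipa(info.gva, &info.gpa,
                    dabt.write ? GV2M_WRITE : GV2M_READ);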

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events
  2015-03-26 22:05 ` [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events Tamas K Lengyel
@ 2015-04-08 15:26   ` Julien Grall
  2015-04-08 15:45     ` Tamas K Lengyel
  2015-04-15 13:53   ` Ian Campbell
  1 sibling, 1 reply; 36+ messages in thread
From: Julien Grall @ 2015-04-08 15:26 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: wei.liu2, ian.campbell, stefano.stabellini, ian.jackson,
	julien.grall, tim, stefano.stabellini, jbeulich, keir

Hi Tamas,

The code looks good. See a few typos and coding style issues below.

On 26/03/15 22:05, Tamas K Lengyel wrote:
> +static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
> +                                    p2m_access_t a)
> +{
> +    int rc;
> +
> +    if ( !p2m->mem_access_enabled )
> +        return 0;
> +
> +    if ( p2m_access_rwx == a )
> +    {
> +        radix_tree_delete(&p2m->mem_access_settings, pfn);
> +        return 0;
> +    }
> +
> +    rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
> +                           radix_tree_int_to_ptr(a));
> +    if ( rc == -EEXIST )
> +    {
> +        /* If a setting existed already, change it to the new one */

s/existed already/already exists/?

> +        radix_tree_replace_slot(
> +            radix_tree_lookup_slot(
> +                &p2m->mem_access_settings, pfn),
> +            radix_tree_int_to_ptr(a));
> +        rc = 0;
> +    }
> +
> +    return rc;
> +}
> +
>  enum p2m_operation {
>      INSERT,
>      ALLOCATE,
>      REMOVE,
>      RELINQUISH,
>      CACHEFLUSH,
> +    MEMACCESS,
>  };
>  
>  /* Put any references on the single 4K page referenced by pte.  TODO:
> @@ -560,13 +593,22 @@ static int apply_one_level(struct domain *d,
>          if ( p2m_valid(orig_pte) )
>              return P2M_ONE_DESCEND;
>  
> -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
> +        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
> +           /* We only create superpages when mem_access is not in use. */
> +             (level == 3 || (level < 3 && !p2m->mem_access_enabled)) )

I'm wondering if it would be better to check "a != p2m_access_rwx"
rather than "p2m->mem_access_enabled" in order to avoid the split when
it's unnecessary.

Although, given the status of this series, I won't bother asking you for
this change now :).

>          {
>              struct page_info *page;
>  
>              page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
>              if ( page )
>              {
> +                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
> +                if ( rc < 0 )
> +                {
> +                    free_domheap_page(page);
> +                    return rc;
> +                }
> +
>                  pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
>                  if ( level < 3 )
>                      pte.p2m.table = 0;
> @@ -587,8 +629,8 @@ static int apply_one_level(struct domain *d,
>          /*
>           * If we get here then we failed to allocate a sufficiently
>           * large contiguous region for this level (which can't be
> -         * L3). Create a page table and continue to descend so we try
> -         * smaller allocations.
> +         * L3) or mem_access is in use. Create a page table and
> +         * continue to descend so we try smaller allocations.
>           */
>          rc = p2m_create_table(d, entry, 0, flush_cache);
>          if ( rc < 0 )
> @@ -598,9 +640,14 @@ static int apply_one_level(struct domain *d,
>  
>      case INSERT:
>          if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
> -           /* We do not handle replacing an existing table with a superpage */
> -             (level == 3 || !p2m_table(orig_pte)) )
> +           /* We do not handle replacing an existing table with a superpage
> +            * or when mem_access is in use. */

Coding style:
/*
 * blah blah
 */

[..]

> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)

[..]

> +    /* Otherwise, check if there is a memory event listener, and send the message along */
> +    if ( !mem_event_check_ring(&v->domain->mem_event->access) )
> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                                  "no mem_event listener VCPU %d, dom %d\n",
> +                                  v->vcpu_id, v->domain->domain_id);

You may want to use gprintk, as the gdprintk call will be dropped in
non-debug builds.
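
i.e. the substitution would be (sketch):

    /* gprintk() logs in both build flavours, whereas gdprintk()
     * compiles away when NDEBUG is defined. */
    gprintk(XENLOG_INFO, "Memory access permissions failure, "
            "no mem_event listener VCPU %d, dom %d\n",
            v->vcpu_id, v->domain->domain_id);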

[..]

> +/* Set access type for a region of pfns.
> + * If start_pfn == -1ul, sets the default access type */

Coding style:

/*
 * Blah blah
 */

> +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access)

[..]

> +int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
> +                       xenmem_access_t *access)


[..]

> +    /* If request to get default access */
> +    if ( gpfn == ~0ull )

gpfn is an unsigned long. You may want to use ~0ul here.
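
On a 32-bit build that comparison can never match; a standalone
illustration:

    #include <assert.h>
    #include <limits.h>

    void example(void)
    {
        unsigned long gpfn = ~0ul;    /* all-ones in unsigned long */

        assert(gpfn == ~0ul);         /* always true */
    #if ULONG_MAX == 0xffffffffUL     /* 32-bit long, e.g. arm32 */
        /* gpfn promotes to 0x00000000ffffffff while ~0ull is
         * 0xffffffffffffffff, so the intended test is always false: */
        assert(!(gpfn == ~0ull));
    #endif
    }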

[..]

> diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
> index 29f3628..56d1afe 100644
> --- a/xen/include/xen/p2m-common.h
> +++ b/xen/include/xen/p2m-common.h
> @@ -44,4 +44,14 @@ int unmap_mmio_regions(struct domain *d,
>                         unsigned long nr,
>                         unsigned long mfn);
>  
> +/* Set access type for a region of pfns.
> + * If start_pfn == -1ul, sets the default access type */
> +long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access);
> +

Coding style:

/*
 * Blah blah
 */

> +/* Get access type for a pfn
> + * If pfn == -1ul, gets the default access type */
> +int p2m_get_mem_access(struct domain *d, unsigned long pfn,
> +                       xenmem_access_t *access);
> +

Ditto


>  #endif /* _XEN_P2M_COMMON_H */
> 

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events
  2015-04-08 15:26   ` Julien Grall
@ 2015-04-08 15:45     ` Tamas K Lengyel
  0 siblings, 0 replies; 36+ messages in thread
From: Tamas K Lengyel @ 2015-04-08 15:45 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Julien Grall, Ian Jackson, Xen-devel, Stefano Stabellini,
	Keir Fraser


On Wed, Apr 8, 2015 at 5:26 PM, Julien Grall <julien.grall@citrix.com>
wrote:

> Hi Tamas,
>
> The code looks good. See a few typos and coding style issues below.
>
> On 26/03/15 22:05, Tamas K Lengyel wrote:
> > +static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
> > +                                    p2m_access_t a)
> > +{
> > +    int rc;
> > +
> > +    if ( !p2m->mem_access_enabled )
> > +        return 0;
> > +
> > +    if ( p2m_access_rwx == a )
> > +    {
> > +        radix_tree_delete(&p2m->mem_access_settings, pfn);
> > +        return 0;
> > +    }
> > +
> > +    rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
> > +                           radix_tree_int_to_ptr(a));
> > +    if ( rc == -EEXIST )
> > +    {
> > +        /* If a setting existed already, change it to the new one */
>
> s/existed already/already exists/?
>

Ack.


>
> > +        radix_tree_replace_slot(
> > +            radix_tree_lookup_slot(
> > +                &p2m->mem_access_settings, pfn),
> > +            radix_tree_int_to_ptr(a));
> > +        rc = 0;
> > +    }
> > +
> > +    return rc;
> > +}
> > +
> >  enum p2m_operation {
> >      INSERT,
> >      ALLOCATE,
> >      REMOVE,
> >      RELINQUISH,
> >      CACHEFLUSH,
> > +    MEMACCESS,
> >  };
> >
> >  /* Put any references on the single 4K page referenced by pte.  TODO:
> > @@ -560,13 +593,22 @@ static int apply_one_level(struct domain *d,
> >          if ( p2m_valid(orig_pte) )
> >              return P2M_ONE_DESCEND;
> >
> > -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
> > +        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
> > +           /* We only create superpages when mem_access is not in use. */
> > +             (level == 3 || (level < 3 && !p2m->mem_access_enabled)) )
>
> I'm wondering if it would be better to check "a != p2m_access_rwx"
> rather than "p2m->mem_access_enabled" in order to avoid the split when it's
> unnecessary.
>
> Although, given the status of this series, I won't bother asking you for
> this change now :).
>

I think whether the performance gain would apply is application-specific.
Normal use (that I can think of) would warrant a default setting of
!p2m_access_rwx with mem_access.
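
For reference, Julien's variant of the condition would read (sketch):

    if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
         /* Keep superpages while the requested access is unrestricted,
          * even with mem_access enabled. */
         (level == 3 || !p2m->mem_access_enabled || p2m_access_rwx == a) )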


>
> >          {
> >              struct page_info *page;
> >
> >              page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
> >              if ( page )
> >              {
> > +                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
> > +                if ( rc < 0 )
> > +                {
> > +                    free_domheap_page(page);
> > +                    return rc;
> > +                }
> > +
> >                  pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
> >                  if ( level < 3 )
> >                      pte.p2m.table = 0;
> > @@ -587,8 +629,8 @@ static int apply_one_level(struct domain *d,
> >          /*
> >           * If we get here then we failed to allocate a sufficiently
> >           * large contiguous region for this level (which can't be
> > -         * L3). Create a page table and continue to descend so we try
> > -         * smaller allocations.
> > +         * L3) or mem_access is in use. Create a page table and
> > +         * continue to descend so we try smaller allocations.
> >           */
> >          rc = p2m_create_table(d, entry, 0, flush_cache);
> >          if ( rc < 0 )
> > @@ -598,9 +640,14 @@ static int apply_one_level(struct domain *d,
> >
> >      case INSERT:
> >          if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
> > -           /* We do not handle replacing an existing table with a superpage */
> > -             (level == 3 || !p2m_table(orig_pte)) )
> > +           /* We do not handle replacing an existing table with a superpage
> > +            * or when mem_access is in use. */
>
> Coding style:
> /*
>  * blah blah
>  */
>
> [..]
>
> > +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
>
> [..]
>
> > +    /* Otherwise, check if there is a memory event listener, and send the message along */
> > +    if ( !mem_event_check_ring(&v->domain->mem_event->access) )
> > +    {
> > +        /* No listener */
> > +        if ( p2m->access_required )
> > +        {
> > +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> > +                                  "no mem_event listener VCPU %d, dom %d\n",
> > +                                  v->vcpu_id, v->domain->domain_id);
>
> You may want to use gprintk as gdprintk call will be dropped on
> non-debug build.
>

gdprintk is used on x86 as well for this condition, so for now I think
it makes sense to keep things consistent.


>
> [..]
>
> > +/* Set access type for a region of pfns.
> > + * If start_pfn == -1ul, sets the default access type */
>
> Coding style:
>
> /*
>  * Blah blah
>  */
>
> > +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
> > +                        uint32_t start, uint32_t mask, xenmem_access_t access)
>
> [..]
>
> > +int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
> > +                       xenmem_access_t *access)
>
>
> [..]
>
> > +    /* If request to get default access */
> > +    if ( gpfn == ~0ull )
>
> gpfn is an unsigned long. You may want to use ~0ul here.
>
> [..]
>
> > diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
> > index 29f3628..56d1afe 100644
> > --- a/xen/include/xen/p2m-common.h
> > +++ b/xen/include/xen/p2m-common.h
> > @@ -44,4 +44,14 @@ int unmap_mmio_regions(struct domain *d,
> >                         unsigned long nr,
> >                         unsigned long mfn);
> >
> > +/* Set access type for a region of pfns.
> > + * If start_pfn == -1ul, sets the default access type */
> > +long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
> > +                        uint32_t start, uint32_t mask, xenmem_access_t access);
> > +
>
> Coding style:
>
> /*
>  * Blah blah
>  */
>
> > +/* Get access type for a pfn
> > + * If pfn == -1ul, gets the default access type */
> > +int p2m_get_mem_access(struct domain *d, unsigned long pfn,
> > +                       xenmem_access_t *access);
> > +
>
> Ditto
>
>
> >  #endif /* _XEN_P2M_COMMON_H */
> >
>
> Regards,
>
> --
> Julien Grall
>

Thanks, style issues fixed.

Cheers,
Tamas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-04-08 14:33   ` Julien Grall
@ 2015-04-08 15:57     ` Tamas K Lengyel
  2015-04-08 16:07       ` Julien Grall
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-04-08 15:57 UTC (permalink / raw)
  To: Julien Grall
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Julien Grall, Ian Jackson, Xen-devel, Stefano Stabellini,
	Jan Beulich, Keir Fraser


On Wed, Apr 8, 2015 at 4:33 PM, Julien Grall <julien.grall@citrix.com>
wrote:

> Hi Tamas,
>
> One minor question.
>
> On 26/03/15 22:05, Tamas K Lengyel wrote:
> > +    /*
> > +     * We had a mem_access permission limiting the access, but the page type
> > +     * could also be limiting, so we need to check that as well.
> > +     */
> > +    maddr = p2m_lookup(current->domain, ipa, &t);
> > +    if ( maddr == INVALID_PADDR )
> > +        goto err;
> > +
> > +    mfn = maddr >> PAGE_SHIFT;
> > +    if ( !mfn_valid(mfn) )
> > +        goto err;
> > +
> > +    /*
> > +     * Base type doesn't allow r/w
> > +     */
> > +    if ( t != p2m_ram_rw )
> > +        goto err;
> > +
> > +    page = mfn_to_page(mfn);
> > +
> > +    if ( unlikely(!get_page(page, current->domain)) )
> > +        page = NULL;
>
> I just found that this code is very similar to get_page_from_gfn. Would
> it make sense to use it?
>

Yea I could reuse that code, although it has some extra stuff that I don't
need (like converting ipa to gfn back to paddr), and type checks which I
ultimately don't care about here as the only type allowed is p2m_ram_rw. So
the current version is probably a bit tighter.


>
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index ad046e8..5d90609 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -1988,7 +1988,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
> >      if (dabt.s1ptw)
> >          goto bad_data_abort;
> >
> > -    rc = gva_to_ipa(info.gva, &info.gpa);
> > +    rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
>
> The correct value would be to use dabt.write in order to give either
> GV2M_READ or GV2M_WRITE, although you keep compatibility and it's out
> of scope for this patch.
>
> I will send a follow-up once your series is pushed.
>

Yea, my goal here was not to change existing behavior.


> Regards,
>
> --
> Julien Grall
>

Thanks,
Tamas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-04-08 15:57     ` Tamas K Lengyel
@ 2015-04-08 16:07       ` Julien Grall
  0 siblings, 0 replies; 36+ messages in thread
From: Julien Grall @ 2015-04-08 16:07 UTC (permalink / raw)
  To: Tamas K Lengyel, Julien Grall
  Cc: wei.liu2, Ian Campbell, Stefano Stabellini, Tim Deegan,
	Julien Grall, Ian Jackson, Xen-devel, Stefano Stabellini,
	Jan Beulich, Keir Fraser

On 08/04/15 16:57, Tamas K Lengyel wrote:
> Yea I could reuse that code, although it has some extra stuff that I
> don't need (like converting ipa to gfn back to paddr), and type checks
> which I ultimately don't care about here as the only type allowed is
> p2m_ram_rw. So the current version is probably a bit tighter.

Ok. So:

Reviewed-by: Julien Grall <julien.grall@citrix.com>

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 0/7] Mem_access for ARM
  2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
                   ` (6 preceding siblings ...)
  2015-03-26 22:05 ` [PATCH V14 7/7] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
@ 2015-04-15 13:36 ` Ian Campbell
  2015-04-15 14:47   ` Tamas K Lengyel
  7 siblings, 1 reply; 36+ messages in thread
From: Ian Campbell @ 2015-04-15 13:36 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, stefano.stabellini, ian.jackson, julien.grall, tim,
	xen-devel, stefano.stabellini, jbeulich, keir

On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> The ARM virtualization extension provides 2-stage paging, a similar mechanisms
> to Intel's EPT, which can be used to trace the memory accesses performed by
> the guest systems. This series sets up the necessary infrastructure in the
> ARM code to deliver the event on R/W/X traps. Finally, we turn on the
> compilation of mem_access and mem_event on ARM and perform the necessary
> changes to the tools side.
> 
> This version of the series is based on master and each patch in the series has
> been compile tested on both ARM and x86.

based on master == no longer depends on the half of your "memaccess
cleanups" series which didn't go in yet?

> 
> This PATCH series is also available at:
> https://github.com/tklengyel/xen/tree/arm_memaccess14
> 
> Julien Grall (1):
>   xen/arm: Implement domain_get_maximum_gpfn
> 
> Tamas K Lengyel (6):
>   xen/arm: groundwork for mem_access support on ARM
>   xen/arm: Allow hypervisor access to mem_access protected pages
>   xen/arm: Data abort exception (R/W) mem_access events
>   xen/arm: Instruction prefetch abort (X) mem_access event handling
>   xen/arm: Enable mem_access on ARM
>   tools/libxc: Allocate magic page for mem access on ARM
> 
>  config/arm32.mk                  |   1 +
>  config/arm64.mk                  |   1 +
>  tools/libxc/xc_dom_arm.c         |   6 +-
>  xen/arch/arm/mm.c                |   2 +-
>  xen/arch/arm/p2m.c               | 564 ++++++++++++++++++++++++++++++++++++---
>  xen/arch/arm/traps.c             |  79 +++++-
>  xen/include/asm-arm/arm32/page.h |   7 +-
>  xen/include/asm-arm/arm64/page.h |   7 +-
>  xen/include/asm-arm/domain.h     |   1 +
>  xen/include/asm-arm/p2m.h        |  32 ++-
>  xen/include/asm-arm/page.h       |   4 +-
>  xen/include/asm-arm/processor.h  |  13 +-
>  xen/include/asm-x86/p2m.h        |  10 -
>  xen/include/xen/p2m-common.h     |  10 +
>  14 files changed, 668 insertions(+), 69 deletions(-)
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 1/7] xen/arm: groundwork for mem_access support on ARM
  2015-03-26 22:05 ` [PATCH V14 1/7] xen/arm: groundwork for mem_access support on ARM Tamas K Lengyel
@ 2015-04-15 13:39   ` Ian Campbell
  0 siblings, 0 replies; 36+ messages in thread
From: Ian Campbell @ 2015-04-15 13:39 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, stefano.stabellini, ian.jackson, julien.grall, tim,
	xen-devel, stefano.stabellini, jbeulich, keir

On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> Add necessary changes for page table construction routines to pass
> the default access information and hypercall continuation mask. Also,
> define necessary functions and data fields to be used later by mem_access.
> 
> The p2m_access_t info will be stored in a Radix tree as the PTE lacks
> enough software programmable bits, thus in this patch we add the radix-tree
> construction/destruction portions. The tree itself will be used later
> by mem_access.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>
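
For readers skimming the archive, the groundwork described above boils
down to roughly this shape (a sketch with assumed field names, not the
literal patch):

    /* LPAE PTEs lack enough spare software-defined bits, so a
     * per-domain radix tree keyed by gfn holds the p2m_access_t
     * values instead; it is created and torn down with the p2m. */
    int p2m_init(struct domain *d)
    {
        struct p2m_domain *p2m = &d->arch.p2m;

        /* ... existing initialisation ... */
        radix_tree_init(&p2m->mem_access_settings);
        return 0;
    }

    void p2m_teardown(struct domain *d)
    {
        struct p2m_domain *p2m = &d->arch.p2m;

        /* ... existing teardown ... */
        radix_tree_destroy(&p2m->mem_access_settings, NULL);
    }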

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-03-26 22:05 ` [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages Tamas K Lengyel
  2015-04-08 14:33   ` Julien Grall
@ 2015-04-15 13:48   ` Ian Campbell
  2015-04-15 15:36     ` Tamas K Lengyel
  1 sibling, 1 reply; 36+ messages in thread
From: Ian Campbell @ 2015-04-15 13:48 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, stefano.stabellini, ian.jackson, julien.grall, tim,
	xen-devel, stefano.stabellini, jbeulich, keir

On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> @@ -1209,6 +1306,10 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
>  
>  err:
>      spin_unlock(&p2m->lock);
> +
> +    if ( !page && p2m->mem_access_enabled )
> +        page = p2m_mem_access_check_and_get_page(va, flags);

Is this safe/correct to do without continuing to hold the p2m lock?

It seems like the result of gva_to_ipa in the new function perhaps ought
to be? Not sure about the p2m_get_mem_access (or does it have its own
lock? Should it?)

The case I'm thinking about is something else (grant ops etc) changing
the p2m between the first check in get_page_from_gva and this one. Worst
case would be spurious results from a race, which perhaps are tolerable?

The rest of it looked good to me.

Ian.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events
  2015-03-26 22:05 ` [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events Tamas K Lengyel
  2015-04-08 15:26   ` Julien Grall
@ 2015-04-15 13:53   ` Ian Campbell
  1 sibling, 0 replies; 36+ messages in thread
From: Ian Campbell @ 2015-04-15 13:53 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, stefano.stabellini, ian.jackson, julien.grall, tim,
	xen-devel, stefano.stabellini, jbeulich, keir

On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> This patch makes it possible to store, set, check and deliver LPAE R/W
> mem_events. As LPAE PTEs lack enough available software-programmable
> bits, we store the permissions in a Radix tree. The tree is only looked
> at if mem_access_enabled is turned on.
> 
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
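
(The storage scheme described in the quoted commit message follows the
usual radix-tree pattern, roughly as below; the field names are assumed
for illustration:)

    /* Store: the int-to-pointer helpers let the small p2m_access_t
     * values live in the tree without extra allocation. */
    int rc = radix_tree_insert(&p2m->mem_access_settings, gfn,
                               radix_tree_int_to_ptr(a));

    /* Fetch: fall back to the default when no entry is present. */
    void *entry = radix_tree_lookup(&p2m->mem_access_settings, gfn);
    p2m_access_t a2 = entry ? radix_tree_ptr_to_int(entry)
                            : p2m->default_access;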

Apart from what Julien said (which you've addressed already for next
time) this looks good to me now.

Ian.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 0/7] Mem_access for ARM
  2015-04-15 13:36 ` [PATCH V14 0/7] Mem_access for ARM Ian Campbell
@ 2015-04-15 14:47   ` Tamas K Lengyel
  2015-04-15 15:04     ` Ian Campbell
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-04-15 14:47 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser



On Wed, Apr 15, 2015 at 3:36 PM, Ian Campbell <ian.campbell@citrix.com> wrote:

> On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> > The ARM virtualization extension provides 2-stage paging, a similar
> > mechanism to Intel's EPT, which can be used to trace the memory
> > accesses performed by the guest systems. This series sets up the
> > necessary infrastructure in the ARM code to deliver the event on
> > R/W/X traps. Finally, we turn on the compilation of mem_access and
> > mem_event on ARM and perform the necessary changes to the tools side.
> >
> > This version of the series is based on master and each patch in the
> > series has been compile tested on both ARM and x86.
>
> based on master == no longer depends on the half of your "memaccess
> cleanups" series which didn't go in yet?
>

It didn't depend on the cleanup series, but now that the cleanups have
been merged I have rebased v15 on staging. I'm just waiting for more
feedback on v14 before I send it.

Tamas


>
> >
> > This PATCH series is also available at:
> > https://github.com/tklengyel/xen/tree/arm_memaccess14
> >
> > Julien Grall (1):
> >   xen/arm: Implement domain_get_maximum_gpfn
> >
> > Tamas K Lengyel (6):
> >   xen/arm: groundwork for mem_access support on ARM
> >   xen/arm: Allow hypervisor access to mem_access protected pages
> >   xen/arm: Data abort exception (R/W) mem_access events
> >   xen/arm: Instruction prefetch abort (X) mem_access event handling
> >   xen/arm: Enable mem_access on ARM
> >   tools/libxc: Allocate magic page for mem access on ARM
> >
> >  config/arm32.mk                  |   1 +
> >  config/arm64.mk                  |   1 +
> >  tools/libxc/xc_dom_arm.c         |   6 +-
> >  xen/arch/arm/mm.c                |   2 +-
> >  xen/arch/arm/p2m.c               | 564 ++++++++++++++++++++++++++++++++++++---
> >  xen/arch/arm/traps.c             |  79 +++++-
> >  xen/include/asm-arm/arm32/page.h |   7 +-
> >  xen/include/asm-arm/arm64/page.h |   7 +-
> >  xen/include/asm-arm/domain.h     |   1 +
> >  xen/include/asm-arm/p2m.h        |  32 ++-
> >  xen/include/asm-arm/page.h       |   4 +-
> >  xen/include/asm-arm/processor.h  |  13 +-
> >  xen/include/asm-x86/p2m.h        |  10 -
> >  xen/include/xen/p2m-common.h     |  10 +
> >  14 files changed, 668 insertions(+), 69 deletions(-)
> >
>
>
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 0/7] Mem_access for ARM
  2015-04-15 14:47   ` Tamas K Lengyel
@ 2015-04-15 15:04     ` Ian Campbell
  0 siblings, 0 replies; 36+ messages in thread
From: Ian Campbell @ 2015-04-15 15:04 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser

On Wed, 2015-04-15 at 16:47 +0200, Tamas K Lengyel wrote:
> On Wed, Apr 15, 2015 at 3:36 PM, Ian Campbell
> <ian.campbell@citrix.com> wrote:
>         On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
>         > The ARM virtualization extension provides 2-stage paging,
>         > a similar mechanism to Intel's EPT, which can be used to
>         > trace the memory accesses performed by the guest systems.
>         > This series sets up the necessary infrastructure in the ARM
>         > code to deliver the event on R/W/X traps. Finally, we turn
>         > on the compilation of mem_access and mem_event on ARM and
>         > perform the necessary changes to the tools side.
>         >
>         > This version of the series is based on master and each
>         > patch in the series has been compile tested on both ARM
>         > and x86.
>
>         based on master == no longer depends on the half of your
>         "memaccess cleanups" series which didn't go in yet?
> 
> 
> It didn't depend on the cleanup series,

oh, I'd been assuming it did, oops.

> but now that the cleanups have been merged I have rebased v15 on
> staging. I'm just waiting for more feedback on v14 before I send it.

OK, FWIW other than Julien's coding style stuff the only concern I have
is the locking one I mentioned.

Ian.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-04-15 13:48   ` Ian Campbell
@ 2015-04-15 15:36     ` Tamas K Lengyel
  2015-04-15 15:45       ` Ian Campbell
  0 siblings, 1 reply; 36+ messages in thread
From: Tamas K Lengyel @ 2015-04-15 15:36 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser



On Wed, Apr 15, 2015 at 3:48 PM, Ian Campbell <ian.campbell@citrix.com> wrote:

> On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> > @@ -1209,6 +1306,10 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
> >
> >  err:
> >      spin_unlock(&p2m->lock);
> > +
> > +    if ( !page && p2m->mem_access_enabled )
> > +        page = p2m_mem_access_check_and_get_page(va, flags);
>
> Is this safe/correct to do without continuing to hold the p2m lock?
>
> It seems like the result of gva_to_ipa in the new function perhaps ought
> to be? Not sure about the p2m_get_mem_access (or does it have its own
> lock? Should it?)
>

p2m_get_mem_access does lock p2m->lock before it queries the radix tree.
There is another p2m_lookup in p2m_mem_access_check_and_get_page which also
does its own locking.


>
> The case I'm thinking about is something else (grant ops etc) changing
> the p2m between the first check in get_page_from_gva and this one. Worst
> case would be spurious results from a race, which perhaps are tolerable?
>

I'm not sure. Taking and releasing the lock repeatedly doesn't seem very
efficient, and I guess there could be some race conditions there. Changing
it, however, would require an extra flag to be passed to p2m_get_mem_access
and p2m_lookup to forgo their own locking when the caller already holds the
lock. It shouldn't be too drastic a change, but any thoughts on it?

Thanks,
Tamas


>
> The rest of it looked good to me.
>
> Ian.
>
>
>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-04-15 15:36     ` Tamas K Lengyel
@ 2015-04-15 15:45       ` Ian Campbell
  2015-04-16  9:04         ` Tim Deegan
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Campbell @ 2015-04-15 15:45 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: wei.liu2, Stefano Stabellini, Ian Jackson, Julien Grall,
	Tim Deegan, Xen-devel, Stefano Stabellini, Jan Beulich,
	Keir Fraser

On Wed, 2015-04-15 at 17:36 +0200, Tamas K Lengyel wrote:
> 
> 
> On Wed, Apr 15, 2015 at 3:48 PM, Ian Campbell
> <ian.campbell@citrix.com> wrote:
>         On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
>         > @@ -1209,6 +1306,10 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
>         >
>         >  err:
>         >      spin_unlock(&p2m->lock);
>         > +
>         > +    if ( !page && p2m->mem_access_enabled )
>         > +        page = p2m_mem_access_check_and_get_page(va, flags);
>         
>         Is this safe/correct to do without continuing to hold the p2m
>         lock?
>         
>         It seems like the result of gva_to_ipa in the new function
>         perhaps ought
>         to be? Not sure about the p2m_get_mem_access (or does it have
>         its own
>         lock? Should it?)
> 
> 
> p2m_get_mem_access does lock p2m->lock before it queries the radix
> tree. There is another p2m_lookup in p2m_mem_access_check_and_get_page
> which also does its own locking.

Understood, but my concern was whether each of those needs to see a
consistent p2m, or whether we can tolerate it changing and giving a
different result as we progress through the options.

>         The case I'm thinking about is something else (grant ops etc)
>         changing
>         the p2m between the first check in get_page_from_gva and this
>         one. Worst
>         case would be spurious results from a race, which perhaps are
>         tolerable?
> 
> 
> I'm not sure. Taking and releasing the lock repeatedly doesn't seem
> very efficient, and I guess there could be some race conditions there.
> Changing it, however, would require an extra flag to be passed to
> p2m_get_mem_access and p2m_lookup to forgo their own locking when the
> caller already holds the lock. It shouldn't be too drastic a change,
> but any thoughts on it?

Taking p2m_lookup as an example, I think the usual approach would be for
__p2m_lookup to become an unlocked version and for p2m_lookup to become
a simple wrapper which takes the lock.

__p2m_lookup could presumably be static, at least to start with.
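
Something like this minimal sketch, with the signatures assumed from
the ARM p2m code of the time:

    /* Unlocked walker: the caller must already hold p2m->lock. */
    static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr,
                                p2m_type_t *t)
    {
        /* ... the existing table walk, minus the locking ... */
        return INVALID_PADDR;   /* placeholder for the elided walk */
    }

    /* Locked wrapper, preserving the current external behaviour. */
    paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
    {
        struct p2m_domain *p2m = &d->arch.p2m;
        paddr_t maddr;

        spin_lock(&p2m->lock);
        maddr = __p2m_lookup(d, paddr, t);
        spin_unlock(&p2m->lock);

        return maddr;
    }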

Other options would be to push the locking into all the callers
(probably not nice) or to go the x86 route and essentially have a
lock/ref on the pte entry itself and use p2m_get/put_pte instead of
p2m_lookup (at least, that's my limited understanding).

I think x86 does it that way mainly for page sharing and friends.

Ian.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages
  2015-04-15 15:45       ` Ian Campbell
@ 2015-04-16  9:04         ` Tim Deegan
  0 siblings, 0 replies; 36+ messages in thread
From: Tim Deegan @ 2015-04-16  9:04 UTC (permalink / raw)
  To: Ian Campbell
  Cc: wei.liu2, Stefano Stabellini, Julien Grall, Ian Jackson,
	Xen-devel, Stefano Stabellini, Jan Beulich, Keir Fraser,
	Tamas K Lengyel

At 16:45 +0100 on 15 Apr (1429116345), Ian Campbell wrote:
> On Wed, 2015-04-15 at 17:36 +0200, Tamas K Lengyel wrote:
> > 
> > 
> > On Wed, Apr 15, 2015 at 3:48 PM, Ian Campbell
> > <ian.campbell@citrix.com> wrote:
> >         On Thu, 2015-03-26 at 23:05 +0100, Tamas K Lengyel wrote:
> >         > @@ -1209,6 +1306,10 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
> >         >
> >         >  err:
> >         >      spin_unlock(&p2m->lock);
> >         > +
> >         > +    if ( !page && p2m->mem_access_enabled )
> >         > +        page = p2m_mem_access_check_and_get_page(va, flags);
> >         
> >         Is this safe/correct to do without continuing to hold the p2m
> >         lock?
> >         
> >         It seems like the result of gva_to_ipa in the new function
> >         perhaps ought
> >         to be? Not sure about the p2m_get_mem_access (or does it have
> >         its own
> >         lock? Should it?)
> > 
> > 
> > p2m_get_mem_access does lock p2m->lock before it queries the radix
> > tree. There is another p2m_lookup in p2m_mem_access_check_and_get_page
> > which also does its own locking.
> 
> Understood, but my concern was whether each of those needs to see a
> consistent p2m, or whether we can tolerate it changing and giving a
> different result as we progress through the options.
> 
> >         The case I'm thinking about is something else (grant ops etc)
> >         changing
> >         the p2m between the first check in get_page_from_gva and this
> >         one. Worst
> >         case would be spurious results from a race, which perhaps are
> >         tolerable?
> > 
> > 
> > I'm not sure. Taking and releasing the lock repeatedly doesn't seem
> > very efficient, and I guess there could be some race conditions there.
> > Changing it, however, would require an extra flag to be passed to
> > p2m_get_mem_access and p2m_lookup to forgo their own locking when the
> > caller already holds the lock. It shouldn't be too drastic a change,
> > but any thoughts on it?
> 
> Taking p2m_lookup as an example, I think the usual approach would be for
> __p2m_lookup to become an unlocked version and for p2m_lookup to become
> a simple wrapper which takes the lock.
> 
> __p2m_lookup could presumably be static, at least to start with.
> 
> Other options would be to push the locking into all the callers
> (probably not nice) or to go the x86 route and essentially have a
> lock/ref on the pte entry itself and use p2m_get/put_pte instead of
> p2m_lookup (at least, that's my limited understanding).

Yep.  The lock is nominally per-entry but in fact a single lock over
the whole p2m.  And the preferred interface is get_gfn()/put_gfn().
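
A rough usage sketch of that interface, given a domain d and a gfn
(simplified, with error handling elided):

    p2m_type_t t;
    mfn_t mfn = get_gfn(d, gfn, &t);   /* holds the entry stable */

    if ( mfn_valid(mfn_x(mfn)) && t == p2m_ram_rw )
    {
        /* ... the mapping cannot change under our feet here ... */
    }

    put_gfn(d, gfn);                   /* always pair with get_gfn() */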

> I think x86 does it that way mainly for page sharing and friends.

Yes, but it adds a measure of sanity to any state-machine-style p2m
changes.  The alternative is to use CAS operations for every p2m
update, with appropriate unwinding in each caller if they lose the
race.
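
(A sketch of that CAS alternative, with a hypothetical pointer 'entry'
to an lpae_t in hand; the real field layout and unwinding would differ:)

    lpae_t old_pte = *entry, new_pte = old_pte;

    new_pte.p2m.write = 0;   /* e.g. revoke write permission */

    /* If another CPU changed the entry first, report it so the caller
     * can unwind and retry. */
    if ( cmpxchg(&entry->bits, old_pte.bits, new_pte.bits) != old_pte.bits )
        return -EAGAIN;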

Cheers,

Tim.

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2015-04-16  9:04 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-26 22:05 [PATCH V14 0/7] Mem_access for ARM Tamas K Lengyel
2015-03-26 22:05 ` [PATCH V14 1/7] xen/arm: groundwork for mem_access support on ARM Tamas K Lengyel
2015-04-15 13:39   ` Ian Campbell
2015-03-26 22:05 ` [PATCH V14 2/7] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
2015-04-08 13:23   ` Julien Grall
2015-04-08 13:38     ` Tamas K Lengyel
2015-04-08 13:42       ` Julien Grall
2015-04-08 13:47         ` Tamas K Lengyel
2015-04-08 13:49           ` Julien Grall
2015-03-26 22:05 ` [PATCH V14 3/7] xen/arm: Allow hypervisor access to mem_access protected pages Tamas K Lengyel
2015-04-08 14:33   ` Julien Grall
2015-04-08 15:57     ` Tamas K Lengyel
2015-04-08 16:07       ` Julien Grall
2015-04-15 13:48   ` Ian Campbell
2015-04-15 15:36     ` Tamas K Lengyel
2015-04-15 15:45       ` Ian Campbell
2015-04-16  9:04         ` Tim Deegan
2015-03-26 22:05 ` [PATCH V14 4/7] xen/arm: Data abort exception (R/W) mem_access events Tamas K Lengyel
2015-04-08 15:26   ` Julien Grall
2015-04-08 15:45     ` Tamas K Lengyel
2015-04-15 13:53   ` Ian Campbell
2015-03-26 22:05 ` [PATCH V14 5/7] xen/arm: Instruction prefetch abort (X) mem_access event handling Tamas K Lengyel
2015-03-26 23:21   ` Julien Grall
2015-03-27  8:32     ` Tamas K Lengyel
2015-03-27 12:21       ` Julien Grall
2015-03-27 15:52   ` Ian Campbell
2015-03-27 22:18     ` Tamas K Lengyel
2015-03-30  9:41       ` Ian Campbell
2015-03-30 15:14         ` Tamas K Lengyel
2015-03-30 15:24           ` Ian Campbell
2015-03-30 15:28             ` Tamas K Lengyel
2015-03-26 22:05 ` [PATCH V14 6/7] xen/arm: Enable mem_access on ARM Tamas K Lengyel
2015-03-26 22:05 ` [PATCH V14 7/7] tools/libxc: Allocate magic page for mem access " Tamas K Lengyel
2015-04-15 13:36 ` [PATCH V14 0/7] Mem_access for ARM Ian Campbell
2015-04-15 14:47   ` Tamas K Lengyel
2015-04-15 15:04     ` Ian Campbell
