* [PATCH v3 00/16] xen/arm: Clean-up memory subsystems
@ 2017-06-30 15:54 Julien Grall
  2017-06-30 15:54 ` [PATCH v3 01/16] xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64 Julien Grall
                   ` (16 more replies)
  0 siblings, 17 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Hi all,

This series cleans up the memory subsystems a bit to ease further changes.

Cheers,

Julien Grall (16):
  xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64
  xen/arm: mm: Use typesafe mfn for xenheap_mfn_*
  xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe
  xen/arm: mm: Redefine virt_to_mfn to support typesafe
  xen/arm: domain_build: Redefine virt_to_mfn to support typesafe
  xen/arm: alternative: Redefine virt_to_mfn to support typesafe
  xen/arm: livepatch: Redefine virt_to_mfn to support typesafe
  xen/arm: create_xen_entries: Use typesafe MFN
  xen/arm: Move LPAE definition in a separate header
  xen/arm: lpae: Fix comments coding style
  xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and
    p2m_is_superpage
  xen/arm: p2m: Move lpae_* helpers in lpae.h
  xen/arm: mm: Use lpae_valid and lpae_table in create_xen_entries
  xen/arm: mm: Introduce temporary variable in create_xen_entries
  xen/arm: mm: Use __func__ rather than plain name in format string
  xen/arm: lpae: Switch from bool_t to bool

 xen/arch/arm/alternative.c  |   6 +-
 xen/arch/arm/domain_build.c |   6 +-
 xen/arch/arm/livepatch.c    |   6 +-
 xen/arch/arm/mm.c           |  84 +++++++++---------
 xen/arch/arm/p2m.c          |  72 ++++++---------
 xen/arch/arm/setup.c        |  20 ++---
 xen/include/asm-arm/lpae.h  | 209 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/mm.h    |  14 +--
 xen/include/asm-arm/page.h  | 152 +-------------------------------
 9 files changed, 315 insertions(+), 254 deletions(-)
 create mode 100644 xen/include/asm-arm/lpae.h

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 01/16] xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 20:07   ` Stefano Stabellini
  2017-06-30 15:54 ` [PATCH v3 02/16] xen/arm: mm: Use typesafe mfn for xenheap_mfn_* Julien Grall
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

xenheap_mfn_end stores an MFN, not a physical address, and its value is in
any case reset after the loop. So drop this bogus assignment to
xenheap_mfn_end.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
    Changes in v2:
        - Update commit message

    Changes in v3:
        - Don't mention the fact that Xen is not currently using
        xenheap_mfn_end.
        - Add Stefano's reviewed-by
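
    As a side note (an illustrative sketch only, not part of the patch), the
    problem is a unit mismatch between a byte address and a frame number:

        paddr_t e = bank_end;                  /* byte address */
        unsigned long frame = e >> PAGE_SHIFT; /* frame number, i.e. an MFN */

    The removed line stored the paddr_t 'e' directly into xenheap_mfn_end,
    which is meant to hold an MFN; as the value is reset after the loop
    anyway, the assignment can simply be dropped.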
---
 xen/arch/arm/setup.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index f00f29a45b..ab4d8e4218 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -654,8 +654,6 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
             if ( e > bank_end )
                 e = bank_end;
 
-            xenheap_mfn_end = e;
-
             dt_unreserved_regions(s, e, init_boot_pages, 0);
             s = n;
         }
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 02/16] xen/arm: mm: Use typesafe mfn for xenheap_mfn_*
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
  2017-06-30 15:54 ` [PATCH v3 01/16] xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64 Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 03/16] xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe Julien Grall
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Add more safety when using xenheap_mfn_*.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---

    I haven't introduced mfn_less_than() and mfn_greater_than() as
    there would be only a couple of uses. We would be able to introduce
    them and replace the open-coding easily in the future by grepping for
    mfn_x().
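
    For illustration, such helpers would just be thin wrappers around the
    existing mfn_x() accessor (a sketch only, not introduced here):

        static inline bool mfn_less_than(mfn_t a, mfn_t b)
        {
            return mfn_x(a) < mfn_x(b);
        }

        static inline bool mfn_greater_than(mfn_t a, mfn_t b)
        {
            return mfn_x(a) > mfn_x(b);
        }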

    Changes in v3:
        - Use INVALID_MFN_INITIALIZER rather than INVALID_MFN
---
 xen/arch/arm/mm.c        | 16 ++++++++--------
 xen/arch/arm/setup.c     | 18 +++++++++---------
 xen/include/asm-arm/mm.h | 11 ++++++-----
 3 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 018f44a6d1..d3674e732e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -138,8 +138,8 @@ uint64_t init_ttbr;
 static paddr_t phys_offset;
 
 /* Limits of the Xen heap */
-unsigned long xenheap_mfn_start __read_mostly = ~0UL;
-unsigned long xenheap_mfn_end __read_mostly;
+mfn_t xenheap_mfn_start __read_mostly = INVALID_MFN_INITIALIZER;
+mfn_t xenheap_mfn_end __read_mostly;
 vaddr_t xenheap_virt_end __read_mostly;
 #ifdef CONFIG_ARM_64
 vaddr_t xenheap_virt_start __read_mostly;
@@ -801,8 +801,8 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
 
     /* Record where the xenheap is, for translation routines. */
     xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
-    xenheap_mfn_start = base_mfn;
-    xenheap_mfn_end = base_mfn + nr_mfns;
+    xenheap_mfn_start = _mfn(base_mfn);
+    xenheap_mfn_end = _mfn(base_mfn + nr_mfns);
 }
 #else /* CONFIG_ARM_64 */
 void __init setup_xenheap_mappings(unsigned long base_mfn,
@@ -816,16 +816,16 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
     mfn = base_mfn & ~((FIRST_SIZE>>PAGE_SHIFT)-1);
 
     /* First call sets the xenheap physical and virtual offset. */
-    if ( xenheap_mfn_start == ~0UL )
+    if ( mfn_eq(xenheap_mfn_start, INVALID_MFN) )
     {
-        xenheap_mfn_start = base_mfn;
+        xenheap_mfn_start = _mfn(base_mfn);
         xenheap_virt_start = DIRECTMAP_VIRT_START +
             (base_mfn - mfn) * PAGE_SIZE;
     }
 
-    if ( base_mfn < xenheap_mfn_start )
+    if ( base_mfn < mfn_x(xenheap_mfn_start) )
         panic("cannot add xenheap mapping at %lx below heap start %lx",
-              base_mfn, xenheap_mfn_start);
+              base_mfn, mfn_x(xenheap_mfn_start));
 
     end_mfn = base_mfn + nr_mfns;
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index ab4d8e4218..3b34855668 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -555,8 +555,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
      * and enough mapped pages for copying the DTB.
      */
     dtb_pages = (dtb_size + PAGE_SIZE-1) >> PAGE_SHIFT;
-    boot_mfn_start = xenheap_mfn_end - dtb_pages - 1;
-    boot_mfn_end = xenheap_mfn_end;
+    boot_mfn_start = mfn_x(xenheap_mfn_end) - dtb_pages - 1;
+    boot_mfn_end = mfn_x(xenheap_mfn_end);
 
     init_boot_pages(pfn_to_paddr(boot_mfn_start), pfn_to_paddr(boot_mfn_end));
 
@@ -591,11 +591,11 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
                 e = bank_end;
 
             /* Avoid the xenheap */
-            if ( s < pfn_to_paddr(xenheap_mfn_start+xenheap_pages)
-                 && pfn_to_paddr(xenheap_mfn_start) < e )
+            if ( s < mfn_to_maddr(mfn_add(xenheap_mfn_start, xenheap_pages))
+                 && mfn_to_maddr(xenheap_mfn_start) < e )
             {
-                e = pfn_to_paddr(xenheap_mfn_start);
-                n = pfn_to_paddr(xenheap_mfn_start+xenheap_pages);
+                e = mfn_to_maddr(xenheap_mfn_start);
+                n = mfn_to_maddr(mfn_add(xenheap_mfn_start, xenheap_pages));
             }
 
             dt_unreserved_regions(s, e, init_boot_pages, 0);
@@ -610,7 +610,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     /* Add xenheap memory that was not already added to the boot
        allocator. */
-    init_xenheap_pages(pfn_to_paddr(xenheap_mfn_start),
+    init_xenheap_pages(mfn_to_maddr(xenheap_mfn_start),
                        pfn_to_paddr(boot_mfn_start));
 }
 #else /* CONFIG_ARM_64 */
@@ -662,8 +662,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     total_pages += ram_size >> PAGE_SHIFT;
 
     xenheap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
-    xenheap_mfn_start = ram_start >> PAGE_SHIFT;
-    xenheap_mfn_end = ram_end >> PAGE_SHIFT;
+    xenheap_mfn_start = maddr_to_mfn(ram_start);
+    xenheap_mfn_end = maddr_to_mfn(ram_end);
 
     /*
      * Need enough mapped pages for copying the DTB.
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index ddb3f35ed1..7a35063546 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -107,7 +107,7 @@ struct page_info
 #define PGC_count_width   PG_shift(9)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
 
-extern unsigned long xenheap_mfn_start, xenheap_mfn_end;
+extern mfn_t xenheap_mfn_start, xenheap_mfn_end;
 extern vaddr_t xenheap_virt_end;
 #ifdef CONFIG_ARM_64
 extern vaddr_t xenheap_virt_start;
@@ -117,7 +117,8 @@ extern vaddr_t xenheap_virt_start;
 #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
 #define is_xen_heap_mfn(mfn) ({                                 \
     unsigned long _mfn = (mfn);                                 \
-    (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end);      \
+    (_mfn >= mfn_x(xenheap_mfn_start) &&                        \
+     _mfn < mfn_x(xenheap_mfn_end));                            \
 })
 #else
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
@@ -227,7 +228,7 @@ static inline paddr_t __virt_to_maddr(vaddr_t va)
 static inline void *maddr_to_virt(paddr_t ma)
 {
     ASSERT(is_xen_heap_mfn(ma >> PAGE_SHIFT));
-    ma -= pfn_to_paddr(xenheap_mfn_start);
+    ma -= mfn_to_maddr(xenheap_mfn_start);
     return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
 }
 #else
@@ -235,7 +236,7 @@ static inline void *maddr_to_virt(paddr_t ma)
 {
     ASSERT(pfn_to_pdx(ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
     return (void *)(XENHEAP_VIRT_START -
-                    pfn_to_paddr(xenheap_mfn_start) +
+                    mfn_to_maddr(xenheap_mfn_start) +
                     ((ma & ma_va_bottom_mask) |
                      ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
 }
@@ -276,7 +277,7 @@ static inline struct page_info *virt_to_page(const void *v)
     ASSERT(va < xenheap_virt_end);
 
     pdx = (va - XENHEAP_VIRT_START) >> PAGE_SHIFT;
-    pdx += pfn_to_pdx(xenheap_mfn_start);
+    pdx += pfn_to_pdx(mfn_x(xenheap_mfn_start));
     return frame_table + pdx - frametable_base_pdx;
 }
 
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 03/16] xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
  2017-06-30 15:54 ` [PATCH v3 01/16] xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64 Julien Grall
  2017-06-30 15:54 ` [PATCH v3 02/16] xen/arm: mm: Use typesafe mfn for xenheap_mfn_* Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 20:08   ` Stefano Stabellini
  2017-06-30 15:54 ` [PATCH v3 04/16] xen/arm: mm: Redefine virt_to_mfn to support typesafe Julien Grall
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The file xen/arch/arm/p2m.c is using typesafe MFN in most places. This
requires callers of mfn_to_page and page_to_mfn to use _mfn/mfn_x.

To avoid the extra _mfn/mfn_x, re-define mfn_to_page and page_to_mfn within
xen/arch/arm/p2m.c to handle typesafe MFN.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
    The idea behind redefining mfn_to_page and page_to_mfn locally is
    to split the introduction of typesafe MFN into smaller series and
    between multiple people. We know the file is typesafe-ready, and if
    we decide to make the main helpers typesafe, we would just need to
    drop the definitions at the top of the file.
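
    At the call sites, the local override removes the conversion
    boilerplate; a before/after sketch (variables assumed to be declared
    elsewhere):

        /* Without the override: callers convert by hand. */
        pg  = mfn_to_page(mfn_x(mfn));
        mfn = _mfn(page_to_mfn(pg));

        /* With the override: mfn_t flows through directly. */
        pg  = mfn_to_page(mfn);
        mfn = page_to_mfn(pg);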

    Changes in v3:
        - Use __page_to_mfn(...) rather than mfn_x(page_to_mfn(...))
---
 xen/arch/arm/p2m.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 05eb584e89..3e20a1ec82 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -38,6 +38,12 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 
 #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
 
+/* Override macros from asm/mm.h to make them work with mfn_t */
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+
 unsigned int __read_mostly p2m_ipa_bits;
 
 /* Helpers to lookup the properties of each level */
@@ -115,7 +121,7 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
     printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
 
     printk("P2M @ %p mfn:0x%lx\n",
-           p2m->root, page_to_mfn(p2m->root));
+           p2m->root, __page_to_mfn(p2m->root));
 
     dump_pt_walk(page_to_maddr(p2m->root), addr,
                  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
@@ -591,7 +597,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
      * The access value does not matter because the hardware will ignore
      * the permission fields for table entry.
      */
-    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
+    pte = mfn_to_p2m_entry(page_to_mfn(page), p2m_invalid,
                            p2m->default_access);
 
     p2m_write_pte(entry, pte, p2m->clean_pte);
@@ -650,9 +656,9 @@ static void p2m_put_l3_page(const lpae_t pte)
      */
     if ( p2m_is_foreign(pte.p2m.type) )
     {
-        unsigned long mfn = pte.p2m.base;
+        mfn_t mfn = _mfn(pte.p2m.base);
 
-        ASSERT(mfn_valid(_mfn(mfn)));
+        ASSERT(mfn_valid(mfn));
         put_page(mfn_to_page(mfn));
     }
 }
@@ -702,7 +708,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     mfn = _mfn(entry.p2m.base);
     ASSERT(mfn_valid(mfn));
 
-    pg = mfn_to_page(mfn_x(mfn));
+    pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
     free_domheap_page(pg);
@@ -780,7 +786,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
 
     unmap_domain_page(table);
 
-    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
+    pte = mfn_to_p2m_entry(page_to_mfn(page), p2m_invalid,
                            p2m->default_access);
 
     /*
@@ -1442,7 +1448,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
     if ( !mfn_valid(maddr_to_mfn(maddr)) )
         goto err;
 
-    page = mfn_to_page(mfn_x(maddr_to_mfn(maddr)));
+    page = mfn_to_page(maddr_to_mfn(maddr));
     ASSERT(page);
 
     if ( unlikely(!get_page(page, d)) )
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 04/16] xen/arm: mm: Redefine virt_to_mfn to support typesafe
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (2 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 03/16] xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 05/16] xen/arm: domain_build: " Julien Grall
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The file xen/arch/arm/mm.c is using typesafe MFN in most places. This
requires all callers of virt_to_mfn to wrap the call with _mfn(...).

To avoid the extra _mfn(...), re-define virt_to_mfn within arch/arm/mm.c
to handle typesafe MFN.

This patch also introduces __virt_to_mfn, so virt_to_mfn can be
re-defined easily.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
    Changes in v2:
        - Use __virt_to_mfn rather than mfn_x(virt_to_mfn()).

    Changes in v3:
        - Add Stefano's reviewed-by
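
    For reference, the indirection introduced here (sketched from the diff
    below) is what lets each mfn_t-ready file override the public name:

        /* asm-arm/mm.h: raw helper plus the default, non-typesafe alias. */
        #define __virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT)
        #define virt_to_mfn(va)   __virt_to_mfn(va)

        /* In an mfn_t-ready file such as arch/arm/mm.c: */
        #undef virt_to_mfn
        #define virt_to_mfn(va)   _mfn(__virt_to_mfn(va))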
---
 xen/arch/arm/mm.c        | 16 ++++++++++------
 xen/include/asm-arm/mm.h |  3 ++-
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d3674e732e..1e9e342fd3 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -44,6 +44,10 @@
 
 struct domain *dom_xen, *dom_io, *dom_cow;
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
 /* Static start-of-day pagetables that we use before the allocators
  * are up. These are used by all CPUs during bringup before switching
  * to the CPUs own pagetables.
@@ -479,7 +483,7 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
     unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
-        return virt_to_mfn(va);
+        return __virt_to_mfn(va);
 
     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
     ASSERT(map[slot].pt.avail != 0);
@@ -764,7 +768,7 @@ int init_secondary_pagetables(int cpu)
      * domheap mapping pages. */
     for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
     {
-        pte = mfn_to_xen_entry(_mfn(virt_to_mfn(domheap+i*LPAE_ENTRIES)),
+        pte = mfn_to_xen_entry(virt_to_mfn(domheap+i*LPAE_ENTRIES),
                                WRITEALLOC);
         pte.pt.table = 1;
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
@@ -961,7 +965,7 @@ static int create_xen_table(lpae_t *entry)
     if ( p == NULL )
         return -ENOMEM;
     clear_page(p);
-    pte = mfn_to_xen_entry(_mfn(virt_to_mfn(p)), WRITEALLOC);
+    pte = mfn_to_xen_entry(virt_to_mfn(p), WRITEALLOC);
     pte.pt.table = 1;
     write_pte(entry, pte);
     return 0;
@@ -1215,7 +1219,7 @@ int xenmem_add_to_physmap_one(
     unsigned long idx,
     gfn_t gfn)
 {
-    unsigned long mfn = 0;
+    mfn_t mfn = INVALID_MFN;
     int rc;
     p2m_type_t t;
     struct page_info *page = NULL;
@@ -1301,7 +1305,7 @@ int xenmem_add_to_physmap_one(
             return -EINVAL;
         }
 
-        mfn = page_to_mfn(page);
+        mfn = _mfn(page_to_mfn(page));
         t = p2m_map_foreign;
 
         rcu_unlock_domain(od);
@@ -1320,7 +1324,7 @@ int xenmem_add_to_physmap_one(
     }
 
     /* Map at new location. */
-    rc = guest_physmap_add_entry(d, gfn, _mfn(mfn), 0, t);
+    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
 
     /* If we fail to add the mapping, we need to drop the reference we
      * took earlier on foreign pages */
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 7a35063546..ef84b72474 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -256,7 +256,7 @@ static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa, unsigned int flags)
 #define __va(x)             (maddr_to_virt(x))
 
 /* Convert between Xen-heap virtual addresses and machine frame numbers. */
-#define virt_to_mfn(va)   (virt_to_maddr(va) >> PAGE_SHIFT)
+#define __virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT)
 #define mfn_to_virt(mfn)  (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
 
 /*
@@ -266,6 +266,7 @@ static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa, unsigned int flags)
  */
 #define mfn_to_page(mfn)    __mfn_to_page(mfn)
 #define page_to_mfn(pg)     __page_to_mfn(pg)
+#define virt_to_mfn(va)     __virt_to_mfn(va)
 
 /* Convert between Xen-heap virtual addresses and page-info structures. */
 static inline struct page_info *virt_to_page(const void *v)
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 05/16] xen/arm: domain_build: Redefine virt_to_mfn to support typesafe
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (3 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 04/16] xen/arm: mm: Redefine virt_to_mfn to support typesafe Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 06/16] xen/arm: alternative: " Julien Grall
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The file xen/arch/arm/domain_build.c is using typesafe MFN in most
places. The only caller of virt_to_mfn wraps the call with _mfn(...).

To avoid the extra _mfn(...), re-define virt_to_mfn within
arch/arm/domain_build.c to handle typesafe MFN.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/domain_build.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c6776d76fc..1bec4fa23d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -45,6 +45,10 @@ struct map_range_data
     p2m_type_t p2mt;
 };
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
 //#define DEBUG_11_ALLOCATION
 #ifdef DEBUG_11_ALLOCATION
 # define D11PRINT(fmt, args...) printk(XENLOG_DEBUG fmt, ##args)
@@ -1903,7 +1907,7 @@ static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
     rc = map_regions_p2mt(d,
                           gaddr_to_gfn(d->arch.efi_acpi_gpa),
                           PFN_UP(d->arch.efi_acpi_len),
-                          _mfn(virt_to_mfn(d->arch.efi_acpi_table)),
+                          virt_to_mfn(d->arch.efi_acpi_table),
                           p2m_mmio_direct_c);
     if ( rc != 0 )
     {
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 06/16] xen/arm: alternative: Redefine virt_to_mfn to support typesafe
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (4 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 05/16] xen/arm: domain_build: " Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 07/16] xen/arm: livepatch: " Julien Grall
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The file xen/arch/arm/alternative.c is using typesafe MFN in most
places. The only caller of virt_to_mfn wraps the call with _mfn(...).

To avoid the extra _mfn(...), re-define virt_to_mfn within
xen/arch/arm/alternative.c to handle typesafe MFN.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/alternative.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index 4d7e5b6155..a3bcda3117 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -32,6 +32,10 @@
 #include <asm/insn.h>
 #include <asm/page.h>
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
 extern const struct alt_instr __alt_instructions[], __alt_instructions_end[];
 
 struct alt_region {
@@ -154,7 +158,7 @@ static int __apply_alternatives_multi_stop(void *unused)
     {
         int ret;
         struct alt_region region;
-        mfn_t xen_mfn = _mfn(virt_to_mfn(_start));
+        mfn_t xen_mfn = virt_to_mfn(_start);
         paddr_t xen_size = _end - _start;
         unsigned int xen_order = get_order_from_bytes(xen_size);
         void *xenmap;
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 07/16] xen/arm: livepatch: Redefine virt_to_mfn to support typesafe
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (5 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 06/16] xen/arm: alternative: " Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-07-03  8:21   ` Ross Lagerwall
  2017-06-30 15:54 ` [PATCH v3 08/16] xen/arm: create_xen_entries: Use typesafe MFN Julien Grall
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Ross Lagerwall, Julien Grall, sstabellini

The file xen/arch/arm/livepatch.c is using typesafe MFN in most
places. The only caller of virt_to_mfn wraps the call with _mfn(...).

To avoid the extra _mfn(...), re-define virt_to_mfn within
xen/arch/arm/livepatch.c to handle typesafe MFN.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---

Cc: Ross Lagerwall <ross.lagerwall@citrix.com>

    Changes in v2:
        - Add Stefano's reviewed-by
        - Still missing an ack from Konrad and/or Ross.

    Changes in v3:
        - Add Konrad's acked-by
---
 xen/arch/arm/livepatch.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index de95e54744..3e53524365 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -12,6 +12,10 @@
 #include <asm/livepatch.h>
 #include <asm/mm.h>
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
 void *vmap_of_xen_text;
 
 int arch_livepatch_quiesce(void)
@@ -22,7 +26,7 @@ int arch_livepatch_quiesce(void)
     if ( vmap_of_xen_text )
         return -EINVAL;
 
-    text_mfn = _mfn(virt_to_mfn(_start));
+    text_mfn = virt_to_mfn(_start);
     text_order = get_order_from_bytes(_end - _start);
 
     /*
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 08/16] xen/arm: create_xen_entries: Use typesafe MFN
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (6 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 07/16] xen/arm: livepatch: " Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 09/16] xen/arm: Move LPAE definition in a separate header Julien Grall
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Add a bit more safety when using create_xen_entries.

Also, when destroying/modifying a mapping, the MFN is currently not used.
Rather than passing _mfn(0), use INVALID_MFN to stay consistent with the
other usages.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/mm.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 1e9e342fd3..c732771444 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -980,7 +980,7 @@ enum xenmap_operation {
 
 static int create_xen_entries(enum xenmap_operation op,
                               unsigned long virt,
-                              unsigned long mfn,
+                              mfn_t mfn,
                               unsigned long nr_mfns,
                               unsigned int ai)
 {
@@ -989,7 +989,7 @@ static int create_xen_entries(enum xenmap_operation op,
     lpae_t pte;
     lpae_t *third = NULL;
 
-    for(; addr < addr_end; addr += PAGE_SIZE, mfn++)
+    for(; addr < addr_end; addr += PAGE_SIZE, mfn = mfn_add(mfn, 1))
     {
         if ( !xen_second[second_linear_offset(addr)].pt.valid ||
              !xen_second[second_linear_offset(addr)].pt.table )
@@ -1010,13 +1010,13 @@ static int create_xen_entries(enum xenmap_operation op,
             case RESERVE:
                 if ( third[third_table_offset(addr)].pt.valid )
                 {
-                    printk("create_xen_entries: trying to replace an existing mapping addr=%lx mfn=%lx\n",
-                           addr, mfn);
+                    printk("create_xen_entries: trying to replace an existing mapping addr=%lx mfn=%"PRI_mfn"\n",
+                           addr, mfn_x(mfn));
                     return -EINVAL;
                 }
                 if ( op == RESERVE )
                     break;
-                pte = mfn_to_xen_entry(_mfn(mfn), ai);
+                pte = mfn_to_xen_entry(mfn, ai);
                 pte.pt.table = 1;
                 write_pte(&third[third_table_offset(addr)], pte);
                 break;
@@ -1061,24 +1061,25 @@ int map_pages_to_xen(unsigned long virt,
                      unsigned long nr_mfns,
                      unsigned int flags)
 {
-    return create_xen_entries(INSERT, virt, mfn, nr_mfns, flags);
+    return create_xen_entries(INSERT, virt, _mfn(mfn), nr_mfns, flags);
 }
 
 int populate_pt_range(unsigned long virt, unsigned long mfn,
                       unsigned long nr_mfns)
 {
-    return create_xen_entries(RESERVE, virt, mfn, nr_mfns, 0);
+    return create_xen_entries(RESERVE, virt, _mfn(mfn), nr_mfns, 0);
 }
 
 int destroy_xen_mappings(unsigned long v, unsigned long e)
 {
-    return create_xen_entries(REMOVE, v, 0, (e - v) >> PAGE_SHIFT, 0);
+    return create_xen_entries(REMOVE, v, INVALID_MFN, (e - v) >> PAGE_SHIFT, 0);
 }
 
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
 {
     ASSERT((flags & (PTE_NX | PTE_RO)) == flags);
-    return create_xen_entries(MODIFY, s, 0, (e - s) >> PAGE_SHIFT, flags);
+    return create_xen_entries(MODIFY, s, INVALID_MFN, (e - s) >> PAGE_SHIFT,
+                              flags);
 }
 
 enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 09/16] xen/arm: Move LPAE definition in a separate header
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (7 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 08/16] xen/arm: create_xen_entries: Use typesafe MFN Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-07-03  8:55   ` Sergej Proskurin
  2017-06-30 15:54 ` [PATCH v3 10/16] xen/arm: lpae: Fix comments coding style Julien Grall
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini

page.h is getting bigger. Move every LPAE definition out into a separate
header. There are no functional changes.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---

Cc: proskurin@sec.in.tum.de

    Changes in v2:
        - Move comment after the #endif rather than before
        - Add Stefano's reviewed-by
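
    As a quick cross-check of the level arithmetic in the moved comment
    (illustrative only; the relations come straight from the header): each
    level resolves LPAE_SHIFT = 9 bits on top of the 12-bit page offset,
    so four levels give 12 + 4 * 9 = 48 bits of input address:

        BUILD_BUG_ON(THIRD_SHIFT   != PAGE_SHIFT);                /* 12 */
        BUILD_BUG_ON(SECOND_SHIFT  != THIRD_SHIFT  + LPAE_SHIFT); /* 21 */
        BUILD_BUG_ON(FIRST_SHIFT   != SECOND_SHIFT + LPAE_SHIFT); /* 30 */
        BUILD_BUG_ON(ZEROETH_SHIFT != FIRST_SHIFT  + LPAE_SHIFT); /* 39 */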
---
 xen/include/asm-arm/lpae.h | 169 +++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/page.h | 152 +---------------------------------------
 2 files changed, 170 insertions(+), 151 deletions(-)
 create mode 100644 xen/include/asm-arm/lpae.h

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
new file mode 100644
index 0000000000..ad8c571ea5
--- /dev/null
+++ b/xen/include/asm-arm/lpae.h
@@ -0,0 +1,169 @@
+#ifndef __ARM_LPAE_H__
+#define __ARM_LPAE_H__
+
+#ifndef __ASSEMBLY__
+
+/* WARNING!  Unlike the Intel pagetable code, where l1 is the lowest
+ * level and l4 is the root of the trie, the ARM pagetables follow ARM's
+ * documentation: the levels are called first, second &c in the order
+ * that the MMU walks them (i.e. "first" is the root of the trie). */
+
+/******************************************************************************
+ * ARMv7-A LPAE pagetables: 3-level trie, mapping 40-bit input to
+ * 40-bit output addresses.  Tables at all levels have 512 64-bit entries
+ * (i.e. are 4Kb long).
+ *
+ * The bit-shuffling that has the permission bits in branch nodes in a
+ * different place from those in leaf nodes seems to be to allow linear
+ * pagetable tricks.  If we're not doing that then the set of permission
+ * bits that's not in use in a given node type can be used as
+ * extra software-defined bits. */
+
+typedef struct __packed {
+    /* These are used in all kinds of entry. */
+    unsigned long valid:1;      /* Valid mapping */
+    unsigned long table:1;      /* == 1 in 4k map entries too */
+
+    /* These ten bits are only used in Block entries and are ignored
+     * in Table entries. */
+    unsigned long ai:3;         /* Attribute Index */
+    unsigned long ns:1;         /* Not-Secure */
+    unsigned long user:1;       /* User-visible */
+    unsigned long ro:1;         /* Read-Only */
+    unsigned long sh:2;         /* Shareability */
+    unsigned long af:1;         /* Access Flag */
+    unsigned long ng:1;         /* Not-Global */
+
+    /* The base address must be appropriately aligned for Block entries */
+    unsigned long long base:36; /* Base address of block or next table */
+    unsigned long sbz:4;        /* Must be zero */
+
+    /* These seven bits are only used in Block entries and are ignored
+     * in Table entries. */
+    unsigned long contig:1;     /* In a block of 16 contiguous entries */
+    unsigned long pxn:1;        /* Privileged-XN */
+    unsigned long xn:1;         /* eXecute-Never */
+    unsigned long avail:4;      /* Ignored by hardware */
+
+    /* These 5 bits are only used in Table entries and are ignored in
+     * Block entries */
+    unsigned long pxnt:1;       /* Privileged-XN */
+    unsigned long xnt:1;        /* eXecute-Never */
+    unsigned long apt:2;        /* Access Permissions */
+    unsigned long nst:1;        /* Not-Secure */
+} lpae_pt_t;
+
+/* The p2m tables have almost the same layout, but some of the permission
+ * and cache-control bits are laid out differently (or missing) */
+typedef struct __packed {
+    /* These are used in all kinds of entry. */
+    unsigned long valid:1;      /* Valid mapping */
+    unsigned long table:1;      /* == 1 in 4k map entries too */
+
+    /* These ten bits are only used in Block entries and are ignored
+     * in Table entries. */
+    unsigned long mattr:4;      /* Memory Attributes */
+    unsigned long read:1;       /* Read access */
+    unsigned long write:1;      /* Write access */
+    unsigned long sh:2;         /* Shareability */
+    unsigned long af:1;         /* Access Flag */
+    unsigned long sbz4:1;
+
+    /* The base address must be appropriately aligned for Block entries */
+    unsigned long long base:36; /* Base address of block or next table */
+    unsigned long sbz3:4;
+
+    /* These seven bits are only used in Block entries and are ignored
+     * in Table entries. */
+    unsigned long contig:1;     /* In a block of 16 contiguous entries */
+    unsigned long sbz2:1;
+    unsigned long xn:1;         /* eXecute-Never */
+    unsigned long type:4;       /* Ignore by hardware. Used to store p2m types */
+
+    unsigned long sbz1:5;
+} lpae_p2m_t;
+
+/* Permission mask: xn, write, read */
+#define P2M_PERM_MASK (0x00400000000000C0ULL)
+#define P2M_CLEAR_PERM(pte) ((pte).bits & ~P2M_PERM_MASK)
+
+/*
+ * Walk is the common bits of p2m and pt entries which are needed to
+ * simply walk the table (e.g. for debug).
+ */
+typedef struct __packed {
+    /* These are used in all kinds of entry. */
+    unsigned long valid:1;      /* Valid mapping */
+    unsigned long table:1;      /* == 1 in 4k map entries too */
+
+    unsigned long pad2:10;
+
+    /* The base address must be appropriately aligned for Block entries */
+    unsigned long long base:36; /* Base address of block or next table */
+
+    unsigned long pad1:16;
+} lpae_walk_t;
+
+typedef union {
+    uint64_t bits;
+    lpae_pt_t pt;
+    lpae_p2m_t p2m;
+    lpae_walk_t walk;
+} lpae_t;
+
+#endif /* __ASSEMBLY__ */
+
+/*
+ * These numbers add up to a 48-bit input address space.
+ *
+ * On 32-bit the zeroeth level does not exist, therefore the total is
+ * 39-bits. The ARMv7-A architecture actually specifies a 40-bit input
+ * address space for the p2m, with an 8K (1024-entry) top-level table.
+ * However Xen only supports 16GB of RAM on 32-bit ARM systems and
+ * therefore 39-bits are sufficient.
+ */
+
+#define LPAE_SHIFT      9
+#define LPAE_ENTRIES    (_AC(1,U) << LPAE_SHIFT)
+#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
+
+#define THIRD_SHIFT    (PAGE_SHIFT)
+#define THIRD_ORDER    (THIRD_SHIFT - PAGE_SHIFT)
+#define THIRD_SIZE     ((paddr_t)1 << THIRD_SHIFT)
+#define THIRD_MASK     (~(THIRD_SIZE - 1))
+#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
+#define SECOND_ORDER   (SECOND_SHIFT - PAGE_SHIFT)
+#define SECOND_SIZE    ((paddr_t)1 << SECOND_SHIFT)
+#define SECOND_MASK    (~(SECOND_SIZE - 1))
+#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
+#define FIRST_ORDER    (FIRST_SHIFT - PAGE_SHIFT)
+#define FIRST_SIZE     ((paddr_t)1 << FIRST_SHIFT)
+#define FIRST_MASK     (~(FIRST_SIZE - 1))
+#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
+#define ZEROETH_ORDER  (ZEROETH_SHIFT - PAGE_SHIFT)
+#define ZEROETH_SIZE   ((paddr_t)1 << ZEROETH_SHIFT)
+#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
+
+/* Calculate the offsets into the pagetables for a given VA */
+#define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
+#define first_linear_offset(va) ((va) >> FIRST_SHIFT)
+#define second_linear_offset(va) ((va) >> SECOND_SHIFT)
+#define third_linear_offset(va) ((va) >> THIRD_SHIFT)
+
+#define TABLE_OFFSET(offs) ((unsigned int)(offs) & LPAE_ENTRY_MASK)
+#define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
+#define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
+#define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
+#define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
+
+#endif /* __ARM_LPAE_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 3670ab665d..cef2f28914 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -3,6 +3,7 @@
 
 #include <public/xen.h>
 #include <asm/processor.h>
+#include <asm/lpae.h>
 
 #ifdef CONFIG_ARM_64
 #define PADDR_BITS              48
@@ -97,114 +98,6 @@
 #include <xen/lib.h>
 #include <asm/system.h>
 
-/* WARNING!  Unlike the Intel pagetable code, where l1 is the lowest
- * level and l4 is the root of the trie, the ARM pagetables follow ARM's
- * documentation: the levels are called first, second &c in the order
- * that the MMU walks them (i.e. "first" is the root of the trie). */
-
-/******************************************************************************
- * ARMv7-A LPAE pagetables: 3-level trie, mapping 40-bit input to
- * 40-bit output addresses.  Tables at all levels have 512 64-bit entries
- * (i.e. are 4Kb long).
- *
- * The bit-shuffling that has the permission bits in branch nodes in a
- * different place from those in leaf nodes seems to be to allow linear
- * pagetable tricks.  If we're not doing that then the set of permission
- * bits that's not in use in a given node type can be used as
- * extra software-defined bits. */
-
-typedef struct __packed {
-    /* These are used in all kinds of entry. */
-    unsigned long valid:1;      /* Valid mapping */
-    unsigned long table:1;      /* == 1 in 4k map entries too */
-
-    /* These ten bits are only used in Block entries and are ignored
-     * in Table entries. */
-    unsigned long ai:3;         /* Attribute Index */
-    unsigned long ns:1;         /* Not-Secure */
-    unsigned long user:1;       /* User-visible */
-    unsigned long ro:1;         /* Read-Only */
-    unsigned long sh:2;         /* Shareability */
-    unsigned long af:1;         /* Access Flag */
-    unsigned long ng:1;         /* Not-Global */
-
-    /* The base address must be appropriately aligned for Block entries */
-    unsigned long long base:36; /* Base address of block or next table */
-    unsigned long sbz:4;        /* Must be zero */
-
-    /* These seven bits are only used in Block entries and are ignored
-     * in Table entries. */
-    unsigned long contig:1;     /* In a block of 16 contiguous entries */
-    unsigned long pxn:1;        /* Privileged-XN */
-    unsigned long xn:1;         /* eXecute-Never */
-    unsigned long avail:4;      /* Ignored by hardware */
-
-    /* These 5 bits are only used in Table entries and are ignored in
-     * Block entries */
-    unsigned long pxnt:1;       /* Privileged-XN */
-    unsigned long xnt:1;        /* eXecute-Never */
-    unsigned long apt:2;        /* Access Permissions */
-    unsigned long nst:1;        /* Not-Secure */
-} lpae_pt_t;
-
-/* The p2m tables have almost the same layout, but some of the permission
- * and cache-control bits are laid out differently (or missing) */
-typedef struct __packed {
-    /* These are used in all kinds of entry. */
-    unsigned long valid:1;      /* Valid mapping */
-    unsigned long table:1;      /* == 1 in 4k map entries too */
-
-    /* These ten bits are only used in Block entries and are ignored
-     * in Table entries. */
-    unsigned long mattr:4;      /* Memory Attributes */
-    unsigned long read:1;       /* Read access */
-    unsigned long write:1;      /* Write access */
-    unsigned long sh:2;         /* Shareability */
-    unsigned long af:1;         /* Access Flag */
-    unsigned long sbz4:1;
-
-    /* The base address must be appropriately aligned for Block entries */
-    unsigned long long base:36; /* Base address of block or next table */
-    unsigned long sbz3:4;
-
-    /* These seven bits are only used in Block entries and are ignored
-     * in Table entries. */
-    unsigned long contig:1;     /* In a block of 16 contiguous entries */
-    unsigned long sbz2:1;
-    unsigned long xn:1;         /* eXecute-Never */
-    unsigned long type:4;       /* Ignore by hardware. Used to store p2m types */
-
-    unsigned long sbz1:5;
-} lpae_p2m_t;
-
-/* Permission mask: xn, write, read */
-#define P2M_PERM_MASK (0x00400000000000C0ULL)
-#define P2M_CLEAR_PERM(pte) ((pte).bits & ~P2M_PERM_MASK)
-
-/*
- * Walk is the common bits of p2m and pt entries which are needed to
- * simply walk the table (e.g. for debug).
- */
-typedef struct __packed {
-    /* These are used in all kinds of entry. */
-    unsigned long valid:1;      /* Valid mapping */
-    unsigned long table:1;      /* == 1 in 4k map entries too */
-
-    unsigned long pad2:10;
-
-    /* The base address must be appropriately aligned for Block entries */
-    unsigned long long base:36; /* Base address of block or next table */
-
-    unsigned long pad1:16;
-} lpae_walk_t;
-
-typedef union {
-    uint64_t bits;
-    lpae_pt_t pt;
-    lpae_p2m_t p2m;
-    lpae_walk_t walk;
-} lpae_t;
-
 #if defined(CONFIG_ARM_32)
 # include <asm/arm32/page.h>
 #elif defined(CONFIG_ARM_64)
@@ -390,49 +283,6 @@ static inline int gva_to_ipa(vaddr_t va, paddr_t *paddr, unsigned int flags)
 
 #endif /* __ASSEMBLY__ */
 
-/*
- * These numbers add up to a 48-bit input address space.
- *
- * On 32-bit the zeroeth level does not exist, therefore the total is
- * 39-bits. The ARMv7-A architecture actually specifies a 40-bit input
- * address space for the p2m, with an 8K (1024-entry) top-level table.
- * However Xen only supports 16GB of RAM on 32-bit ARM systems and
- * therefore 39-bits are sufficient.
- */
-
-#define LPAE_SHIFT      9
-#define LPAE_ENTRIES    (_AC(1,U) << LPAE_SHIFT)
-#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
-
-#define THIRD_SHIFT    (PAGE_SHIFT)
-#define THIRD_ORDER    (THIRD_SHIFT - PAGE_SHIFT)
-#define THIRD_SIZE     ((paddr_t)1 << THIRD_SHIFT)
-#define THIRD_MASK     (~(THIRD_SIZE - 1))
-#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
-#define SECOND_ORDER   (SECOND_SHIFT - PAGE_SHIFT)
-#define SECOND_SIZE    ((paddr_t)1 << SECOND_SHIFT)
-#define SECOND_MASK    (~(SECOND_SIZE - 1))
-#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
-#define FIRST_ORDER    (FIRST_SHIFT - PAGE_SHIFT)
-#define FIRST_SIZE     ((paddr_t)1 << FIRST_SHIFT)
-#define FIRST_MASK     (~(FIRST_SIZE - 1))
-#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
-#define ZEROETH_ORDER  (ZEROETH_SHIFT - PAGE_SHIFT)
-#define ZEROETH_SIZE   ((paddr_t)1 << ZEROETH_SHIFT)
-#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
-
-/* Calculate the offsets into the pagetables for a given VA */
-#define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
-#define first_linear_offset(va) ((va) >> FIRST_SHIFT)
-#define second_linear_offset(va) ((va) >> SECOND_SHIFT)
-#define third_linear_offset(va) ((va) >> THIRD_SHIFT)
-
-#define TABLE_OFFSET(offs) ((unsigned int)(offs) & LPAE_ENTRY_MASK)
-#define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
-#define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
-#define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
-#define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
-
 #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)
 
 #endif /* __ARM_PAGE_H__ */
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 10/16] xen/arm: lpae: Fix comments coding style
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (8 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 09/16] xen/arm: Move LPAE definition in a separate header Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-07-02 17:13   ` Sergej Proskurin
  2017-06-30 15:54 ` [PATCH v3 11/16] xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage Julien Grall
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini

Also add one missing full stop and fix the description.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---

Cc: proskurin@sec.in.tum.de

    I haven't retained Stefano's reviewed-by because of the description
    update.

   Changes in v2:
        - Fix description regarding x86 page-table

    Changes in v3:
        - Add Stefano's reviewed-by
---
 xen/include/asm-arm/lpae.h | 49 ++++++++++++++++++++++++++++++----------------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index ad8c571ea5..aa85cb8112 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -3,10 +3,12 @@
 
 #ifndef __ASSEMBLY__
 
-/* WARNING!  Unlike the Intel pagetable code, where l1 is the lowest
- * level and l4 is the root of the trie, the ARM pagetables follow ARM's
- * documentation: the levels are called first, second &c in the order
- * that the MMU walks them (i.e. "first" is the root of the trie). */
+/*
+ * WARNING!  Unlike the x86 pagetable code, where l1 is the lowest level and
+ * l4 is the root of the trie, the ARM pagetables follow ARM's documentation:
+ * the levels are called first, second &c in the order that the MMU walks them
+ * (i.e. "first" is the root of the trie).
+ */
 
 /******************************************************************************
  * ARMv7-A LPAE pagetables: 3-level trie, mapping 40-bit input to
@@ -17,15 +19,18 @@
  * different place from those in leaf nodes seems to be to allow linear
  * pagetable tricks.  If we're not doing that then the set of permission
  * bits that's not in use in a given node type can be used as
- * extra software-defined bits. */
+ * extra software-defined bits.
+ */
 
 typedef struct __packed {
     /* These are used in all kinds of entry. */
     unsigned long valid:1;      /* Valid mapping */
     unsigned long table:1;      /* == 1 in 4k map entries too */
 
-    /* These ten bits are only used in Block entries and are ignored
-     * in Table entries. */
+    /*
+     * These ten bits are only used in Block entries and are ignored
+     * in Table entries.
+     */
     unsigned long ai:3;         /* Attribute Index */
     unsigned long ns:1;         /* Not-Secure */
     unsigned long user:1;       /* User-visible */
@@ -38,30 +43,38 @@ typedef struct __packed {
     unsigned long long base:36; /* Base address of block or next table */
     unsigned long sbz:4;        /* Must be zero */
 
-    /* These seven bits are only used in Block entries and are ignored
-     * in Table entries. */
+    /*
+     * These seven bits are only used in Block entries and are ignored
+     * in Table entries.
+     */
     unsigned long contig:1;     /* In a block of 16 contiguous entries */
     unsigned long pxn:1;        /* Privileged-XN */
     unsigned long xn:1;         /* eXecute-Never */
     unsigned long avail:4;      /* Ignored by hardware */
 
-    /* These 5 bits are only used in Table entries and are ignored in
-     * Block entries */
+    /*
+     * These 5 bits are only used in Table entries and are ignored in
+     * Block entries.
+     */
     unsigned long pxnt:1;       /* Privileged-XN */
     unsigned long xnt:1;        /* eXecute-Never */
     unsigned long apt:2;        /* Access Permissions */
     unsigned long nst:1;        /* Not-Secure */
 } lpae_pt_t;
 
-/* The p2m tables have almost the same layout, but some of the permission
- * and cache-control bits are laid out differently (or missing) */
+/*
+ * The p2m tables have almost the same layout, but some of the permission
+ * and cache-control bits are laid out differently (or missing).
+ */
 typedef struct __packed {
     /* These are used in all kinds of entry. */
     unsigned long valid:1;      /* Valid mapping */
     unsigned long table:1;      /* == 1 in 4k map entries too */
 
-    /* These ten bits are only used in Block entries and are ignored
-     * in Table entries. */
+    /*
+     * These ten bits are only used in Block entries and are ignored
+     * in Table entries.
+     */
     unsigned long mattr:4;      /* Memory Attributes */
     unsigned long read:1;       /* Read access */
     unsigned long write:1;      /* Write access */
@@ -73,8 +86,10 @@ typedef struct __packed {
     unsigned long long base:36; /* Base address of block or next table */
     unsigned long sbz3:4;
 
-    /* These seven bits are only used in Block entries and are ignored
-     * in Table entries. */
+    /*
+     * These seven bits are only used in Block entries and are ignored
+     * in Table entries.
+     */
     unsigned long contig:1;     /* In a block of 16 contiguous entries */
     unsigned long sbz2:1;
     unsigned long xn:1;         /* eXecute-Never */
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v3 11/16] xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (9 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 10/16] xen/arm: lpae: Fix comments coding style Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-07-02 17:17   ` Sergej Proskurin
  2017-06-30 15:54 ` [PATCH v3 12/16] xen/arm: p2m: Move lpae_* helpers in lpae.h Julien Grall
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini

The helpers p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage are
not specific to stage-2 translation tables. They can also work on any
LPAE translation tables. So rename them to lpae_* and use pte.walk to
look up the value of the fields.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---

Cc: proskurin@sec.in.tum.de

    s/bool_t/bool/ will be done in a separate patch
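
    The rename is safe because the walk view only relies on bits that the
    pt and p2m views lay out identically; a small sanity sketch
    (illustrative only):

        lpae_t e = { .bits = 0x3 };   /* valid = 1, table = 1 */

        /* valid and table sit in the same position in every view, so the
         * lpae_* helpers work on stage-1 and stage-2 entries alike. */
        ASSERT(e.walk.valid && e.pt.valid && e.p2m.valid);
        ASSERT(e.walk.table && e.pt.table && e.p2m.table);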

    Changes in v2:
        - Patch added

    Changes in v3:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c | 45 +++++++++++++++++++++++----------------------
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 3e20a1ec82..381df1f237 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -52,27 +52,27 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
     { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static inline bool_t p2m_valid(lpae_t pte)
+static inline bool_t lpae_valid(lpae_t pte)
 {
-    return pte.p2m.valid;
+    return pte.walk.valid;
 }
 /*
  * These two can only be used on L0..L2 ptes because L3 mappings set
  * the table bit and therefore these would return the opposite to what
  * you would expect.
  */
-static inline bool_t p2m_table(lpae_t pte)
+static inline bool_t lpae_table(lpae_t pte)
 {
-    return p2m_valid(pte) && pte.p2m.table;
+    return lpae_valid(pte) && pte.walk.table;
 }
-static inline bool_t p2m_mapping(lpae_t pte)
+static inline bool_t lpae_mapping(lpae_t pte)
 {
-    return p2m_valid(pte) && !pte.p2m.table;
+    return lpae_valid(pte) && !pte.walk.table;
 }
 
-static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
+static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
 {
-    return (level < 3) && p2m_mapping(pte);
+    return (level < 3) && lpae_mapping(pte);
 }
 
 static void p2m_flush_tlb(struct p2m_domain *p2m);
@@ -281,7 +281,7 @@ static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
 
     entry = *table + offset;
 
-    if ( !p2m_valid(*entry) )
+    if ( !lpae_valid(*entry) )
     {
         if ( read_only )
             return GUEST_TABLE_MAP_FAILED;
@@ -292,7 +292,7 @@ static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
     }
 
     /* The function p2m_next_level is never called at the 3rd level */
-    if ( p2m_mapping(*entry) )
+    if ( lpae_mapping(*entry) )
         return GUEST_TABLE_SUPER_PAGE;
 
     mfn = _mfn(entry->p2m.base);
@@ -372,7 +372,7 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 
     entry = table[offsets[level]];
 
-    if ( p2m_valid(entry) )
+    if ( lpae_valid(entry) )
     {
         *t = entry.p2m.type;
 
@@ -577,7 +577,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
     lpae_t *p;
     lpae_t pte;
 
-    ASSERT(!p2m_valid(*entry));
+    ASSERT(!lpae_valid(*entry));
 
     page = alloc_domheap_page(NULL, 0);
     if ( page == NULL )
@@ -645,7 +645,7 @@ enum p2m_operation {
  */
 static void p2m_put_l3_page(const lpae_t pte)
 {
-    ASSERT(p2m_valid(pte));
+    ASSERT(lpae_valid(pte));
 
     /*
      * TODO: Handle other p2m types
@@ -673,11 +673,11 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     struct page_info *pg;
 
     /* Nothing to do if the entry is invalid. */
-    if ( !p2m_valid(entry) )
+    if ( !lpae_valid(entry) )
         return;
 
     /* Nothing to do but updating the stats if the entry is a super-page. */
-    if ( p2m_is_superpage(entry, level) )
+    if ( lpae_is_superpage(entry, level) )
     {
         p2m->stats.mappings[level]--;
         return;
@@ -733,7 +733,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
      * a superpage.
      */
     ASSERT(level < target);
-    ASSERT(p2m_is_superpage(*entry, level));
+    ASSERT(lpae_is_superpage(*entry, level));
 
     page = alloc_domheap_page(NULL, 0);
     if ( !page )
@@ -870,7 +870,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
         /* We need to split the original page. */
         lpae_t split_pte = *entry;
 
-        ASSERT(p2m_is_superpage(*entry, level));
+        ASSERT(lpae_is_superpage(*entry, level));
 
         if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
         {
@@ -944,12 +944,12 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
      * sequence when updating the translation table (D4.7.1 in ARM DDI
      * 0487A.j).
      */
-    if ( p2m_valid(orig_pte) )
+    if ( lpae_valid(orig_pte) )
         p2m_remove_pte(entry, p2m->clean_pte);
 
     if ( mfn_eq(smfn, INVALID_MFN) )
         /* Flush can be deferred if the entry is removed */
-        p2m->need_flush |= !!p2m_valid(orig_pte);
+        p2m->need_flush |= !!lpae_valid(orig_pte);
     else
     {
         lpae_t pte = mfn_to_p2m_entry(smfn, t, a);
@@ -964,7 +964,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
          * Although, it could be defered when only the permissions are
          * changed (e.g in case of memaccess).
          */
-        if ( p2m_valid(orig_pte) )
+        if ( lpae_valid(orig_pte) )
         {
             if ( likely(!p2m->mem_access_enabled) ||
                  P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
@@ -986,10 +986,11 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
      * Free the entry only if the original pte was valid and the base
      * is different (to avoid freeing when permission is changed).
      */
-    if ( p2m_valid(orig_pte) && entry->p2m.base != orig_pte.p2m.base )
+    if ( lpae_valid(orig_pte) && entry->p2m.base != orig_pte.p2m.base )
         p2m_free_entry(p2m, orig_pte, level);
 
-    if ( need_iommu(p2m->domain) && (p2m_valid(orig_pte) || p2m_valid(*entry)) )
+    if ( need_iommu(p2m->domain) &&
+         (lpae_valid(orig_pte) || lpae_valid(*entry)) )
         rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
     else
         rc = 0;
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 12/16] xen/arm: p2m: Move lpae_* helpers in lpae.h
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (10 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 11/16] xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-07-02 17:20   ` Sergej Proskurin
  2017-06-30 15:54 ` [PATCH v3 13/16] xen/arm: mm: Use lpae_valid and lpae_table in create_xen_entries Julien Grall
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini

The lpae_* helpers can work on any LPAE translation table. Move them to
lpae.h to allow other parts of Xen to use them.
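
For illustration only, a hedged sketch of a hypothetical user outside
p2m.c (the function is made up and assumes the caller reaches the helpers
via asm/lpae.h, either directly or through asm/page.h):

    #include <xen/lib.h>
    #include <asm/lpae.h>

    /* e.g. a debug helper in xen/arch/arm/mm.c */
    static void dump_lpae_entry(lpae_t pte, unsigned int level)
    {
        if ( !lpae_valid(pte) )
            printk("level %u: invalid\n", level);
        else if ( lpae_is_superpage(pte, level) )
            printk("level %u: superpage\n", level);
        else if ( level == 3 )
            printk("level %u: page\n", level);
        else
            printk("level %u: table\n", level);
    }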

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---

Cc: proskurin@sec.in.tum.de

    Changes in v2:
        - Patch added

    Changes in v3:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c         | 23 -----------------------
 xen/include/asm-arm/lpae.h | 25 +++++++++++++++++++++++++
 2 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 381df1f237..9b7a580a87 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -52,29 +52,6 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
     { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static inline bool_t lpae_valid(lpae_t pte)
-{
-    return pte.walk.valid;
-}
-/*
- * These two can only be used on L0..L2 ptes because L3 mappings set
- * the table bit and therefore these would return the opposite to what
- * you would expect.
- */
-static inline bool_t lpae_table(lpae_t pte)
-{
-    return lpae_valid(pte) && pte.walk.table;
-}
-static inline bool_t lpae_mapping(lpae_t pte)
-{
-    return lpae_valid(pte) && !pte.walk.table;
-}
-
-static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
-{
-    return (level < 3) && lpae_mapping(pte);
-}
-
 static void p2m_flush_tlb(struct p2m_domain *p2m);
 
 /* Unlock the flush and do a P2M TLB flush if necessary */
diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index aa85cb8112..6fbf7c606c 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -126,6 +126,31 @@ typedef union {
     lpae_walk_t walk;
 } lpae_t;
 
+static inline bool_t lpae_valid(lpae_t pte)
+{
+    return pte.walk.valid;
+}
+
+/*
+ * These two can only be used on L0..L2 ptes because L3 mappings set
+ * the table bit and therefore these would return the opposite to what
+ * you would expect.
+ */
+static inline bool_t lpae_table(lpae_t pte)
+{
+    return lpae_valid(pte) && pte.walk.table;
+}
+
+static inline bool_t lpae_mapping(lpae_t pte)
+{
+    return lpae_valid(pte) && !pte.walk.table;
+}
+
+static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
+{
+    return (level < 3) && lpae_mapping(pte);
+}
+
 #endif /* __ASSEMBLY__ */
 
 /*
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 13/16] xen/arm: mm: Use lpae_valid and lpae_table in create_xen_entries
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (11 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 12/16] xen/arm: p2m: Move lpae_* helpers in lpae.h Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 14/16] xen/arm: mm: Introduce temporary variable " Julien Grall
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The newly introduced lpae_valid and lpae_table helpers reduce the code
and make it more readable.
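
For reference, a hedged sketch of why the substitution below preserves
behaviour (the wrapper name is made up):

    /*
     * !lpae_table(pte) expands to !(pte valid && table bit set), which by
     * De Morgan is !valid || !table, i.e. the old open-coded check.
     */
    static bool xen_second_needs_table(lpae_t pte)
    {
        return !lpae_table(pte);
    }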

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---

    Changes in v2:
        - Patch added

    Changes in v3:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/mm.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index c732771444..21b3df395f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -991,8 +991,7 @@ static int create_xen_entries(enum xenmap_operation op,
 
     for(; addr < addr_end; addr += PAGE_SIZE, mfn = mfn_add(mfn, 1))
     {
-        if ( !xen_second[second_linear_offset(addr)].pt.valid ||
-             !xen_second[second_linear_offset(addr)].pt.table )
+        if ( !lpae_table(xen_second[second_linear_offset(addr)]) )
         {
             rc = create_xen_table(&xen_second[second_linear_offset(addr)]);
             if ( rc < 0 ) {
@@ -1001,14 +1000,14 @@ static int create_xen_entries(enum xenmap_operation op,
             }
         }
 
-        BUG_ON(!xen_second[second_linear_offset(addr)].pt.valid);
+        BUG_ON(!lpae_valid(xen_second[second_linear_offset(addr)]));
 
         third = mfn_to_virt(xen_second[second_linear_offset(addr)].pt.base);
 
         switch ( op ) {
             case INSERT:
             case RESERVE:
-                if ( third[third_table_offset(addr)].pt.valid )
+                if ( lpae_valid(third[third_table_offset(addr)]) )
                 {
                     printk("create_xen_entries: trying to replace an existing mapping addr=%lx mfn=%"PRI_mfn"\n",
                            addr, mfn_x(mfn));
@@ -1022,7 +1021,7 @@ static int create_xen_entries(enum xenmap_operation op,
                 break;
             case MODIFY:
             case REMOVE:
-                if ( !third[third_table_offset(addr)].pt.valid )
+                if ( !lpae_valid(third[third_table_offset(addr)]) )
                 {
                     printk("create_xen_entries: trying to %s a non-existing mapping addr=%lx\n",
                            op == REMOVE ? "remove" : "modify", addr);
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 14/16] xen/arm: mm: Introduce temporary variable in create_xen_entries
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (12 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 13/16] xen/arm: mm: Use lpae_valid and lpae_table in create_xen_entries Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 15/16] xen/arm: mm: Use __func__ rather than plain name in format string Julien Grall
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

This improves the code readability and avoids dereferencing the table
every single time we need to access the entry.
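
A minimal, self-contained sketch of the pattern (the function is made up;
the real change below applies it to xen_second in create_xen_entries):

    /* Compute the entry pointer once and reuse it for every access. */
    static bool is_second_level_table(lpae_t *second, unsigned long addr)
    {
        lpae_t *entry = &second[second_linear_offset(addr)];

        return lpae_table(*entry);
    }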

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
    Changes in v2:
        - Patch added

    Changes in v3:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/mm.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 21b3df395f..ea4a35ed7a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -986,28 +986,30 @@ static int create_xen_entries(enum xenmap_operation op,
 {
     int rc;
     unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
-    lpae_t pte;
+    lpae_t pte, *entry;
     lpae_t *third = NULL;
 
     for(; addr < addr_end; addr += PAGE_SIZE, mfn = mfn_add(mfn, 1))
     {
-        if ( !lpae_table(xen_second[second_linear_offset(addr)]) )
+        entry = &xen_second[second_linear_offset(addr)];
+        if ( !lpae_table(*entry) )
         {
-            rc = create_xen_table(&xen_second[second_linear_offset(addr)]);
+            rc = create_xen_table(entry);
             if ( rc < 0 ) {
                 printk("create_xen_entries: L2 failed\n");
                 goto out;
             }
         }
 
-        BUG_ON(!lpae_valid(xen_second[second_linear_offset(addr)]));
+        BUG_ON(!lpae_valid(*entry));
 
-        third = mfn_to_virt(xen_second[second_linear_offset(addr)].pt.base);
+        third = mfn_to_virt(entry->pt.base);
+        entry = &third[third_table_offset(addr)];
 
         switch ( op ) {
             case INSERT:
             case RESERVE:
-                if ( lpae_valid(third[third_table_offset(addr)]) )
+                if ( lpae_valid(*entry) )
                 {
                     printk("create_xen_entries: trying to replace an existing mapping addr=%lx mfn=%"PRI_mfn"\n",
                            addr, mfn_x(mfn));
@@ -1017,11 +1019,11 @@ static int create_xen_entries(enum xenmap_operation op,
                     break;
                 pte = mfn_to_xen_entry(mfn, ai);
                 pte.pt.table = 1;
-                write_pte(&third[third_table_offset(addr)], pte);
+                write_pte(entry, pte);
                 break;
             case MODIFY:
             case REMOVE:
-                if ( !lpae_valid(third[third_table_offset(addr)]) )
+                if ( !lpae_valid(*entry) )
                 {
                     printk("create_xen_entries: trying to %s a non-existing mapping addr=%lx\n",
                            op == REMOVE ? "remove" : "modify", addr);
@@ -1031,7 +1033,7 @@ static int create_xen_entries(enum xenmap_operation op,
                     pte.bits = 0;
                 else
                 {
-                    pte = third[third_table_offset(addr)];
+                    pte = *entry;
                     pte.pt.ro = PTE_RO_MASK(ai);
                     pte.pt.xn = PTE_NX_MASK(ai);
                     if ( !pte.pt.ro && !pte.pt.xn )
@@ -1041,7 +1043,7 @@ static int create_xen_entries(enum xenmap_operation op,
                         return -EINVAL;
                     }
                 }
-                write_pte(&third[third_table_offset(addr)], pte);
+                write_pte(entry, pte);
                 break;
             default:
                 BUG();
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 15/16] xen/arm: mm: Use __func__ rather than plain name in format string
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (13 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 14/16] xen/arm: mm: Introduce temporary variable " Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 15:54 ` [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool Julien Grall
  2017-06-30 20:16 ` [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Stefano Stabellini
  16 siblings, 0 replies; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---

    Changes in v2:
        - Patch added

    Changes in v3:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index ea4a35ed7a..98260f604c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -996,7 +996,7 @@ static int create_xen_entries(enum xenmap_operation op,
         {
             rc = create_xen_table(entry);
             if ( rc < 0 ) {
-                printk("create_xen_entries: L2 failed\n");
+                printk("%s: L2 failed\n", __func__);
                 goto out;
             }
         }
@@ -1011,8 +1011,8 @@ static int create_xen_entries(enum xenmap_operation op,
             case RESERVE:
                 if ( lpae_valid(*entry) )
                 {
-                    printk("create_xen_entries: trying to replace an existing mapping addr=%lx mfn=%"PRI_mfn"\n",
-                           addr, mfn_x(mfn));
+                    printk("%s: trying to replace an existing mapping addr=%lx mfn=%"PRI_mfn"\n",
+                           __func__, addr, mfn_x(mfn));
                     return -EINVAL;
                 }
                 if ( op == RESERVE )
@@ -1025,8 +1025,8 @@ static int create_xen_entries(enum xenmap_operation op,
             case REMOVE:
                 if ( !lpae_valid(*entry) )
                 {
-                    printk("create_xen_entries: trying to %s a non-existing mapping addr=%lx\n",
-                           op == REMOVE ? "remove" : "modify", addr);
+                    printk("%s: trying to %s a non-existing mapping addr=%lx\n",
+                           __func__, op == REMOVE ? "remove" : "modify", addr);
                     return -EINVAL;
                 }
                 if ( op == REMOVE )
@@ -1038,8 +1038,8 @@ static int create_xen_entries(enum xenmap_operation op,
                     pte.pt.xn = PTE_NX_MASK(ai);
                     if ( !pte.pt.ro && !pte.pt.xn )
                     {
-                        printk("create_xen_entries: Incorrect combination for addr=%lx\n",
-                               addr);
+                        printk("%s: Incorrect combination for addr=%lx\n",
+                               __func__, addr);
                         return -EINVAL;
                     }
                 }
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (14 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 15/16] xen/arm: mm: Use __func__ rather than plain name in format string Julien Grall
@ 2017-06-30 15:54 ` Julien Grall
  2017-06-30 20:11   ` Stefano Stabellini
  2017-06-30 20:16 ` [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Stefano Stabellini
  16 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 15:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

We are phasing out the use of bool_t in the hypervisor code.

---
    Changes in v3:
        - Patch added
---
 xen/include/asm-arm/lpae.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index 6fbf7c606c..a62b118630 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -126,7 +126,7 @@ typedef union {
     lpae_walk_t walk;
 } lpae_t;
 
-static inline bool_t lpae_valid(lpae_t pte)
+static inline bool lpae_valid(lpae_t pte)
 {
     return pte.walk.valid;
 }
@@ -136,12 +136,12 @@ static inline bool_t lpae_valid(lpae_t pte)
  * the table bit and therefore these would return the opposite to what
  * you would expect.
  */
-static inline bool_t lpae_table(lpae_t pte)
+static inline bool lpae_table(lpae_t pte)
 {
     return lpae_valid(pte) && pte.walk.table;
 }
 
-static inline bool_t lpae_mapping(lpae_t pte)
+static inline bool lpae_mapping(lpae_t pte)
 {
     return lpae_valid(pte) && !pte.walk.table;
 }
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 01/16] xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64
  2017-06-30 15:54 ` [PATCH v3 01/16] xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64 Julien Grall
@ 2017-06-30 20:07   ` Stefano Stabellini
  0 siblings, 0 replies; 28+ messages in thread
From: Stefano Stabellini @ 2017-06-30 20:07 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Fri, 30 Jun 2017, Julien Grall wrote:
> xenheap_mfn_end is storing an MFN and not a physical address. The value will
> be reset after the loop. So drop this bogus xenheap_mfn_end.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---
>     Changes in v2:
>         - Update commit message
> 
>     Changes in v3:
>         - Don't mention the fact that Xen is not currently using
>         xenheap_mfn_end.
>         - Add Stefano's reviewed-by

You didn't actually add my reviewed-by :-)
But that's OK


> ---
>  xen/arch/arm/setup.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index f00f29a45b..ab4d8e4218 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -654,8 +654,6 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
>              if ( e > bank_end )
>                  e = bank_end;
>  
> -            xenheap_mfn_end = e;
> -
>              dt_unreserved_regions(s, e, init_boot_pages, 0);
>              s = n;
>          }
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 03/16] xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe
  2017-06-30 15:54 ` [PATCH v3 03/16] xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe Julien Grall
@ 2017-06-30 20:08   ` Stefano Stabellini
  0 siblings, 0 replies; 28+ messages in thread
From: Stefano Stabellini @ 2017-06-30 20:08 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Fri, 30 Jun 2017, Julien Grall wrote:
> The file xen/arch/arm/p2m.c is using typesafe MFN in most places.
> This requires callers of mfn_to_page and page_to_mfn to use _mfn/mfn_x.
> 
> To avoid extra _mfn/mfn_x, re-define mfn_to_page and page_to_mfn within
> xen/arch/arm/p2m.c to handle typesafe MFN.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     The idea behind locally redefining mfn_to_page and page_to_mfn is
>     to split the introduction of typesafe MFN into smaller series and
>     between multiple people. We know the file is typesafe-ready, and if
>     we decide to make the main helpers typesafe, we would just need to
>     drop the definitions at the top of the file.
> 
>     Changes in v3:
>         - Use __page_to_mfn(...) rather than mfn_x(page_to_mfn(...))
> ---
>  xen/arch/arm/p2m.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 05eb584e89..3e20a1ec82 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -38,6 +38,12 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>  
>  #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
>  
> +/* Override macros from asm/mm.h to make them work with mfn_t */
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> +#undef page_to_mfn
> +#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> +
>  unsigned int __read_mostly p2m_ipa_bits;
>  
>  /* Helpers to lookup the properties of each level */
> @@ -115,7 +121,7 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>      printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
>  
>      printk("P2M @ %p mfn:0x%lx\n",
> -           p2m->root, page_to_mfn(p2m->root));
> +           p2m->root, __page_to_mfn(p2m->root));
>  
>      dump_pt_walk(page_to_maddr(p2m->root), addr,
>                   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
> @@ -591,7 +597,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
>       * The access value does not matter because the hardware will ignore
>       * the permission fields for table entry.
>       */
> -    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
> +    pte = mfn_to_p2m_entry(page_to_mfn(page), p2m_invalid,
>                             p2m->default_access);
>  
>      p2m_write_pte(entry, pte, p2m->clean_pte);
> @@ -650,9 +656,9 @@ static void p2m_put_l3_page(const lpae_t pte)
>       */
>      if ( p2m_is_foreign(pte.p2m.type) )
>      {
> -        unsigned long mfn = pte.p2m.base;
> +        mfn_t mfn = _mfn(pte.p2m.base);
>  
> -        ASSERT(mfn_valid(_mfn(mfn)));
> +        ASSERT(mfn_valid(mfn));
>          put_page(mfn_to_page(mfn));
>      }
>  }
> @@ -702,7 +708,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>      mfn = _mfn(entry.p2m.base);
>      ASSERT(mfn_valid(mfn));
>  
> -    pg = mfn_to_page(mfn_x(mfn));
> +    pg = mfn_to_page(mfn);
>  
>      page_list_del(pg, &p2m->pages);
>      free_domheap_page(pg);
> @@ -780,7 +786,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
>  
>      unmap_domain_page(table);
>  
> -    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
> +    pte = mfn_to_p2m_entry(page_to_mfn(page), p2m_invalid,
>                             p2m->default_access);
>  
>      /*
> @@ -1442,7 +1448,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>      if ( !mfn_valid(maddr_to_mfn(maddr)) )
>          goto err;
>  
> -    page = mfn_to_page(mfn_x(maddr_to_mfn(maddr)));
> +    page = mfn_to_page(maddr_to_mfn(maddr));
>      ASSERT(page);
>  
>      if ( unlikely(!get_page(page, d)) )
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool
  2017-06-30 15:54 ` [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool Julien Grall
@ 2017-06-30 20:11   ` Stefano Stabellini
  2017-06-30 20:20     ` Julien Grall
  0 siblings, 1 reply; 28+ messages in thread
From: Stefano Stabellini @ 2017-06-30 20:11 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Fri, 30 Jun 2017, Julien Grall wrote:
> We are phasing out the use of bool_t in the hypervisor code.

This is missing your signed-off-by.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v3:
>         - Patch added
> ---
>  xen/include/asm-arm/lpae.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> index 6fbf7c606c..a62b118630 100644
> --- a/xen/include/asm-arm/lpae.h
> +++ b/xen/include/asm-arm/lpae.h
> @@ -126,7 +126,7 @@ typedef union {
>      lpae_walk_t walk;
>  } lpae_t;
>  
> -static inline bool_t lpae_valid(lpae_t pte)
> +static inline bool lpae_valid(lpae_t pte)
>  {
>      return pte.walk.valid;
>  }
> @@ -136,12 +136,12 @@ static inline bool_t lpae_valid(lpae_t pte)
>   * the table bit and therefore these would return the opposite to what
>   * you would expect.
>   */
> -static inline bool_t lpae_table(lpae_t pte)
> +static inline bool lpae_table(lpae_t pte)
>  {
>      return lpae_valid(pte) && pte.walk.table;
>  }
>  
> -static inline bool_t lpae_mapping(lpae_t pte)
> +static inline bool lpae_mapping(lpae_t pte)
>  {
>      return lpae_valid(pte) && !pte.walk.table;
>  }
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 00/16] xen/arm: Clean-up memory subsystems
  2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
                   ` (15 preceding siblings ...)
  2017-06-30 15:54 ` [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool Julien Grall
@ 2017-06-30 20:16 ` Stefano Stabellini
  16 siblings, 0 replies; 28+ messages in thread
From: Stefano Stabellini @ 2017-06-30 20:16 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

I committed the first 15 patches (all but the last)

On Fri, 30 Jun 2017, Julien Grall wrote:
> Hi all,
> 
> This series, clean-up a bit the memory subsystems to ease further changes.
> 
> Cheers,
> 
> Julien Grall (16):
>   xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64
>   xen/arm: mm: Use typesafe mfn for xenheap_mfn_*
>   xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe
>   xen/arm: mm: Redefine virt_to_mfn to support typesafe
>   xen/arm: domain_build: Redefine virt_to_mfn to support typesafe
>   xen/arm: alternative: Redefine virt_to_mfn to support typesafe
>   xen/arm: livepatch: Redefine virt_to_mfn to support typesafe
>   xen/arm: create_xen_entries: Use typesafe MFN
>   xen/arm: Move LPAE definition in a separate header
>   xen/arm: lpae: Fix comments coding style
>   xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and
>     p2m_is_superpage
>   xen/arm: p2m: Move lpae_* helpers in lpae.h
>   xen/arm: mm: Use lpae_valid and lpae_table in create_xen_entries
>   xen/arm: mm: Introduce temporary variable in create_xen_entries
>   xen/arm: mm: Use __func__ rather than plain name in format string
>   xen/arm: lpae: Switch from bool_t to bool
> 
>  xen/arch/arm/alternative.c  |   6 +-
>  xen/arch/arm/domain_build.c |   6 +-
>  xen/arch/arm/livepatch.c    |   6 +-
>  xen/arch/arm/mm.c           |  84 +++++++++---------
>  xen/arch/arm/p2m.c          |  72 ++++++---------
>  xen/arch/arm/setup.c        |  20 ++---
>  xen/include/asm-arm/lpae.h  | 209 ++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/mm.h    |  14 +--
>  xen/include/asm-arm/page.h  | 152 +-------------------------------
>  9 files changed, 315 insertions(+), 254 deletions(-)
>  create mode 100644 xen/include/asm-arm/lpae.h
> 
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool
  2017-06-30 20:11   ` Stefano Stabellini
@ 2017-06-30 20:20     ` Julien Grall
  2017-06-30 21:20       ` Stefano Stabellini
  0 siblings, 1 reply; 28+ messages in thread
From: Julien Grall @ 2017-06-30 20:20 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1640 bytes --]

On Fri, 30 Jun 2017, 21:12 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Fri, 30 Jun 2017, Julien Grall wrote:
> > We are phasing out the use of bool_t in the hypervisor code.
>
> This is missing your signed-off-by.
>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>

Doh.

Signed-off-by: Julien Grall <julien.grall@arm.com>

Shall I resend the patch?

Cheers,


> > ---
> >     Changes in v3:
> >         - Patch added
> > ---
> >  xen/include/asm-arm/lpae.h | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> > index 6fbf7c606c..a62b118630 100644
> > --- a/xen/include/asm-arm/lpae.h
> > +++ b/xen/include/asm-arm/lpae.h
> > @@ -126,7 +126,7 @@ typedef union {
> >      lpae_walk_t walk;
> >  } lpae_t;
> >
> > -static inline bool_t lpae_valid(lpae_t pte)
> > +static inline bool lpae_valid(lpae_t pte)
> >  {
> >      return pte.walk.valid;
> >  }
> > @@ -136,12 +136,12 @@ static inline bool_t lpae_valid(lpae_t pte)
> >   * the table bit and therefore these would return the opposite to what
> >   * you would expect.
> >   */
> > -static inline bool_t lpae_table(lpae_t pte)
> > +static inline bool lpae_table(lpae_t pte)
> >  {
> >      return lpae_valid(pte) && pte.walk.table;
> >  }
> >
> > -static inline bool_t lpae_mapping(lpae_t pte)
> > +static inline bool lpae_mapping(lpae_t pte)
> >  {
> >      return lpae_valid(pte) && !pte.walk.table;
> >  }
> > --
> > 2.11.0
> >
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel
>

[-- Attachment #1.2: Type: text/html, Size: 2698 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool
  2017-06-30 20:20     ` Julien Grall
@ 2017-06-30 21:20       ` Stefano Stabellini
  0 siblings, 0 replies; 28+ messages in thread
From: Stefano Stabellini @ 2017-06-30 21:20 UTC (permalink / raw)
  To: Julien Grall; +Cc: Julien Grall, Stefano Stabellini, xen-devel

[-- Attachment #1: Type: TEXT/PLAIN, Size: 2029 bytes --]

On Fri, 30 Jun 2017, Julien Grall wrote:
> On Fri, 30 Jun 2017, 21:12 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>       On Fri, 30 Jun 2017, Julien Grall wrote:
>       > We are phasing out the use of bool_t in the hypervisor code.
> 
>       This is missing your signed-off-by.
> 
>       Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> 
> Doh.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> Shall I resend the patch?

No, I can fix this.

> 
>       > ---
>       >     Changes in v3:
>       >         - Patch added
>       > ---
>       >  xen/include/asm-arm/lpae.h | 6 +++---
>       >  1 file changed, 3 insertions(+), 3 deletions(-)
>       >
>       > diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
>       > index 6fbf7c606c..a62b118630 100644
>       > --- a/xen/include/asm-arm/lpae.h
>       > +++ b/xen/include/asm-arm/lpae.h
>       > @@ -126,7 +126,7 @@ typedef union {
>       >      lpae_walk_t walk;
>       >  } lpae_t;
>       >
>       > -static inline bool_t lpae_valid(lpae_t pte)
>       > +static inline bool lpae_valid(lpae_t pte)
>       >  {
>       >      return pte.walk.valid;
>       >  }
>       > @@ -136,12 +136,12 @@ static inline bool_t lpae_valid(lpae_t pte)
>       >   * the table bit and therefore these would return the opposite to what
>       >   * you would expect.
>       >   */
>       > -static inline bool_t lpae_table(lpae_t pte)
>       > +static inline bool lpae_table(lpae_t pte)
>       >  {
>       >      return lpae_valid(pte) && pte.walk.table;
>       >  }
>       >
>       > -static inline bool_t lpae_mapping(lpae_t pte)
>       > +static inline bool lpae_mapping(lpae_t pte)
>       >  {
>       >      return lpae_valid(pte) && !pte.walk.table;
>       >  }
>       > --
>       > 2.11.0
>       >
> 
>       _______________________________________________
>       Xen-devel mailing list
>       Xen-devel@lists.xen.org
>       https://lists.xen.org/xen-devel
> 
> 
> 

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 10/16] xen/arm: lpae: Fix comments coding style
  2017-06-30 15:54 ` [PATCH v3 10/16] xen/arm: lpae: Fix comments coding style Julien Grall
@ 2017-07-02 17:13   ` Sergej Proskurin
  0 siblings, 0 replies; 28+ messages in thread
From: Sergej Proskurin @ 2017-07-02 17:13 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini

Hi Julien,

On 06/30/2017 05:54 PM, Julien Grall wrote:
> Also add one missing full stop and fix the description
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> 
> Cc: proskurin@sec.in.tum.de
> 
>     I haven't retained Stefano's reviewed-by because of the description
>     update.
> 
>    Changes in v2:
>         - Fix description regarding x86 page-table
> 
>     Changes in v3:
>         - Add Stefano's reviewed-by


Reviewed-by: Sergej Proskurin <proskurin@sec.in.tum.de>


> ---
>  xen/include/asm-arm/lpae.h | 49 ++++++++++++++++++++++++++++++----------------
>  1 file changed, 32 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> index ad8c571ea5..aa85cb8112 100644
> --- a/xen/include/asm-arm/lpae.h
> +++ b/xen/include/asm-arm/lpae.h
> @@ -3,10 +3,12 @@
>  
>  #ifndef __ASSEMBLY__
>  
> -/* WARNING!  Unlike the Intel pagetable code, where l1 is the lowest
> - * level and l4 is the root of the trie, the ARM pagetables follow ARM's
> - * documentation: the levels are called first, second &c in the order
> - * that the MMU walks them (i.e. "first" is the root of the trie). */
> +/*
> + * WARNING!  Unlike the x86 pagetable code, where l1 is the lowest level and
> + * l4 is the root of the trie, the ARM pagetables follow ARM's documentation:
> + * the levels are called first, second &c in the order that the MMU walks them
> + * (i.e. "first" is the root of the trie).
> + */
>  
>  /******************************************************************************
>   * ARMv7-A LPAE pagetables: 3-level trie, mapping 40-bit input to
> @@ -17,15 +19,18 @@
>   * different place from those in leaf nodes seems to be to allow linear
>   * pagetable tricks.  If we're not doing that then the set of permission
>   * bits that's not in use in a given node type can be used as
> - * extra software-defined bits. */
> + * extra software-defined bits.
> + */
>  
>  typedef struct __packed {
>      /* These are used in all kinds of entry. */
>      unsigned long valid:1;      /* Valid mapping */
>      unsigned long table:1;      /* == 1 in 4k map entries too */
>  
> -    /* These ten bits are only used in Block entries and are ignored
> -     * in Table entries. */
> +    /*
> +     * These ten bits are only used in Block entries and are ignored
> +     * in Table entries.
> +     */
>      unsigned long ai:3;         /* Attribute Index */
>      unsigned long ns:1;         /* Not-Secure */
>      unsigned long user:1;       /* User-visible */
> @@ -38,30 +43,38 @@ typedef struct __packed {
>      unsigned long long base:36; /* Base address of block or next table */
>      unsigned long sbz:4;        /* Must be zero */
>  
> -    /* These seven bits are only used in Block entries and are ignored
> -     * in Table entries. */
> +    /*
> +     * These seven bits are only used in Block entries and are ignored
> +     * in Table entries.
> +     */
>      unsigned long contig:1;     /* In a block of 16 contiguous entries */
>      unsigned long pxn:1;        /* Privileged-XN */
>      unsigned long xn:1;         /* eXecute-Never */
>      unsigned long avail:4;      /* Ignored by hardware */
>  
> -    /* These 5 bits are only used in Table entries and are ignored in
> -     * Block entries */
> +    /*
> +     * These 5 bits are only used in Table entries and are ignored in
> +     * Block entries.
> +     */
>      unsigned long pxnt:1;       /* Privileged-XN */
>      unsigned long xnt:1;        /* eXecute-Never */
>      unsigned long apt:2;        /* Access Permissions */
>      unsigned long nst:1;        /* Not-Secure */
>  } lpae_pt_t;
>  
> -/* The p2m tables have almost the same layout, but some of the permission
> - * and cache-control bits are laid out differently (or missing) */
> +/*
> + * The p2m tables have almost the same layout, but some of the permission
> + * and cache-control bits are laid out differently (or missing).
> + */
>  typedef struct __packed {
>      /* These are used in all kinds of entry. */
>      unsigned long valid:1;      /* Valid mapping */
>      unsigned long table:1;      /* == 1 in 4k map entries too */
>  
> -    /* These ten bits are only used in Block entries and are ignored
> -     * in Table entries. */
> +    /*
> +     * These ten bits are only used in Block entries and are ignored
> +     * in Table entries.
> +     */
>      unsigned long mattr:4;      /* Memory Attributes */
>      unsigned long read:1;       /* Read access */
>      unsigned long write:1;      /* Write access */
> @@ -73,8 +86,10 @@ typedef struct __packed {
>      unsigned long long base:36; /* Base address of block or next table */
>      unsigned long sbz3:4;
>  
> -    /* These seven bits are only used in Block entries and are ignored
> -     * in Table entries. */
> +    /*
> +     * These seven bits are only used in Block entries and are ignored
> +     * in Table entries.
> +     */
>      unsigned long contig:1;     /* In a block of 16 contiguous entries */
>      unsigned long sbz2:1;
>      unsigned long xn:1;         /* eXecute-Never */
> 

-- 
Sergej Proskurin, M.Sc.
Wissenschaftlicher Mitarbeiter

Technische Universität München
Fakultät für Informatik
Lehrstuhl für Sicherheit in der Informatik

Boltzmannstraße 3
85748 Garching (bei München)

Tel. +49 (0)89 289-18592
Fax +49 (0)89 289-18579

proskurin@sec.in.tum.de
www.sec.in.tum.de

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 11/16] xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage
  2017-06-30 15:54 ` [PATCH v3 11/16] xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage Julien Grall
@ 2017-07-02 17:17   ` Sergej Proskurin
  0 siblings, 0 replies; 28+ messages in thread
From: Sergej Proskurin @ 2017-07-02 17:17 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini

Hi Julien,

On 06/30/2017 05:54 PM, Julien Grall wrote:
> The helpers p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage are
> not specific to the stage-2 translation tables. They can also work on
> any LPAE translation tables. So rename them to lpae_* and use pte.walk
> to look for the value of the field.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> 
> Cc: proskurin@sec.in.tum.de
> 
>     s/bool_t/bool/ will be done in a separate patch
> 
>     Changes in v2:
>         - Patch added
> 
>     Changes in v3:
>         - Add Stefano's reviewed-by


Reviewed-by: Sergej Proskurin <proskurin@sec.in.tum.de>


> ---
>  xen/arch/arm/p2m.c | 45 +++++++++++++++++++++++----------------------
>  1 file changed, 23 insertions(+), 22 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 3e20a1ec82..381df1f237 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -52,27 +52,27 @@ static const paddr_t level_masks[] =
>  static const uint8_t level_orders[] =
>      { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
>  
> -static inline bool_t p2m_valid(lpae_t pte)
> +static inline bool_t lpae_valid(lpae_t pte)
>  {
> -    return pte.p2m.valid;
> +    return pte.walk.valid;
>  }
>  /*
>   * These two can only be used on L0..L2 ptes because L3 mappings set
>   * the table bit and therefore these would return the opposite to what
>   * you would expect.
>   */
> -static inline bool_t p2m_table(lpae_t pte)
> +static inline bool_t lpae_table(lpae_t pte)
>  {
> -    return p2m_valid(pte) && pte.p2m.table;
> +    return lpae_valid(pte) && pte.walk.table;
>  }
> -static inline bool_t p2m_mapping(lpae_t pte)
> +static inline bool_t lpae_mapping(lpae_t pte)
>  {
> -    return p2m_valid(pte) && !pte.p2m.table;
> +    return lpae_valid(pte) && !pte.walk.table;
>  }
>  
> -static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
> +static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>  {
> -    return (level < 3) && p2m_mapping(pte);
> +    return (level < 3) && lpae_mapping(pte);
>  }
>  
>  static void p2m_flush_tlb(struct p2m_domain *p2m);
> @@ -281,7 +281,7 @@ static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
>  
>      entry = *table + offset;
>  
> -    if ( !p2m_valid(*entry) )
> +    if ( !lpae_valid(*entry) )
>      {
>          if ( read_only )
>              return GUEST_TABLE_MAP_FAILED;
> @@ -292,7 +292,7 @@ static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
>      }
>  
>      /* The function p2m_next_level is never called at the 3rd level */
> -    if ( p2m_mapping(*entry) )
> +    if ( lpae_mapping(*entry) )
>          return GUEST_TABLE_SUPER_PAGE;
>  
>      mfn = _mfn(entry->p2m.base);
> @@ -372,7 +372,7 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
>  
>      entry = table[offsets[level]];
>  
> -    if ( p2m_valid(entry) )
> +    if ( lpae_valid(entry) )
>      {
>          *t = entry.p2m.type;
>  
> @@ -577,7 +577,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
>      lpae_t *p;
>      lpae_t pte;
>  
> -    ASSERT(!p2m_valid(*entry));
> +    ASSERT(!lpae_valid(*entry));
>  
>      page = alloc_domheap_page(NULL, 0);
>      if ( page == NULL )
> @@ -645,7 +645,7 @@ enum p2m_operation {
>   */
>  static void p2m_put_l3_page(const lpae_t pte)
>  {
> -    ASSERT(p2m_valid(pte));
> +    ASSERT(lpae_valid(pte));
>  
>      /*
>       * TODO: Handle other p2m types
> @@ -673,11 +673,11 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>      struct page_info *pg;
>  
>      /* Nothing to do if the entry is invalid. */
> -    if ( !p2m_valid(entry) )
> +    if ( !lpae_valid(entry) )
>          return;
>  
>      /* Nothing to do but updating the stats if the entry is a super-page. */
> -    if ( p2m_is_superpage(entry, level) )
> +    if ( lpae_is_superpage(entry, level) )
>      {
>          p2m->stats.mappings[level]--;
>          return;
> @@ -733,7 +733,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
>       * a superpage.
>       */
>      ASSERT(level < target);
> -    ASSERT(p2m_is_superpage(*entry, level));
> +    ASSERT(lpae_is_superpage(*entry, level));
>  
>      page = alloc_domheap_page(NULL, 0);
>      if ( !page )
> @@ -870,7 +870,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>          /* We need to split the original page. */
>          lpae_t split_pte = *entry;
>  
> -        ASSERT(p2m_is_superpage(*entry, level));
> +        ASSERT(lpae_is_superpage(*entry, level));
>  
>          if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
>          {
> @@ -944,12 +944,12 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>       * sequence when updating the translation table (D4.7.1 in ARM DDI
>       * 0487A.j).
>       */
> -    if ( p2m_valid(orig_pte) )
> +    if ( lpae_valid(orig_pte) )
>          p2m_remove_pte(entry, p2m->clean_pte);
>  
>      if ( mfn_eq(smfn, INVALID_MFN) )
>          /* Flush can be deferred if the entry is removed */
> -        p2m->need_flush |= !!p2m_valid(orig_pte);
> +        p2m->need_flush |= !!lpae_valid(orig_pte);
>      else
>      {
>          lpae_t pte = mfn_to_p2m_entry(smfn, t, a);
> @@ -964,7 +964,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>           * Although, it could be defered when only the permissions are
>           * changed (e.g in case of memaccess).
>           */
> -        if ( p2m_valid(orig_pte) )
> +        if ( lpae_valid(orig_pte) )
>          {
>              if ( likely(!p2m->mem_access_enabled) ||
>                   P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
> @@ -986,10 +986,11 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>       * Free the entry only if the original pte was valid and the base
>       * is different (to avoid freeing when permission is changed).
>       */
> -    if ( p2m_valid(orig_pte) && entry->p2m.base != orig_pte.p2m.base )
> +    if ( lpae_valid(orig_pte) && entry->p2m.base != orig_pte.p2m.base )
>          p2m_free_entry(p2m, orig_pte, level);
>  
> -    if ( need_iommu(p2m->domain) && (p2m_valid(orig_pte) || p2m_valid(*entry)) )
> +    if ( need_iommu(p2m->domain) &&
> +         (lpae_valid(orig_pte) || lpae_valid(*entry)) )
>          rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
>      else
>          rc = 0;
> 

Cheers,
~Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 12/16] xen/arm: p2m: Move lpae_* helpers in lpae.h
  2017-06-30 15:54 ` [PATCH v3 12/16] xen/arm: p2m: Move lpae_* helpers in lpae.h Julien Grall
@ 2017-07-02 17:20   ` Sergej Proskurin
  0 siblings, 0 replies; 28+ messages in thread
From: Sergej Proskurin @ 2017-07-02 17:20 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini

Hi Julien,

On 06/30/2017 05:54 PM, Julien Grall wrote:
> The lpae_* helpers can work on any LPAE translation table. Move them to
> lpae.h to allow other parts of Xen to use them.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> 
> Cc: proskurin@sec.in.tum.de
> 
>     Changes in v2:
>         - Patch added
> 
>     Changes in v3:
>         - Add Stefano's reviewed-by


Reviewed-by: Sergej Proskurin <proskurin@sec.in.tum.de>


> ---
>  xen/arch/arm/p2m.c         | 23 -----------------------
>  xen/include/asm-arm/lpae.h | 25 +++++++++++++++++++++++++
>  2 files changed, 25 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 381df1f237..9b7a580a87 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -52,29 +52,6 @@ static const paddr_t level_masks[] =
>  static const uint8_t level_orders[] =
>      { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
>  
> -static inline bool_t lpae_valid(lpae_t pte)
> -{
> -    return pte.walk.valid;
> -}
> -/*
> - * These two can only be used on L0..L2 ptes because L3 mappings set
> - * the table bit and therefore these would return the opposite to what
> - * you would expect.
> - */
> -static inline bool_t lpae_table(lpae_t pte)
> -{
> -    return lpae_valid(pte) && pte.walk.table;
> -}
> -static inline bool_t lpae_mapping(lpae_t pte)
> -{
> -    return lpae_valid(pte) && !pte.walk.table;
> -}
> -
> -static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
> -{
> -    return (level < 3) && lpae_mapping(pte);
> -}
> -
>  static void p2m_flush_tlb(struct p2m_domain *p2m);
>  
>  /* Unlock the flush and do a P2M TLB flush if necessary */
> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> index aa85cb8112..6fbf7c606c 100644
> --- a/xen/include/asm-arm/lpae.h
> +++ b/xen/include/asm-arm/lpae.h
> @@ -126,6 +126,31 @@ typedef union {
>      lpae_walk_t walk;
>  } lpae_t;
>  
> +static inline bool_t lpae_valid(lpae_t pte)
> +{
> +    return pte.walk.valid;
> +}
> +
> +/*
> + * These two can only be used on L0..L2 ptes because L3 mappings set
> + * the table bit and therefore these would return the opposite to what
> + * you would expect.
> + */
> +static inline bool_t lpae_table(lpae_t pte)
> +{
> +    return lpae_valid(pte) && pte.walk.table;
> +}
> +
> +static inline bool_t lpae_mapping(lpae_t pte)
> +{
> +    return lpae_valid(pte) && !pte.walk.table;
> +}
> +
> +static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
> +{
> +    return (level < 3) && lpae_mapping(pte);
> +}
> +
>  #endif /* __ASSEMBLY__ */
>  
>  /*
> 

Cheers,
~Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 07/16] xen/arm: livepatch: Redefine virt_to_mfn to support typesafe
  2017-06-30 15:54 ` [PATCH v3 07/16] xen/arm: livepatch: " Julien Grall
@ 2017-07-03  8:21   ` Ross Lagerwall
  0 siblings, 0 replies; 28+ messages in thread
From: Ross Lagerwall @ 2017-07-03  8:21 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini

On 06/30/2017 04:54 PM, Julien Grall wrote:
> The file xen/arch/arm/livepatch.c is using typesafe MFN in most
> places. The only caller of virt_to_mfn is using it with _mfn(...).
> 
> To avoid extra _mfn(...), re-define virt_to_mfn within
> xen/arch/arm/livepatch.c to handle typesafe MFN.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
> 
> Cc: Ross Lagerwall <ross.lagerwall@citrix.com>

Acked-by: Ross Lagerwall <ross.lagerwall@citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 09/16] xen/arm: Move LPAE definition in a separate header
  2017-06-30 15:54 ` [PATCH v3 09/16] xen/arm: Move LPAE definition in a separate header Julien Grall
@ 2017-07-03  8:55   ` Sergej Proskurin
  0 siblings, 0 replies; 28+ messages in thread
From: Sergej Proskurin @ 2017-07-03  8:55 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini


[-- Attachment #1.1: Type: text/plain, Size: 15954 bytes --]

Hi Julien,


On 06/30/2017 05:54 PM, Julien Grall wrote:
> page.h is getting bigger. Move every LPAE definition out into a separate
> header. There are no functional changes.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
> ---
>
> Cc: proskurin@sec.in.tum.de
>
>     Changes in v2:
>         - Move comment after the #endif rather than before
>         - Add Stefano's reviewed-by

Reviewed-by: Sergej Proskurin <proskurin@sec.in.tum.de>


> ---
>  xen/include/asm-arm/lpae.h | 169 +++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/page.h | 152 +---------------------------------------
>  2 files changed, 170 insertions(+), 151 deletions(-)
>  create mode 100644 xen/include/asm-arm/lpae.h
>
> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> new file mode 100644
> index 0000000000..ad8c571ea5
> --- /dev/null
> +++ b/xen/include/asm-arm/lpae.h
> @@ -0,0 +1,169 @@
> +#ifndef __ARM_LPAE_H__
> +#define __ARM_LPAE_H__
> +
> +#ifndef __ASSEMBLY__
> +
> +/* WARNING!  Unlike the Intel pagetable code, where l1 is the lowest
> + * level and l4 is the root of the trie, the ARM pagetables follow ARM's
> + * documentation: the levels are called first, second &c in the order
> + * that the MMU walks them (i.e. "first" is the root of the trie). */
> +
> +/******************************************************************************
> + * ARMv7-A LPAE pagetables: 3-level trie, mapping 40-bit input to
> + * 40-bit output addresses.  Tables at all levels have 512 64-bit entries
> + * (i.e. are 4Kb long).
> + *
> + * The bit-shuffling that has the permission bits in branch nodes in a
> + * different place from those in leaf nodes seems to be to allow linear
> + * pagetable tricks.  If we're not doing that then the set of permission
> + * bits that's not in use in a given node type can be used as
> + * extra software-defined bits. */
> +
> +typedef struct __packed {
> +    /* These are used in all kinds of entry. */
> +    unsigned long valid:1;      /* Valid mapping */
> +    unsigned long table:1;      /* == 1 in 4k map entries too */
> +
> +    /* These ten bits are only used in Block entries and are ignored
> +     * in Table entries. */
> +    unsigned long ai:3;         /* Attribute Index */
> +    unsigned long ns:1;         /* Not-Secure */
> +    unsigned long user:1;       /* User-visible */
> +    unsigned long ro:1;         /* Read-Only */
> +    unsigned long sh:2;         /* Shareability */
> +    unsigned long af:1;         /* Access Flag */
> +    unsigned long ng:1;         /* Not-Global */
> +
> +    /* The base address must be appropriately aligned for Block entries */
> +    unsigned long long base:36; /* Base address of block or next table */
> +    unsigned long sbz:4;        /* Must be zero */
> +
> +    /* These seven bits are only used in Block entries and are ignored
> +     * in Table entries. */
> +    unsigned long contig:1;     /* In a block of 16 contiguous entries */
> +    unsigned long pxn:1;        /* Privileged-XN */
> +    unsigned long xn:1;         /* eXecute-Never */
> +    unsigned long avail:4;      /* Ignored by hardware */
> +
> +    /* These 5 bits are only used in Table entries and are ignored in
> +     * Block entries */
> +    unsigned long pxnt:1;       /* Privileged-XN */
> +    unsigned long xnt:1;        /* eXecute-Never */
> +    unsigned long apt:2;        /* Access Permissions */
> +    unsigned long nst:1;        /* Not-Secure */
> +} lpae_pt_t;
> +
> +/* The p2m tables have almost the same layout, but some of the permission
> + * and cache-control bits are laid out differently (or missing) */
> +typedef struct __packed {
> +    /* These are used in all kinds of entry. */
> +    unsigned long valid:1;      /* Valid mapping */
> +    unsigned long table:1;      /* == 1 in 4k map entries too */
> +
> +    /* These ten bits are only used in Block entries and are ignored
> +     * in Table entries. */
> +    unsigned long mattr:4;      /* Memory Attributes */
> +    unsigned long read:1;       /* Read access */
> +    unsigned long write:1;      /* Write access */
> +    unsigned long sh:2;         /* Shareability */
> +    unsigned long af:1;         /* Access Flag */
> +    unsigned long sbz4:1;
> +
> +    /* The base address must be appropriately aligned for Block entries */
> +    unsigned long long base:36; /* Base address of block or next table */
> +    unsigned long sbz3:4;
> +
> +    /* These seven bits are only used in Block entries and are ignored
> +     * in Table entries. */
> +    unsigned long contig:1;     /* In a block of 16 contiguous entries */
> +    unsigned long sbz2:1;
> +    unsigned long xn:1;         /* eXecute-Never */
> +    unsigned long type:4;       /* Ignored by hardware. Used to store p2m types */
> +
> +    unsigned long sbz1:5;
> +} lpae_p2m_t;
> +
> +/* Permission mask: xn, write, read */
> +#define P2M_PERM_MASK (0x00400000000000C0ULL)
> +#define P2M_CLEAR_PERM(pte) ((pte).bits & ~P2M_PERM_MASK)
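> +
> +/*
> + * The mask follows from the lpae_p2m_t layout above: read is bit 6
> + * (0x40), write is bit 7 (0x80) and xn is bit 54 (0x0040000000000000),
> + * which together give 0x00400000000000C0.
> + *
> + * A minimal usage sketch (the local variable "pte" is illustrative
> + * only):
> + *
> + *   lpae_t pte = ...;                /* entry read from a p2m table */
> + *   pte.bits = P2M_CLEAR_PERM(pte);  /* xn/write/read are now all zero */
> + */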
> +
> +/*
> + * Walk is the common bits of p2m and pt entries which are needed to
> + * simply walk the table (e.g. for debug).
> + */
> +typedef struct __packed {
> +    /* These are used in all kinds of entry. */
> +    unsigned long valid:1;      /* Valid mapping */
> +    unsigned long table:1;      /* == 1 in 4k map entries too */
> +
> +    unsigned long pad2:10;
> +
> +    /* The base address must be appropriately aligned for Block entries */
> +    unsigned long long base:36; /* Base address of block or next table */
> +
> +    unsigned long pad1:16;
> +} lpae_walk_t;
> +
> +typedef union {
> +    uint64_t bits;
> +    lpae_pt_t pt;
> +    lpae_p2m_t p2m;
> +    lpae_walk_t walk;
> +} lpae_t;
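> +
> +/*
> + * A minimal sketch of how a leaf entry is typically decoded during a
> + * walk, assuming a 4KB third-level mapping ("table" and "va" are
> + * illustrative locals, not helpers defined in this header):
> + *
> + *   lpae_t pte = table[third_table_offset(va)];
> + *
> + *   if ( pte.walk.valid )
> + *   {
> + *       paddr_t pa = ((paddr_t)pte.walk.base << PAGE_SHIFT) |
> + *                    (va & ~THIRD_MASK);
> + *       ...
> + *   }
> + */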
> +
> +#endif /* __ASSEMBLY__ */
> +
> +/*
> + * These numbers add up to a 48-bit input address space.
> + *
> + * On 32-bit the zeroeth level does not exist, therefore the total is
> + * 39-bits. The ARMv7-A architecture actually specifies a 40-bit input
> + * address space for the p2m, with an 8K (1024-entry) top-level table.
> + * However Xen only supports 16GB of RAM on 32-bit ARM systems and
> + * therefore 39-bits are sufficient.
> + */
> +
> +#define LPAE_SHIFT      9
> +#define LPAE_ENTRIES    (_AC(1,U) << LPAE_SHIFT)
> +#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
> +
> +#define THIRD_SHIFT    (PAGE_SHIFT)
> +#define THIRD_ORDER    (THIRD_SHIFT - PAGE_SHIFT)
> +#define THIRD_SIZE     ((paddr_t)1 << THIRD_SHIFT)
> +#define THIRD_MASK     (~(THIRD_SIZE - 1))
> +#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
> +#define SECOND_ORDER   (SECOND_SHIFT - PAGE_SHIFT)
> +#define SECOND_SIZE    ((paddr_t)1 << SECOND_SHIFT)
> +#define SECOND_MASK    (~(SECOND_SIZE - 1))
> +#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
> +#define FIRST_ORDER    (FIRST_SHIFT - PAGE_SHIFT)
> +#define FIRST_SIZE     ((paddr_t)1 << FIRST_SHIFT)
> +#define FIRST_MASK     (~(FIRST_SIZE - 1))
> +#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
> +#define ZEROETH_ORDER  (ZEROETH_SHIFT - PAGE_SHIFT)
> +#define ZEROETH_SIZE   ((paddr_t)1 << ZEROETH_SHIFT)
> +#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
> +
> +/* Calculate the offsets into the pagetables for a given VA */
> +#define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
> +#define first_linear_offset(va) ((va) >> FIRST_SHIFT)
> +#define second_linear_offset(va) ((va) >> SECOND_SHIFT)
> +#define third_linear_offset(va) ((va) >> THIRD_SHIFT)
> +
> +#define TABLE_OFFSET(offs) ((unsigned int)(offs) & LPAE_ENTRY_MASK)
> +#define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
> +#define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
> +#define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
> +#define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
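> +
> +/*
> + * With 4KB pages (PAGE_SHIFT == 12) the shifts above resolve to
> + * THIRD_SHIFT == 12, SECOND_SHIFT == 21, FIRST_SHIFT == 30 and
> + * ZEROETH_SHIFT == 39, so the *_table_offset() helpers extract VA bits
> + * [20:12], [29:21], [38:30] and [47:39] respectively. For example,
> + * with va == 0x0040_1234_5678 (an arbitrary illustrative value):
> + *
> + *   zeroeth_table_offset(va) == 0x000
> + *   first_table_offset(va)   == 0x100
> + *   second_table_offset(va)  == 0x091
> + *   third_table_offset(va)   == 0x145
> + */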
> +
> +#endif /* __ARM_LPAE_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 3670ab665d..cef2f28914 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -3,6 +3,7 @@
>  
>  #include <public/xen.h>
>  #include <asm/processor.h>
> +#include <asm/lpae.h>
>  
>  #ifdef CONFIG_ARM_64
>  #define PADDR_BITS              48
> @@ -97,114 +98,6 @@
>  #include <xen/lib.h>
>  #include <asm/system.h>
>  
> -/* WARNING!  Unlike the Intel pagetable code, where l1 is the lowest
> - * level and l4 is the root of the trie, the ARM pagetables follow ARM's
> - * documentation: the levels are called first, second &c in the order
> - * that the MMU walks them (i.e. "first" is the root of the trie). */
> -
> -/******************************************************************************
> - * ARMv7-A LPAE pagetables: 3-level trie, mapping 40-bit input to
> - * 40-bit output addresses.  Tables at all levels have 512 64-bit entries
> - * (i.e. are 4Kb long).
> - *
> - * The bit-shuffling that has the permission bits in branch nodes in a
> - * different place from those in leaf nodes seems to be to allow linear
> - * pagetable tricks.  If we're not doing that then the set of permission
> - * bits that's not in use in a given node type can be used as
> - * extra software-defined bits. */
> -
> -typedef struct __packed {
> -    /* These are used in all kinds of entry. */
> -    unsigned long valid:1;      /* Valid mapping */
> -    unsigned long table:1;      /* == 1 in 4k map entries too */
> -
> -    /* These ten bits are only used in Block entries and are ignored
> -     * in Table entries. */
> -    unsigned long ai:3;         /* Attribute Index */
> -    unsigned long ns:1;         /* Not-Secure */
> -    unsigned long user:1;       /* User-visible */
> -    unsigned long ro:1;         /* Read-Only */
> -    unsigned long sh:2;         /* Shareability */
> -    unsigned long af:1;         /* Access Flag */
> -    unsigned long ng:1;         /* Not-Global */
> -
> -    /* The base address must be appropriately aligned for Block entries */
> -    unsigned long long base:36; /* Base address of block or next table */
> -    unsigned long sbz:4;        /* Must be zero */
> -
> -    /* These seven bits are only used in Block entries and are ignored
> -     * in Table entries. */
> -    unsigned long contig:1;     /* In a block of 16 contiguous entries */
> -    unsigned long pxn:1;        /* Privileged-XN */
> -    unsigned long xn:1;         /* eXecute-Never */
> -    unsigned long avail:4;      /* Ignored by hardware */
> -
> -    /* These 5 bits are only used in Table entries and are ignored in
> -     * Block entries */
> -    unsigned long pxnt:1;       /* Privileged-XN */
> -    unsigned long xnt:1;        /* eXecute-Never */
> -    unsigned long apt:2;        /* Access Permissions */
> -    unsigned long nst:1;        /* Not-Secure */
> -} lpae_pt_t;
> -
> -/* The p2m tables have almost the same layout, but some of the permission
> - * and cache-control bits are laid out differently (or missing) */
> -typedef struct __packed {
> -    /* These are used in all kinds of entry. */
> -    unsigned long valid:1;      /* Valid mapping */
> -    unsigned long table:1;      /* == 1 in 4k map entries too */
> -
> -    /* These ten bits are only used in Block entries and are ignored
> -     * in Table entries. */
> -    unsigned long mattr:4;      /* Memory Attributes */
> -    unsigned long read:1;       /* Read access */
> -    unsigned long write:1;      /* Write access */
> -    unsigned long sh:2;         /* Shareability */
> -    unsigned long af:1;         /* Access Flag */
> -    unsigned long sbz4:1;
> -
> -    /* The base address must be appropriately aligned for Block entries */
> -    unsigned long long base:36; /* Base address of block or next table */
> -    unsigned long sbz3:4;
> -
> -    /* These seven bits are only used in Block entries and are ignored
> -     * in Table entries. */
> -    unsigned long contig:1;     /* In a block of 16 contiguous entries */
> -    unsigned long sbz2:1;
> -    unsigned long xn:1;         /* eXecute-Never */
> -    unsigned long type:4;       /* Ignore by hardware. Used to store p2m types */
> -
> -    unsigned long sbz1:5;
> -} lpae_p2m_t;
> -
> -/* Permission mask: xn, write, read */
> -#define P2M_PERM_MASK (0x00400000000000C0ULL)
> -#define P2M_CLEAR_PERM(pte) ((pte).bits & ~P2M_PERM_MASK)
> -
> -/*
> - * Walk is the common bits of p2m and pt entries which are needed to
> - * simply walk the table (e.g. for debug).
> - */
> -typedef struct __packed {
> -    /* These are used in all kinds of entry. */
> -    unsigned long valid:1;      /* Valid mapping */
> -    unsigned long table:1;      /* == 1 in 4k map entries too */
> -
> -    unsigned long pad2:10;
> -
> -    /* The base address must be appropriately aligned for Block entries */
> -    unsigned long long base:36; /* Base address of block or next table */
> -
> -    unsigned long pad1:16;
> -} lpae_walk_t;
> -
> -typedef union {
> -    uint64_t bits;
> -    lpae_pt_t pt;
> -    lpae_p2m_t p2m;
> -    lpae_walk_t walk;
> -} lpae_t;
> -
>  #if defined(CONFIG_ARM_32)
>  # include <asm/arm32/page.h>
>  #elif defined(CONFIG_ARM_64)
> @@ -390,49 +283,6 @@ static inline int gva_to_ipa(vaddr_t va, paddr_t *paddr, unsigned int flags)
>  
>  #endif /* __ASSEMBLY__ */
>  
> -/*
> - * These numbers add up to a 48-bit input address space.
> - *
> - * On 32-bit the zeroeth level does not exist, therefore the total is
> - * 39-bits. The ARMv7-A architecture actually specifies a 40-bit input
> - * address space for the p2m, with an 8K (1024-entry) top-level table.
> - * However Xen only supports 16GB of RAM on 32-bit ARM systems and
> - * therefore 39-bits are sufficient.
> - */
> -
> -#define LPAE_SHIFT      9
> -#define LPAE_ENTRIES    (_AC(1,U) << LPAE_SHIFT)
> -#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
> -
> -#define THIRD_SHIFT    (PAGE_SHIFT)
> -#define THIRD_ORDER    (THIRD_SHIFT - PAGE_SHIFT)
> -#define THIRD_SIZE     ((paddr_t)1 << THIRD_SHIFT)
> -#define THIRD_MASK     (~(THIRD_SIZE - 1))
> -#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
> -#define SECOND_ORDER   (SECOND_SHIFT - PAGE_SHIFT)
> -#define SECOND_SIZE    ((paddr_t)1 << SECOND_SHIFT)
> -#define SECOND_MASK    (~(SECOND_SIZE - 1))
> -#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
> -#define FIRST_ORDER    (FIRST_SHIFT - PAGE_SHIFT)
> -#define FIRST_SIZE     ((paddr_t)1 << FIRST_SHIFT)
> -#define FIRST_MASK     (~(FIRST_SIZE - 1))
> -#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
> -#define ZEROETH_ORDER  (ZEROETH_SHIFT - PAGE_SHIFT)
> -#define ZEROETH_SIZE   ((paddr_t)1 << ZEROETH_SHIFT)
> -#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
> -
> -/* Calculate the offsets into the pagetables for a given VA */
> -#define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
> -#define first_linear_offset(va) ((va) >> FIRST_SHIFT)
> -#define second_linear_offset(va) ((va) >> SECOND_SHIFT)
> -#define third_linear_offset(va) ((va) >> THIRD_SHIFT)
> -
> -#define TABLE_OFFSET(offs) ((unsigned int)(offs) & LPAE_ENTRY_MASK)
> -#define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
> -#define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
> -#define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
> -#define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
> -
>  #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)
>  
>  #endif /* __ARM_PAGE_H__ */

Cheers,
~Sergej

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2017-07-03  8:55 UTC | newest]

Thread overview: 28+ messages
2017-06-30 15:54 [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Julien Grall
2017-06-30 15:54 ` [PATCH v3 01/16] xen/arm: setup: Remove bogus xenheap_mfn_end in setup_mm for arm64 Julien Grall
2017-06-30 20:07   ` Stefano Stabellini
2017-06-30 15:54 ` [PATCH v3 02/16] xen/arm: mm: Use typesafe mfn for xenheap_mfn_* Julien Grall
2017-06-30 15:54 ` [PATCH v3 03/16] xen/arm: p2m: Redefine mfn_to_page and page_to_mfn to use typesafe Julien Grall
2017-06-30 20:08   ` Stefano Stabellini
2017-06-30 15:54 ` [PATCH v3 04/16] xen/arm: mm: Redefine virt_to_mfn to support typesafe Julien Grall
2017-06-30 15:54 ` [PATCH v3 05/16] xen/arm: domain_build: " Julien Grall
2017-06-30 15:54 ` [PATCH v3 06/16] xen/arm: alternative: " Julien Grall
2017-06-30 15:54 ` [PATCH v3 07/16] xen/arm: livepatch: " Julien Grall
2017-07-03  8:21   ` Ross Lagerwall
2017-06-30 15:54 ` [PATCH v3 08/16] xen/arm: create_xen_entries: Use typesafe MFN Julien Grall
2017-06-30 15:54 ` [PATCH v3 09/16] xen/arm: Move LPAE definition in a separate header Julien Grall
2017-07-03  8:55   ` Sergej Proskurin
2017-06-30 15:54 ` [PATCH v3 10/16] xen/arm: lpae: Fix comments coding style Julien Grall
2017-07-02 17:13   ` Sergej Proskurin
2017-06-30 15:54 ` [PATCH v3 11/16] xen/arm: p2m: Rename p2m_valid, p2m_table, p2m_mapping and p2m_is_superpage Julien Grall
2017-07-02 17:17   ` Sergej Proskurin
2017-06-30 15:54 ` [PATCH v3 12/16] xen/arm: p2m: Move lpae_* helpers in lpae.h Julien Grall
2017-07-02 17:20   ` Sergej Proskurin
2017-06-30 15:54 ` [PATCH v3 13/16] xen/arm: mm: Use lpae_valid and lpae_table in create_xen_entries Julien Grall
2017-06-30 15:54 ` [PATCH v3 14/16] xen/arm: mm: Introduce temporary variable " Julien Grall
2017-06-30 15:54 ` [PATCH v3 15/16] xen/arm: mm: Use __func__ rather than plain name in format string Julien Grall
2017-06-30 15:54 ` [PATCH v3 16/16] xen/arm: lpae: Switch from bool_t to bool Julien Grall
2017-06-30 20:11   ` Stefano Stabellini
2017-06-30 20:20     ` Julien Grall
2017-06-30 21:20       ` Stefano Stabellini
2017-06-30 20:16 ` [PATCH v3 00/16] xen/arm: Clean-up memory subsystems Stefano Stabellini
