xen-devel.lists.xenproject.org archive mirror
* [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn
@ 2016-06-28 16:17 Julien Grall
  2016-06-28 16:17 ` [PATCH v5 01/17] xen: Use typesafe gfn/mfn in guest_physmap_* helpers Julien Grall
                   ` (17 more replies)
  0 siblings, 18 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Kevin Tian, sstabellini, Feng Wu, Jun Nakajima,
	George Dunlap, Andrew Cooper, Christoph Egger, Ian Jackson,
	Tim Deegan, Julien Grall, Paul Durrant, Shannon Zhao,
	Jan Beulich, Liu Jinsong, Suravee Suthikulpanit, Boris Ostrovsky,
	Mukesh Rathor

Hello all,

Some of the ARM functions mix gfns with mfns, and some even mix
physical addresses with frame numbers.

To avoid further confusion, this patch series adopts the terminology
described in xen/include/xen/mm.h and the associated typesafe types.
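
For readers unfamiliar with the typesafe machinery: it is built around
a TYPE_SAFE macro, roughly as in the simplified sketch below (the exact
definitions live in xen/include/xen/typesafe.h and xen/include/xen/mm.h
and may differ in detail). In debug builds the wrapper is a
single-member struct, so mixing a gfn_t with an mfn_t is a compile-time
error; in release builds it degrades to a plain unsigned long with no
runtime cost.

    #ifndef NDEBUG
    #define TYPE_SAFE(_type, _name)                                     \
        typedef struct { _type _name; } _name##_t;                      \
        static inline _name##_t _##_name(_type n)                       \
            { return (_name##_t) { n }; }                               \
        static inline _type _name##_x(_name##_t n) { return n._name; }
    #else
    #define TYPE_SAFE(_type, _name)                                     \
        typedef _type _name##_t;                                        \
        static inline _name##_t _##_name(_type n) { return n; }         \
        static inline _type _name##_x(_name##_t n) { return n; }
    #endif

    TYPE_SAFE(unsigned long, mfn);  /* mfn_t, _mfn(), mfn_x() */
    TYPE_SAFE(unsigned long, gfn);  /* gfn_t, _gfn(), gfn_x() */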

This series requires the patch [1] to be applied beforehand. I have
pushed a branch with that patch and this series applied to xenbits:
git://xenbits.xen.org/people/julieng/xen-unstable.git branch typesafe-v4

For the full list of changes, see each individual patch.

Yours sincerely,

[1] http://lists.xenproject.org/archives/html/xen-devel/2016-06/msg01744.html

Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Christoph Egger <chegger@amazon.de>
Cc: Feng Wu <feng.wu@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Liu Jinsong <jinsong.liu@alibaba-inc.com>
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Shannon Zhao <shannon.zhao@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

Julien Grall (17):
  xen: Use typesafe gfn/mfn in guest_physmap_* helpers
  xen: Use typesafe gfn in xenmem_add_to_physmap_one
  xen/arm: Rename grant_table_gfpn into grant_table_gfn and use the
    typesafe gfn
  xen: Use the typesafe mfn and gfn in map_mmio_regions...
  xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  xen: Use a typesafe to define INVALID_MFN
  xen: Use a typesafe to define INVALID_GFN
  xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and
    mfn
  xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn
  xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  xen/arm: dom0_build: Remove dead code in allocate_memory
  xen/arm: p2m: Remove unused operation ALLOCATE
  xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region...
  xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache ...
  xen/arm: p2m: Introduce helpers to insert and remove mapping
  xen/arm: p2m: Use typesafe gfn for {max,lowest}_mapped_gfn
  xen/arm: p2m: Rework the interface of apply_p2m_changes and use
    typesafe

 xen/arch/arm/domain.c                   |   4 +-
 xen/arch/arm/domain_build.c             |  72 ++-------
 xen/arch/arm/domctl.c                   |   2 +-
 xen/arch/arm/gic-v2.c                   |   4 +-
 xen/arch/arm/mm.c                       |  20 +--
 xen/arch/arm/p2m.c                      | 269 ++++++++++++--------------------
 xen/arch/arm/platforms/exynos5.c        |   8 +-
 xen/arch/arm/platforms/omap5.c          |  16 +-
 xen/arch/arm/traps.c                    |  21 +--
 xen/arch/arm/vgic-v2.c                  |   4 +-
 xen/arch/x86/cpu/mcheck/mce.c           |   2 +-
 xen/arch/x86/debug.c                    |  64 ++++----
 xen/arch/x86/domain.c                   |   7 +-
 xen/arch/x86/domain_build.c             |   6 +-
 xen/arch/x86/hvm/emulate.c              |   7 +-
 xen/arch/x86/hvm/hvm.c                  |  12 +-
 xen/arch/x86/hvm/ioreq.c                |  16 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   2 +-
 xen/arch/x86/hvm/viridian.c             |   6 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   8 +-
 xen/arch/x86/mm.c                       |  21 +--
 xen/arch/x86/mm/altp2m.c                |   2 +-
 xen/arch/x86/mm/guest_walk.c            |   4 +-
 xen/arch/x86/mm/hap/guest_walk.c        |  10 +-
 xen/arch/x86/mm/hap/hap.c               |   4 +-
 xen/arch/x86/mm/hap/nested_ept.c        |   2 +-
 xen/arch/x86/mm/p2m-ept.c               |   6 +-
 xen/arch/x86/mm/p2m-pod.c               |  24 +--
 xen/arch/x86/mm/p2m-pt.c                |  18 +--
 xen/arch/x86/mm/p2m.c                   | 164 ++++++++++---------
 xen/arch/x86/mm/paging.c                |  12 +-
 xen/arch/x86/mm/shadow/common.c         |  46 +++---
 xen/arch/x86/mm/shadow/multi.c          |  38 ++---
 xen/arch/x86/mm/shadow/private.h        |   2 +-
 xen/common/domain.c                     |   6 +-
 xen/common/domctl.c                     |   4 +-
 xen/common/grant_table.c                |  13 +-
 xen/common/memory.c                     |  38 ++---
 xen/drivers/passthrough/amd/iommu_map.c |   2 +-
 xen/drivers/passthrough/arm/smmu.c      |   4 +-
 xen/drivers/passthrough/vtd/iommu.c     |   4 +-
 xen/drivers/passthrough/x86/iommu.c     |   2 +-
 xen/include/asm-arm/domain.h            |   2 +-
 xen/include/asm-arm/grant_table.h       |   2 +-
 xen/include/asm-arm/p2m.h               |  44 +++---
 xen/include/asm-x86/guest_pt.h          |   4 +-
 xen/include/asm-x86/p2m.h               |  13 +-
 xen/include/xen/mm.h                    |   8 +-
 xen/include/xen/p2m-common.h            |   8 +-
 49 files changed, 481 insertions(+), 576 deletions(-)

-- 
1.9.1



* [PATCH v5 01/17] xen: Use typesafe gfn/mfn in guest_physmap_* helpers
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 16:17 ` [PATCH v5 02/17] xen: Use typesafe gfn in xenmem_add_to_physmap_one Julien Grall
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel
  Cc: sstabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich

Also rename some variables to gfn or mfn where doing so does not
require much rework.

Finally, replace %hu with %d when printing the domain id in
guest_physmap_add_entry (arch/x86/mm/p2m.c).
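
The conversion relies on the small arithmetic and comparison helpers
that accompany the typesafe types. A simplified sketch of the ones used
in this patch (see xen/include/xen/mm.h for the definitive versions):

    static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
    {
        return _mfn(mfn_x(mfn) + i);
    }

    static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
    {
        return _gfn(gfn_x(gfn) + i);
    }

    static inline bool_t mfn_eq(mfn_t x, mfn_t y)
    {
        return mfn_x(x) == mfn_x(y);
    }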

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

    Changes in v4:
        - Add Stefano's Acked-by

    Changes in v3:
        - Use %d to print the domain id rather than %hu
        - Add Jan's Acked-by for non-ARM bits

    Changes in v2:
        - Don't use a wrapper for x86. Instead use mfn_* to make
        the change simpler.
---
 xen/arch/arm/domain_build.c        |  2 +-
 xen/arch/arm/mm.c                  | 10 ++---
 xen/arch/arm/p2m.c                 | 20 +++++-----
 xen/arch/x86/domain.c              |  5 ++-
 xen/arch/x86/domain_build.c        |  6 +--
 xen/arch/x86/hvm/ioreq.c           |  8 ++--
 xen/arch/x86/mm.c                  | 12 +++---
 xen/arch/x86/mm/p2m.c              | 78 ++++++++++++++++++++------------------
 xen/common/grant_table.c           |  7 ++--
 xen/common/memory.c                | 32 ++++++++--------
 xen/drivers/passthrough/arm/smmu.c |  4 +-
 xen/include/asm-arm/p2m.h          | 12 +++---
 xen/include/asm-x86/p2m.h          | 11 +++---
 xen/include/xen/mm.h               |  2 +-
 14 files changed, 110 insertions(+), 99 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 410bb4f..9035486 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -117,7 +117,7 @@ static bool_t insert_11_bank(struct domain *d,
         goto fail;
     }
 
-    res = guest_physmap_add_page(d, spfn, spfn, order);
+    res = guest_physmap_add_page(d, _gfn(spfn), _mfn(spfn), order);
     if ( res )
         panic("Failed map pages to DOM0: %d", res);
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2ec211b..5ab9b75 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1153,7 +1153,7 @@ int xenmem_add_to_physmap_one(
     }
 
     /* Map at new location. */
-    rc = guest_physmap_add_entry(d, gpfn, mfn, 0, t);
+    rc = guest_physmap_add_entry(d, _gfn(gpfn), _mfn(mfn), 0, t);
 
     /* If we fail to add the mapping, we need to drop the reference we
      * took earlier on foreign pages */
@@ -1282,8 +1282,8 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame,
     if ( flags & GNTMAP_readonly )
         t = p2m_grant_map_ro;
 
-    rc = guest_physmap_add_entry(current->domain, addr >> PAGE_SHIFT,
-                                 frame, 0, t);
+    rc = guest_physmap_add_entry(current->domain, _gfn(addr >> PAGE_SHIFT),
+                                 _mfn(frame), 0, t);
 
     if ( rc )
         return GNTST_general_error;
@@ -1294,13 +1294,13 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame,
 int replace_grant_host_mapping(unsigned long addr, unsigned long mfn,
         unsigned long new_addr, unsigned int flags)
 {
-    unsigned long gfn = (unsigned long)(addr >> PAGE_SHIFT);
+    gfn_t gfn = _gfn(addr >> PAGE_SHIFT);
     struct domain *d = current->domain;
 
     if ( new_addr != 0 || (flags & GNTMAP_contains_pte) )
         return GNTST_general_error;
 
-    guest_physmap_remove_page(d, gfn, mfn, 0);
+    guest_physmap_remove_page(d, gfn, _mfn(mfn), 0);
 
     return GNTST_okay;
 }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5afae1d..0395a40 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1292,26 +1292,26 @@ int map_dev_mmio_region(struct domain *d,
 }
 
 int guest_physmap_add_entry(struct domain *d,
-                            unsigned long gpfn,
-                            unsigned long mfn,
+                            gfn_t gfn,
+                            mfn_t mfn,
                             unsigned long page_order,
                             p2m_type_t t)
 {
     return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gpfn),
-                             pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, 0, t,
+                             pfn_to_paddr(gfn_x(gfn)),
+                             pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
+                             pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, t,
                              d->arch.p2m.default_access);
 }
 
 void guest_physmap_remove_page(struct domain *d,
-                               unsigned long gpfn,
-                               unsigned long mfn, unsigned int page_order)
+                               gfn_t gfn,
+                               mfn_t mfn, unsigned int page_order)
 {
     apply_p2m_changes(d, REMOVE,
-                      pfn_to_paddr(gpfn),
-                      pfn_to_paddr(gpfn + (1<<page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, 0, p2m_invalid,
+                      pfn_to_paddr(gfn_x(gfn)),
+                      pfn_to_paddr(gfn_x(gfn) + (1<<page_order)),
+                      pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
                       d->arch.p2m.default_access);
 }
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 3ba7ed1..bb59247 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -802,9 +802,10 @@ int arch_domain_soft_reset(struct domain *d)
         ret = -ENOMEM;
         goto exit_put_gfn;
     }
-    guest_physmap_remove_page(d, gfn, mfn, PAGE_ORDER_4K);
+    guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), PAGE_ORDER_4K);
 
-    ret = guest_physmap_add_page(d, gfn, page_to_mfn(new_page), PAGE_ORDER_4K);
+    ret = guest_physmap_add_page(d, _gfn(gfn), _mfn(page_to_mfn(new_page)),
+                                 PAGE_ORDER_4K);
     if ( ret )
     {
         printk(XENLOG_G_ERR "Failed to add a page to replace"
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b29c377..0a02d65 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -427,7 +427,7 @@ static __init void pvh_add_mem_mapping(struct domain *d, unsigned long gfn,
         if ( !iomem_access_permitted(d, mfn + i, mfn + i) )
         {
             omfn = get_gfn_query_unlocked(d, gfn + i, &t);
-            guest_physmap_remove_page(d, gfn + i, mfn_x(omfn), PAGE_ORDER_4K);
+            guest_physmap_remove_page(d, _gfn(gfn + i), omfn, PAGE_ORDER_4K);
             continue;
         }
 
@@ -530,7 +530,7 @@ static __init void pvh_map_all_iomem(struct domain *d, unsigned long nr_pages)
             if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
                 continue;
 
-            rc = guest_physmap_add_page(d, start_pfn, mfn, 0);
+            rc = guest_physmap_add_page(d, _gfn(start_pfn), _mfn(mfn), 0);
             if ( rc != 0 )
                 panic("Unable to add gpfn %#lx mfn %#lx to Dom0 physmap: %d",
                       start_pfn, mfn, rc);
@@ -605,7 +605,7 @@ static __init void dom0_update_physmap(struct domain *d, unsigned long pfn,
 {
     if ( is_pvh_domain(d) )
     {
-        int rc = guest_physmap_add_page(d, pfn, mfn, 0);
+        int rc = guest_physmap_add_page(d, _gfn(pfn), _mfn(mfn), 0);
         BUG_ON(rc);
         return;
     }
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 333ce14..7148ac4 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -267,8 +267,8 @@ bool_t is_ioreq_server_page(struct domain *d, const struct page_info *page)
 static void hvm_remove_ioreq_gmfn(
     struct domain *d, struct hvm_ioreq_page *iorp)
 {
-    guest_physmap_remove_page(d, iorp->gmfn,
-                              page_to_mfn(iorp->page), 0);
+    guest_physmap_remove_page(d, _gfn(iorp->gmfn),
+                              _mfn(page_to_mfn(iorp->page)), 0);
     clear_page(iorp->va);
 }
 
@@ -279,8 +279,8 @@ static int hvm_add_ioreq_gmfn(
 
     clear_page(iorp->va);
 
-    rc = guest_physmap_add_page(d, iorp->gmfn,
-                                page_to_mfn(iorp->page), 0);
+    rc = guest_physmap_add_page(d, _gfn(iorp->gmfn),
+                                _mfn(page_to_mfn(iorp->page)), 0);
     if ( rc == 0 )
         paging_mark_dirty(d, page_to_mfn(iorp->page));
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ae7c8ab..7fbc94e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4211,7 +4211,8 @@ static int create_grant_p2m_mapping(uint64_t addr, unsigned long frame,
     else
         p2mt = p2m_grant_map_rw;
     rc = guest_physmap_add_entry(current->domain,
-                                 addr >> PAGE_SHIFT, frame, PAGE_ORDER_4K, p2mt);
+                                 _gfn(addr >> PAGE_SHIFT),
+                                 _mfn(frame), PAGE_ORDER_4K, p2mt);
     if ( rc )
         return GNTST_general_error;
     else
@@ -4268,7 +4269,7 @@ static int replace_grant_p2m_mapping(
                 type, mfn_x(old_mfn), frame);
         return GNTST_general_error;
     }
-    guest_physmap_remove_page(d, gfn, frame, PAGE_ORDER_4K);
+    guest_physmap_remove_page(d, _gfn(gfn), _mfn(frame), PAGE_ORDER_4K);
 
     put_gfn(d, gfn);
     return GNTST_okay;
@@ -4853,7 +4854,8 @@ int xenmem_add_to_physmap_one(
     {
         if ( is_xen_heap_mfn(prev_mfn) )
             /* Xen heap frames are simply unhooked from this phys slot. */
-            guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
+            guest_physmap_remove_page(d, _gfn(gpfn), _mfn(prev_mfn),
+                                      PAGE_ORDER_4K);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
             guest_remove_page(d, gpfn);
@@ -4867,10 +4869,10 @@ int xenmem_add_to_physmap_one(
     if ( space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range )
         ASSERT( old_gpfn == gfn );
     if ( old_gpfn != INVALID_M2P_ENTRY )
-        guest_physmap_remove_page(d, old_gpfn, mfn, PAGE_ORDER_4K);
+        guest_physmap_remove_page(d, _gfn(old_gpfn), _mfn(mfn), PAGE_ORDER_4K);
 
     /* Map at new location. */
-    rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
+    rc = guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), PAGE_ORDER_4K);
 
     /* In the XENMAPSPACE_gmfn, we took a ref of the gfn at the top */
     if ( space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 89462b2..16733a4 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -675,21 +675,20 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 }
 
 int
-guest_physmap_remove_page(struct domain *d, unsigned long gfn,
-                          unsigned long mfn, unsigned int page_order)
+guest_physmap_remove_page(struct domain *d, gfn_t gfn,
+                          mfn_t mfn, unsigned int page_order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
     gfn_lock(p2m, gfn, page_order);
-    rc = p2m_remove_page(p2m, gfn, mfn, page_order);
+    rc = p2m_remove_page(p2m, gfn_x(gfn), mfn_x(mfn), page_order);
     gfn_unlock(p2m, gfn, page_order);
     return rc;
 }
 
 int
-guest_physmap_add_entry(struct domain *d, unsigned long gfn,
-                        unsigned long mfn, unsigned int page_order, 
-                        p2m_type_t t)
+guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
+                        unsigned int page_order, p2m_type_t t)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long i, ogfn;
@@ -705,13 +704,14 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
         {
             for ( i = 0; i < (1 << page_order); i++ )
             {
-                rc = iommu_map_page(
-                    d, mfn + i, mfn + i, IOMMUF_readable|IOMMUF_writable);
+                rc = iommu_map_page(d, mfn_x(mfn_add(mfn, i)),
+                                    mfn_x(mfn_add(mfn, i)),
+                                    IOMMUF_readable|IOMMUF_writable);
                 if ( rc != 0 )
                 {
                     while ( i-- > 0 )
                         /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(d, mfn + i) )
+                        if ( iommu_unmap_page(d, mfn_x(mfn_add(mfn, i))) )
                             continue;
 
                     return rc;
@@ -727,18 +727,20 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
 
     p2m_lock(p2m);
 
-    P2M_DEBUG("adding gfn=%#lx mfn=%#lx\n", gfn, mfn);
+    P2M_DEBUG("adding gfn=%#lx mfn=%#lx\n", gfn_x(gfn), mfn_x(mfn));
 
     /* First, remove m->p mappings for existing p->m mappings */
     for ( i = 0; i < (1UL << page_order); i++ )
     {
-        omfn = p2m->get_entry(p2m, gfn + i, &ot, &a, 0, NULL, NULL);
+        omfn = p2m->get_entry(p2m, gfn_x(gfn_add(gfn, i)), &ot,
+                              &a, 0, NULL, NULL);
         if ( p2m_is_shared(ot) )
         {
             /* Do an unshare to cleanly take care of all corner 
              * cases. */
             int rc;
-            rc = mem_sharing_unshare_page(p2m->domain, gfn + i, 0);
+            rc = mem_sharing_unshare_page(p2m->domain,
+                                          gfn_x(gfn_add(gfn, i)), 0);
             if ( rc )
             {
                 p2m_unlock(p2m);
@@ -753,10 +755,13 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
                  *
                  * Foreign domains are okay to place an event as they 
                  * won't go to sleep. */
-                (void)mem_sharing_notify_enomem(p2m->domain, gfn + i, 0);
+                (void)mem_sharing_notify_enomem(p2m->domain,
+                                                gfn_x(gfn_add(gfn, i)),
+                                                0);
                 return rc;
             }
-            omfn = p2m->get_entry(p2m, gfn + i, &ot, &a, 0, NULL, NULL);
+            omfn = p2m->get_entry(p2m, gfn_x(gfn_add(gfn, i)),
+                                  &ot, &a, 0, NULL, NULL);
             ASSERT(!p2m_is_shared(ot));
         }
         if ( p2m_is_grant(ot) || p2m_is_foreign(ot) )
@@ -787,39 +792,39 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
     /* Then, look for m->p mappings for this range and deal with them */
     for ( i = 0; i < (1UL << page_order); i++ )
     {
-        if ( page_get_owner(mfn_to_page(_mfn(mfn + i))) == dom_cow )
+        if ( page_get_owner(mfn_to_page(mfn_add(mfn, i))) == dom_cow )
         {
             /* This is no way to add a shared page to your physmap! */
-            gdprintk(XENLOG_ERR, "Adding shared mfn %lx directly to dom %hu "
-                        "physmap not allowed.\n", mfn+i, d->domain_id);
+            gdprintk(XENLOG_ERR, "Adding shared mfn %lx directly to dom%d physmap not allowed.\n",
+                     mfn_x(mfn_add(mfn, i)), d->domain_id);
             p2m_unlock(p2m);
             return -EINVAL;
         }
-        if ( page_get_owner(mfn_to_page(_mfn(mfn + i))) != d )
+        if ( page_get_owner(mfn_to_page(mfn_add(mfn, i))) != d )
             continue;
-        ogfn = mfn_to_gfn(d, _mfn(mfn+i));
-        if ( (ogfn != INVALID_M2P_ENTRY) && (ogfn != gfn + i) )
+        ogfn = mfn_to_gfn(d, mfn_add(mfn, i));
+        if ( (ogfn != INVALID_M2P_ENTRY) && (ogfn != gfn_x(gfn_add(gfn, i))) )
         {
             /* This machine frame is already mapped at another physical
              * address */
             P2M_DEBUG("aliased! mfn=%#lx, old gfn=%#lx, new gfn=%#lx\n",
-                      mfn + i, ogfn, gfn + i);
+                      mfn_x(mfn_add(mfn, i)), ogfn, gfn_x(gfn_add(gfn, i)));
             omfn = p2m->get_entry(p2m, ogfn, &ot, &a, 0, NULL, NULL);
             if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
             {
                 ASSERT(mfn_valid(omfn));
                 P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
                           ogfn , mfn_x(omfn));
-                if ( mfn_x(omfn) == (mfn + i) )
-                    p2m_remove_page(p2m, ogfn, mfn + i, 0);
+                if ( mfn_eq(omfn, mfn_add(mfn, i)) )
+                    p2m_remove_page(p2m, ogfn, mfn_x(mfn_add(mfn, i)), 0);
             }
         }
     }
 
     /* Now, actually do the two-way mapping */
-    if ( mfn_valid(_mfn(mfn)) ) 
+    if ( mfn_valid(mfn) )
     {
-        rc = p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t,
+        rc = p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, t,
                            p2m->default_access);
         if ( rc )
             goto out; /* Failed to update p2m, bail without updating m2p. */
@@ -827,14 +832,15 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
-                set_gpfn_from_mfn(mfn+i, gfn+i);
+                set_gpfn_from_mfn(mfn_x(mfn_add(mfn, i)),
+                                  gfn_x(gfn_add(gfn, i)));
         }
     }
     else
     {
         gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
-                 gfn, mfn);
-        rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order,
+                 gfn_x(gfn), mfn_x(mfn));
+        rc = p2m_set_entry(p2m, gfn_x(gfn), _mfn(INVALID_MFN), page_order,
                            p2m_invalid, p2m->default_access);
         if ( rc == 0 )
         {
@@ -2798,7 +2804,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
                     unsigned long gpfn, domid_t foreigndom)
 {
     p2m_type_t p2mt, p2mt_prev;
-    unsigned long prev_mfn, mfn;
+    mfn_t prev_mfn, mfn;
     struct page_info *page;
     int rc;
     struct domain *fdom;
@@ -2841,15 +2847,15 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
         rc = -EINVAL;
         goto out;
     }
-    mfn = mfn_x(page_to_mfn(page));
+    mfn = page_to_mfn(page);
 
     /* Remove previously mapped page if it is present. */
-    prev_mfn = mfn_x(get_gfn(tdom, gpfn, &p2mt_prev));
-    if ( mfn_valid(_mfn(prev_mfn)) )
+    prev_mfn = get_gfn(tdom, gpfn, &p2mt_prev);
+    if ( mfn_valid(prev_mfn) )
     {
-        if ( is_xen_heap_mfn(prev_mfn) )
+        if ( is_xen_heap_mfn(mfn_x(prev_mfn)) )
             /* Xen heap frames are simply unhooked from this phys slot */
-            guest_physmap_remove_page(tdom, gpfn, prev_mfn, 0);
+            guest_physmap_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
             guest_remove_page(tdom, gpfn);
@@ -2859,11 +2865,11 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * will update the m2p table which will result in  mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
-    rc = set_foreign_p2m_entry(tdom, gpfn, _mfn(mfn));
+    rc = set_foreign_p2m_entry(tdom, gpfn, mfn);
     if ( rc )
         gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. "
                  "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n",
-                 gpfn, mfn, fgfn, tdom->domain_id, fdom->domain_id);
+                 gpfn, mfn_x(mfn), fgfn, tdom->domain_id, fdom->domain_id);
 
     put_page(page);
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 3c304f4..3f15543 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1818,7 +1818,7 @@ gnttab_transfer(
             goto copyback;
         }
 
-        guest_physmap_remove_page(d, gop.mfn, mfn, 0);
+        guest_physmap_remove_page(d, _gfn(gop.mfn), _mfn(mfn), 0);
         gnttab_flush_tlb(d);
 
         /* Find the target domain. */
@@ -1946,7 +1946,7 @@ gnttab_transfer(
         {
             grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, sha->frame, mfn, 0);
+            guest_physmap_add_page(e, _gfn(sha->frame), _mfn(mfn), 0);
             if ( !paging_mode_translate(e) )
                 sha->frame = mfn;
         }
@@ -1954,7 +1954,8 @@ gnttab_transfer(
         {
             grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, sha->full_page.frame, mfn, 0);
+            guest_physmap_add_page(e, _gfn(sha->full_page.frame),
+                                   _mfn(mfn), 0);
             if ( !paging_mode_translate(e) )
                 sha->full_page.frame = mfn;
         }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index b54b076..a8a75e0 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -213,7 +213,7 @@ static void populate_physmap(struct memop_args *a)
                 mfn = page_to_mfn(page);
             }
 
-            guest_physmap_add_page(d, gpfn, mfn, a->extent_order);
+            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), a->extent_order);
 
             if ( !paging_mode_translate(d) )
             {
@@ -237,20 +237,20 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
 #ifdef CONFIG_X86
     p2m_type_t p2mt;
 #endif
-    unsigned long mfn;
+    mfn_t mfn;
 
 #ifdef CONFIG_X86
-    mfn = mfn_x(get_gfn_query(d, gmfn, &p2mt)); 
+    mfn = get_gfn_query(d, gmfn, &p2mt);
     if ( unlikely(p2m_is_paging(p2mt)) )
     {
-        guest_physmap_remove_page(d, gmfn, mfn, 0);
+        guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
         put_gfn(d, gmfn);
         /* If the page hasn't yet been paged out, there is an
          * actual page that needs to be released. */
         if ( p2mt == p2m_ram_paging_out )
         {
-            ASSERT(mfn_valid(mfn));
-            page = mfn_to_page(mfn);
+            ASSERT(mfn_valid(mfn_x(mfn)));
+            page = mfn_to_page(mfn_x(mfn));
             if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
                 put_page(page);
         }
@@ -259,14 +259,14 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     }
     if ( p2mt == p2m_mmio_direct )
     {
-        clear_mmio_p2m_entry(d, gmfn, _mfn(mfn), 0);
+        clear_mmio_p2m_entry(d, gmfn, mfn, 0);
         put_gfn(d, gmfn);
         return 1;
     }
 #else
-    mfn = mfn_x(gfn_to_mfn(d, _gfn(gmfn)));
+    mfn = gfn_to_mfn(d, _gfn(gmfn));
 #endif
-    if ( unlikely(!mfn_valid(mfn)) )
+    if ( unlikely(!mfn_valid(mfn_x(mfn))) )
     {
         put_gfn(d, gmfn);
         gdprintk(XENLOG_INFO, "Domain %u page number %lx invalid\n",
@@ -288,12 +288,12 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
             return 0;
         }
         /* Maybe the mfn changed */
-        mfn = mfn_x(get_gfn_query_unlocked(d, gmfn, &p2mt));
+        mfn = get_gfn_query_unlocked(d, gmfn, &p2mt);
         ASSERT(!p2m_is_shared(p2mt));
     }
 #endif /* CONFIG_X86 */
 
-    page = mfn_to_page(mfn);
+    page = mfn_to_page(mfn_x(mfn));
     if ( unlikely(!get_page(page, d)) )
     {
         put_gfn(d, gmfn);
@@ -316,7 +316,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
          test_and_clear_bit(_PGC_allocated, &page->count_info) )
         put_page(page);
 
-    guest_physmap_remove_page(d, gmfn, mfn, 0);
+    guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
 
     put_page(page);
     put_gfn(d, gmfn);
@@ -540,7 +540,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             gfn = mfn_to_gmfn(d, mfn);
             /* Pages were unshared above */
             BUG_ON(SHARED_M2P(gfn));
-            guest_physmap_remove_page(d, gfn, mfn, 0);
+            guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), 0);
             put_page(page);
         }
 
@@ -584,7 +584,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             }
 
             mfn = page_to_mfn(page);
-            guest_physmap_add_page(d, gpfn, mfn, exch.out.extent_order);
+            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn),
+                                   exch.out.extent_order);
 
             if ( !paging_mode_translate(d) )
             {
@@ -1095,7 +1096,8 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         page = get_page_from_gfn(d, xrfp.gpfn, NULL, P2M_ALLOC);
         if ( page )
         {
-            guest_physmap_remove_page(d, xrfp.gpfn, page_to_mfn(page), 0);
+            guest_physmap_remove_page(d, _gfn(xrfp.gpfn),
+                                      _mfn(page_to_mfn(page)), 0);
             put_page(page);
         }
         else
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 8a4b123..cf8b8b8 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2774,7 +2774,7 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	 * The function guest_physmap_add_entry replaces the current mapping
 	 * if there is already one...
 	 */
-	return guest_physmap_add_entry(d, gfn, mfn, 0, t);
+	return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
 }
 
 static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
@@ -2786,7 +2786,7 @@ static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 	if ( !is_domain_direct_mapped(d) )
 		return -EINVAL;
 
-	guest_physmap_remove_page(d, gfn, gfn, 0);
+	guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
 
 	return 0;
 }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 75c65a8..0d1e61e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -160,23 +160,23 @@ int map_dev_mmio_region(struct domain *d,
                         unsigned long mfn);
 
 int guest_physmap_add_entry(struct domain *d,
-                            unsigned long gfn,
-                            unsigned long mfn,
+                            gfn_t gfn,
+                            mfn_t mfn,
                             unsigned long page_order,
                             p2m_type_t t);
 
 /* Untyped version for RAM only, for compatibility */
 static inline int guest_physmap_add_page(struct domain *d,
-                                         unsigned long gfn,
-                                         unsigned long mfn,
+                                         gfn_t gfn,
+                                         mfn_t mfn,
                                          unsigned int page_order)
 {
     return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
 }
 
 void guest_physmap_remove_page(struct domain *d,
-                               unsigned long gpfn,
-                               unsigned long mfn, unsigned int page_order);
+                               gfn_t gfn,
+                               mfn_t mfn, unsigned int page_order);
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 65675a2..4ab3574 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -545,14 +545,14 @@ void p2m_teardown(struct p2m_domain *p2m);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
-int guest_physmap_add_entry(struct domain *d, unsigned long gfn,
-                            unsigned long mfn, unsigned int page_order, 
+int guest_physmap_add_entry(struct domain *d, gfn_t gfn,
+                            mfn_t mfn, unsigned int page_order,
                             p2m_type_t t);
 
 /* Untyped version for RAM only, for compatibility */
 static inline int guest_physmap_add_page(struct domain *d,
-                                         unsigned long gfn,
-                                         unsigned long mfn,
+                                         gfn_t gfn,
+                                         mfn_t mfn,
                                          unsigned int page_order)
 {
     return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
@@ -560,8 +560,7 @@ static inline int guest_physmap_add_page(struct domain *d,
 
 /* Remove a page from a domain's p2m table */
 int guest_physmap_remove_page(struct domain *d,
-                              unsigned long gfn,
-                              unsigned long mfn, unsigned int page_order);
+                              gfn_t gfn, mfn_t mfn, unsigned int page_order);
 
 /* Set a p2m range as populate-on-demand */
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 13f706e..b62f473 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -552,7 +552,7 @@ int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
 
 /* Returns 1 on success, 0 on error, negative if the ring
  * for event propagation is full in the presence of paging */
-int guest_remove_page(struct domain *d, unsigned long gmfn);
+int guest_remove_page(struct domain *d, unsigned long gfn);
 
 #define RAM_TYPE_CONVENTIONAL 0x00000001
 #define RAM_TYPE_RESERVED     0x00000002
-- 
1.9.1



* [PATCH v5 02/17] xen: Use typesafe gfn in xenmem_add_to_physmap_one
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
  2016-06-28 16:17 ` [PATCH v5 01/17] xen: Use typesafe gfn/mfn in guest_physmap_* helpers Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 16:17 ` [PATCH v5 03/17] xen/arm: Rename grant_table_gfpn into grant_table_gfn and use the typesafe gfn Julien Grall
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel
  Cc: sstabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Julien Grall, Jan Beulich

The x86 version of the function xenmem_add_to_physmap_one contains the
variable names gpfn and gfn, which makes the code very confusing. I
have left them unchanged for now.

Also, rename gpfn to gfn in the ARM version as the latter is the correct
acronym for a guest physical frame.

Finally, remove the trailing whitespace around the changes.
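
The net effect is that the raw xen_pfn_t from the hypercall ABI is now
wrapped exactly once, at the common-code boundary, and everything below
deals in typed frames. A minimal sketch of the resulting call pattern
(raw_gpfn is a hypothetical local introduced for illustration):

    xen_pfn_t raw_gpfn = xatp->gpfn;       /* untyped ABI value */
    rc = xenmem_add_to_physmap_one(d, xatp->space, extra,
                                   xatp->idx, _gfn(raw_gpfn));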

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

    Changes in v4:
        - Add Stefano's Acked-by

    Changes in v3:
        - Add Jan's Acked-by for non-ARM bits
---
 xen/arch/arm/mm.c    | 10 +++++-----
 xen/arch/x86/mm.c    | 15 +++++++--------
 xen/common/memory.c  |  6 +++---
 xen/include/xen/mm.h |  2 +-
 4 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 5ab9b75..6882d54 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1046,7 +1046,7 @@ int xenmem_add_to_physmap_one(
     unsigned int space,
     union xen_add_to_physmap_batch_extra extra,
     unsigned long idx,
-    xen_pfn_t gpfn)
+    gfn_t gfn)
 {
     unsigned long mfn = 0;
     int rc;
@@ -1081,8 +1081,8 @@ int xenmem_add_to_physmap_one(
             else
                 return -EINVAL;
         }
-        
-        d->arch.grant_table_gpfn[idx] = gpfn;
+
+        d->arch.grant_table_gpfn[idx] = gfn_x(gfn);
 
         t = p2m_ram_rw;
 
@@ -1145,7 +1145,7 @@ int xenmem_add_to_physmap_one(
         if ( extra.res0 )
             return -EOPNOTSUPP;
 
-        rc = map_dev_mmio_region(d, gpfn, 1, idx);
+        rc = map_dev_mmio_region(d, gfn_x(gfn), 1, idx);
         return rc;
 
     default:
@@ -1153,7 +1153,7 @@ int xenmem_add_to_physmap_one(
     }
 
     /* Map at new location. */
-    rc = guest_physmap_add_entry(d, _gfn(gpfn), _mfn(mfn), 0, t);
+    rc = guest_physmap_add_entry(d, gfn, _mfn(mfn), 0, t);
 
     /* If we fail to add the mapping, we need to drop the reference we
      * took earlier on foreign pages */
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7fbc94e..dbcf6cb 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4775,7 +4775,7 @@ int xenmem_add_to_physmap_one(
     unsigned int space,
     union xen_add_to_physmap_batch_extra extra,
     unsigned long idx,
-    xen_pfn_t gpfn)
+    gfn_t gpfn)
 {
     struct page_info *page = NULL;
     unsigned long gfn = 0; /* gcc ... */
@@ -4834,7 +4834,7 @@ int xenmem_add_to_physmap_one(
             break;
         }
         case XENMAPSPACE_gmfn_foreign:
-            return p2m_add_foreign(d, idx, gpfn, extra.foreign_domid);
+            return p2m_add_foreign(d, idx, gfn_x(gpfn), extra.foreign_domid);
         default:
             break;
     }
@@ -4849,19 +4849,18 @@ int xenmem_add_to_physmap_one(
     }
 
     /* Remove previously mapped page if it was present. */
-    prev_mfn = mfn_x(get_gfn(d, gpfn, &p2mt));
+    prev_mfn = mfn_x(get_gfn(d, gfn_x(gpfn), &p2mt));
     if ( mfn_valid(prev_mfn) )
     {
         if ( is_xen_heap_mfn(prev_mfn) )
             /* Xen heap frames are simply unhooked from this phys slot. */
-            guest_physmap_remove_page(d, _gfn(gpfn), _mfn(prev_mfn),
-                                      PAGE_ORDER_4K);
+            guest_physmap_remove_page(d, gpfn, _mfn(prev_mfn), PAGE_ORDER_4K);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
-            guest_remove_page(d, gpfn);
+            guest_remove_page(d, gfn_x(gpfn));
     }
     /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-    put_gfn(d, gpfn);
+    put_gfn(d, gfn_x(gpfn));
 
     /* Unmap from old location, if any. */
     old_gpfn = get_gpfn_from_mfn(mfn);
@@ -4872,7 +4871,7 @@ int xenmem_add_to_physmap_one(
         guest_physmap_remove_page(d, _gfn(old_gpfn), _mfn(mfn), PAGE_ORDER_4K);
 
     /* Map at new location. */
-    rc = guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), PAGE_ORDER_4K);
+    rc = guest_physmap_add_page(d, gpfn, _mfn(mfn), PAGE_ORDER_4K);
 
     /* In the XENMAPSPACE_gmfn, we took a ref of the gfn at the top */
     if ( space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range )
diff --git a/xen/common/memory.c b/xen/common/memory.c
index a8a75e0..812334b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -649,7 +649,7 @@ static int xenmem_add_to_physmap(struct domain *d,
 
     if ( xatp->space != XENMAPSPACE_gmfn_range )
         return xenmem_add_to_physmap_one(d, xatp->space, extra,
-                                         xatp->idx, xatp->gpfn);
+                                         xatp->idx, _gfn(xatp->gpfn));
 
     if ( xatp->size < start )
         return -EILSEQ;
@@ -666,7 +666,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     while ( xatp->size > done )
     {
         rc = xenmem_add_to_physmap_one(d, xatp->space, extra,
-                                       xatp->idx, xatp->gpfn);
+                                       xatp->idx, _gfn(xatp->gpfn));
         if ( rc < 0 )
             break;
 
@@ -735,7 +735,7 @@ static int xenmem_add_to_physmap_batch(struct domain *d,
 
         rc = xenmem_add_to_physmap_one(d, xatpb->space,
                                        xatpb->u,
-                                       idx, gpfn);
+                                       idx, _gfn(gpfn));
 
         if ( unlikely(__copy_to_guest_offset(xatpb->errs, 0, &rc, 1)) )
         {
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index b62f473..afbb1a1 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -548,7 +548,7 @@ void scrub_one_page(struct page_info *);
 
 int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
                               union xen_add_to_physmap_batch_extra extra,
-                              unsigned long idx, xen_pfn_t gpfn);
+                              unsigned long idx, gfn_t gfn);
 
 /* Returns 1 on success, 0 on error, negative if the ring
  * for event propagation is full in the presence of paging */
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v5 03/17] xen/arm: Rename grant_table_gfpn into grant_table_gfn and use the typesafe gfn
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
  2016-06-28 16:17 ` [PATCH v5 01/17] xen: Use typesafe gfn/mfn in guest_physmap_* helpers Julien Grall
  2016-06-28 16:17 ` [PATCH v5 02/17] xen: Use typesafe gfn in xenmem_add_to_physmap_one Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 16:17 ` [PATCH v5 04/17] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The correct acronym for a guest physical frame is gfn. Also use the
typesafe gfn to ensure that a guest frame is actually used.
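
With grant_table_gfn now an array of gfn_t, accidentally storing a
machine frame in it is rejected at compile time in debug builds, where
gfn_t and mfn_t are distinct struct types. A hypothetical illustration,
not part of the patch:

    gfn_t *table = d->arch.grant_table_gfn;

    table[idx] = gfn;                /* fine: gfn has type gfn_t */
    /* table[idx] = mfn; */          /* mfn_t: would not compile */
    table[idx] = _gfn(mfn_x(mfn));   /* still possible, but explicit */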

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v4:
        - Add Stefano's acked-by

    Changes in v2:
        - Remove extra pair of brackets.
---
 xen/arch/arm/domain.c             | 4 ++--
 xen/arch/arm/mm.c                 | 2 +-
 xen/include/asm-arm/domain.h      | 2 +-
 xen/include/asm-arm/grant_table.h | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index d8a804c..6ce4645 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -464,13 +464,13 @@ struct domain *alloc_domain_struct(void)
         return NULL;
 
     clear_page(d);
-    d->arch.grant_table_gpfn = xzalloc_array(xen_pfn_t, max_grant_frames);
+    d->arch.grant_table_gfn = xzalloc_array(gfn_t, max_grant_frames);
     return d;
 }
 
 void free_domain_struct(struct domain *d)
 {
-    xfree(d->arch.grant_table_gpfn);
+    xfree(d->arch.grant_table_gfn);
     free_xenheap_page(d);
 }
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 6882d54..0e408f8 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1082,7 +1082,7 @@ int xenmem_add_to_physmap_one(
                 return -EINVAL;
         }
 
-        d->arch.grant_table_gpfn[idx] = gfn_x(gfn);
+        d->arch.grant_table_gfn[idx] = gfn;
 
         t = p2m_ram_rw;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 370cdeb..979f7de 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -51,7 +51,7 @@ struct arch_domain
     uint64_t vttbr;
 
     struct hvm_domain hvm_domain;
-    xen_pfn_t *grant_table_gpfn;
+    gfn_t *grant_table_gfn;
 
     struct vmmio vmmio;
 
diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h
index 5e076cc..eb02423 100644
--- a/xen/include/asm-arm/grant_table.h
+++ b/xen/include/asm-arm/grant_table.h
@@ -30,7 +30,7 @@ static inline int replace_grant_supported(void)
 
 #define gnttab_shared_gmfn(d, t, i)                                      \
     ( ((i >= nr_grant_frames(d->grant_table)) &&                         \
-     (i < max_grant_frames)) ? 0 : (d->arch.grant_table_gpfn[i]))
+     (i < max_grant_frames)) ? 0 : gfn_x(d->arch.grant_table_gfn[i]))
 
 #define gnttab_need_iommu_mapping(d)                    \
     (is_domain_direct_mapped(d) && need_iommu(d))
-- 
1.9.1



* [PATCH v5 04/17] xen: Use the typesafe mfn and gfn in map_mmio_regions...
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (2 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 03/17] xen/arm: Rename grant_table_gfpn into grant_table_gfn and use the typesafe gfn Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 10:23   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel
  Cc: sstabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Julien Grall, Jan Beulich

to avoid mixing machine frames with guest frames.
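
The x86 side keeps the existing trick of OR'ing the two frame numbers
to pick a mapping order (see mmio_order() in the hunks below). As a
worked illustration, not part of the patch: the lowest set bit of
(gfn | mfn) is no higher than the lowest set bit of either operand, so
an order derived from the OR'ed value is aligned for both frames.

    /*
     * gfn_x(start_gfn) = 0x40200   (aligned to 2^9 pages)
     * mfn_x(mfn)       = 0x81e00   (aligned to 2^9 pages)
     * OR'ed value      = 0xc1e00   -> lowest set bit is bit 9, so at
     *                                 most an order-9 (2MB) mapping is
     *                                 correctly aligned for both.
     */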

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

    Changes in v3:
        - Use mfn_add when it is possible
        - Add Jan's acked-by
---
 xen/arch/arm/domain_build.c      |  4 ++--
 xen/arch/arm/gic-v2.c            |  4 ++--
 xen/arch/arm/p2m.c               | 22 +++++++++++-----------
 xen/arch/arm/platforms/exynos5.c |  8 ++++----
 xen/arch/arm/platforms/omap5.c   | 16 ++++++++--------
 xen/arch/arm/vgic-v2.c           |  4 ++--
 xen/arch/x86/mm/p2m.c            | 18 ++++++++++--------
 xen/common/domctl.c              |  4 ++--
 xen/include/xen/p2m-common.h     |  8 ++++----
 9 files changed, 45 insertions(+), 43 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9035486..49185f0 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1036,9 +1036,9 @@ static int map_range_to_domain(const struct dt_device_node *dev,
     if ( need_mapping )
     {
         res = map_mmio_regions(d,
-                               paddr_to_pfn(addr),
+                               _gfn(paddr_to_pfn(addr)),
                                DIV_ROUND_UP(len, PAGE_SIZE),
-                               paddr_to_pfn(addr));
+                               _mfn(paddr_to_pfn(addr)));
         if ( res < 0 )
         {
             printk(XENLOG_ERR "Unable to map 0x%"PRIx64
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 4e2f4c7..3893ece 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -601,9 +601,9 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
                d->domain_id, v2m_data->addr, v2m_data->size,
                v2m_data->spi_start, v2m_data->nr_spis);
 
-        ret = map_mmio_regions(d, paddr_to_pfn(v2m_data->addr),
+        ret = map_mmio_regions(d, _gfn(paddr_to_pfn(v2m_data->addr)),
                             DIV_ROUND_UP(v2m_data->size, PAGE_SIZE),
-                            paddr_to_pfn(v2m_data->addr));
+                            _mfn(paddr_to_pfn(v2m_data->addr)));
         if ( ret )
         {
             printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0395a40..34563bb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1245,27 +1245,27 @@ int unmap_regions_rw_cache(struct domain *d,
 }
 
 int map_mmio_regions(struct domain *d,
-                     unsigned long start_gfn,
+                     gfn_t start_gfn,
                      unsigned long nr,
-                     unsigned long mfn)
+                     mfn_t mfn)
 {
     return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_DEV, 0, p2m_mmio_direct,
                              d->arch.p2m.default_access);
 }
 
 int unmap_mmio_regions(struct domain *d,
-                       unsigned long start_gfn,
+                       gfn_t start_gfn,
                        unsigned long nr,
-                       unsigned long mfn)
+                       mfn_t mfn)
 {
     return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_DEV, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
@@ -1280,7 +1280,7 @@ int map_dev_mmio_region(struct domain *d,
     if ( !(nr && iomem_access_permitted(d, mfn, mfn + nr - 1)) )
         return 0;
 
-    res = map_mmio_regions(d, start_gfn, nr, mfn);
+    res = map_mmio_regions(d, _gfn(start_gfn), nr, _mfn(mfn));
     if ( res < 0 )
     {
         printk(XENLOG_G_ERR "Unable to map [%#lx - %#lx] in Dom%d\n",
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index bf4964d..c43934f 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -83,12 +83,12 @@ static int exynos5_init_time(void)
 static int exynos5250_specific_mapping(struct domain *d)
 {
     /* Map the chip ID */
-    map_mmio_regions(d, paddr_to_pfn(EXYNOS5_PA_CHIPID), 1,
-                     paddr_to_pfn(EXYNOS5_PA_CHIPID));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(EXYNOS5_PA_CHIPID)), 1,
+                     _mfn(paddr_to_pfn(EXYNOS5_PA_CHIPID)));
 
     /* Map the PWM region */
-    map_mmio_regions(d, paddr_to_pfn(EXYNOS5_PA_TIMER), 2,
-                     paddr_to_pfn(EXYNOS5_PA_TIMER));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(EXYNOS5_PA_TIMER)), 2,
+                     _mfn(paddr_to_pfn(EXYNOS5_PA_TIMER)));
 
     return 0;
 }
diff --git a/xen/arch/arm/platforms/omap5.c b/xen/arch/arm/platforms/omap5.c
index a49ba62..539588e 100644
--- a/xen/arch/arm/platforms/omap5.c
+++ b/xen/arch/arm/platforms/omap5.c
@@ -102,20 +102,20 @@ static int omap5_init_time(void)
 static int omap5_specific_mapping(struct domain *d)
 {
     /* Map the PRM module */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_PRM_BASE), 2,
-                     paddr_to_pfn(OMAP5_PRM_BASE));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_PRM_BASE)), 2,
+                     _mfn(paddr_to_pfn(OMAP5_PRM_BASE)));
 
     /* Map the PRM_MPU */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_PRCM_MPU_BASE), 1,
-                     paddr_to_pfn(OMAP5_PRCM_MPU_BASE));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_PRCM_MPU_BASE)), 1,
+                     _mfn(paddr_to_pfn(OMAP5_PRCM_MPU_BASE)));
 
     /* Map the Wakeup Gen */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_WKUPGEN_BASE), 1,
-                     paddr_to_pfn(OMAP5_WKUPGEN_BASE));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_WKUPGEN_BASE)), 1,
+                     _mfn(paddr_to_pfn(OMAP5_WKUPGEN_BASE)));
 
     /* Map the on-chip SRAM */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_SRAM_PA), 32,
-                     paddr_to_pfn(OMAP5_SRAM_PA));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_SRAM_PA)), 32,
+                     _mfn(paddr_to_pfn(OMAP5_SRAM_PA)));
 
     return 0;
 }
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 9adb4a9..cbe61cf 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -688,8 +688,8 @@ static int vgic_v2_domain_init(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, paddr_to_pfn(cbase), csize / PAGE_SIZE,
-                           paddr_to_pfn(vbase));
+    ret = map_mmio_regions(d, _gfn(paddr_to_pfn(cbase)), csize / PAGE_SIZE,
+                           _mfn(paddr_to_pfn(vbase)));
     if ( ret )
         return ret;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 16733a4..6258a5b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2214,9 +2214,9 @@ static unsigned int mmio_order(const struct domain *d,
 #define MAP_MMIO_MAX_ITER 64 /* pretty arbitrary */
 
 int map_mmio_regions(struct domain *d,
-                     unsigned long start_gfn,
+                     gfn_t start_gfn,
                      unsigned long nr,
-                     unsigned long mfn)
+                     mfn_t mfn)
 {
     int ret = 0;
     unsigned long i;
@@ -2229,10 +2229,11 @@ int map_mmio_regions(struct domain *d,
           i += 1UL << order, ++iter )
     {
         /* OR'ing gfn and mfn values will return an order suitable to both. */
-        for ( order = mmio_order(d, (start_gfn + i) | (mfn + i), nr - i); ;
+        for ( order = mmio_order(d, (gfn_x(start_gfn) + i) | (mfn_x(mfn) + i), nr - i); ;
               order = ret - 1 )
         {
-            ret = set_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i), order,
+            ret = set_mmio_p2m_entry(d, gfn_x(start_gfn) + i,
+                                     mfn_add(mfn, i), order,
                                      p2m_get_hostp2m(d)->default_access);
             if ( ret <= 0 )
                 break;
@@ -2246,9 +2247,9 @@ int map_mmio_regions(struct domain *d,
 }
 
 int unmap_mmio_regions(struct domain *d,
-                       unsigned long start_gfn,
+                       gfn_t start_gfn,
                        unsigned long nr,
-                       unsigned long mfn)
+                       mfn_t mfn)
 {
     int ret = 0;
     unsigned long i;
@@ -2261,10 +2262,11 @@ int unmap_mmio_regions(struct domain *d,
           i += 1UL << order, ++iter )
     {
         /* OR'ing gfn and mfn values will return an order suitable to both. */
-        for ( order = mmio_order(d, (start_gfn + i) | (mfn + i), nr - i); ;
+        for ( order = mmio_order(d, (gfn_x(start_gfn) + i) | (mfn_x(mfn) + i), nr - i); ;
               order = ret - 1 )
         {
-            ret = clear_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i), order);
+            ret = clear_mmio_p2m_entry(d, gfn_x(start_gfn) + i,
+                                       mfn_add(mfn, i), order);
             if ( ret <= 0 )
                 break;
             ASSERT(ret <= order);
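
Two details in the x86 map/unmap hunks above are worth spelling out. First, the OR trick: mmio_order() may only return an order that both frame numbers are aligned to (besides fitting nr - i), and OR'ing the gfn and mfn preserves the lowest set bit of either value, so any order that suits the OR'ed result suits both frames. For example, gfn 0x200 | mfn 0x3000 = 0x3200, whose lowest set bit 0x200 caps the order at 9 (512 frames). Second, mfn_add() replaces the open-coded mfn + i; a minimal sketch of that helper, matching the accessor names used throughout the series:

    /* Sketch: offset a typesafe mfn by unwrapping, adding, re-wrapping. */
    static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
    {
        return _mfn(mfn_x(mfn) + i);
    }
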
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index e43904e..b784e6c 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1074,7 +1074,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                    "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
-            ret = map_mmio_regions(d, gfn, nr_mfns, mfn);
+            ret = map_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
             if ( ret < 0 )
                 printk(XENLOG_G_WARNING
                        "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx ret:%ld\n",
@@ -1086,7 +1086,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                    "memory_map:remove: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
-            ret = unmap_mmio_regions(d, gfn, nr_mfns, mfn);
+            ret = unmap_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
             if ( ret < 0 && is_hardware_domain(current->domain) )
                 printk(XENLOG_ERR
                        "memory_map: error %ld removing dom%d access to [%lx,%lx]\n",
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 6374a5b..b4f9077 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -37,13 +37,13 @@ typedef enum {
  * the guest physical address space to map, starting from the machine
  * frame number mfn. */
 int map_mmio_regions(struct domain *d,
-                     unsigned long start_gfn,
+                     gfn_t start_gfn,
                      unsigned long nr,
-                     unsigned long mfn);
+                     mfn_t mfn);
 int unmap_mmio_regions(struct domain *d,
-                       unsigned long start_gfn,
+                       gfn_t start_gfn,
                        unsigned long nr,
-                       unsigned long mfn);
+                       mfn_t mfn);
 
 /*
  * Set access type for a region of gfns.
-- 
1.9.1



* [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (3 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 04/17] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 16:43   ` Andrew Cooper
  2016-06-29  9:39   ` Jan Beulich
  2016-06-28 16:17 ` [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN Julien Grall
                   ` (12 subsequent siblings)
  17 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini, Jan Beulich, Suravee Suthikulpanit

A variable containing a guest frame should be compared to INVALID_GFN,
not INVALID_MFN.
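
With plain unsigned long frame numbers both constants expand to ~0UL, so the mix-up is harmless at runtime; the change is about clarity, and once the typesafe INVALID_MFN/INVALID_GFN definitions later in this series are in place, such a comparison stops compiling. A sketch of what the typesafes catch (gfn_is_invalid() is a hypothetical helper, not part of the patch):

    /* Illustrative only: with distinct gfn_t/mfn_t wrapper types
     * (patches 06-07), mixing the two constants is a build failure. */
    static bool_t gfn_is_invalid(gfn_t gfn)
    {
        return gfn_eq(gfn, INVALID_GFN);   /* correct */
        /* gfn_eq(gfn, INVALID_MFN) would fail to compile: INVALID_MFN
         * is an mfn_t, and gfn_eq() takes two gfn_t arguments. */
    }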

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>

    Changes in v5:
        - Patch added
---
 xen/drivers/passthrough/amd/iommu_map.c | 2 +-
 xen/drivers/passthrough/x86/iommu.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 1b914ba..c758459 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct domain_iommu *hd = dom_iommu(d);
 
-    if ( gfn == INVALID_MFN )
+    if ( gfn == INVALID_GFN )
         return -EADDRNOTAVAIL;
     ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index a18a608..cd435d7 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
 
-            if ( gfn != INVALID_MFN )
+            if ( gfn != INVALID_GFN )
             {
                 ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
                 BUG_ON(SHARED_M2P(gfn));
-- 
1.9.1



* [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (4 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 17:05   ` Andrew Cooper
  2016-07-06 10:27   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN Julien Grall
                   ` (11 subsequent siblings)
  17 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, sstabellini, Jun Nakajima, George Dunlap,
	Liu Jinsong, Christoph Egger, Tim Deegan, Julien Grall,
	Paul Durrant, Jan Beulich, Andrew Cooper, Mukesh Rathor

Define INVALID_MFN using the typesafe mfn_t, so that the compiler
catches accidental mixing of the constant with plain integers or gfns.

Also take the opportunity to convert arch/x86/debug.c to the typesafe
mfn.
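
Since a one-member wrapper struct cannot be compared with ==, the conversion leans on comparison helpers throughout. A minimal sketch of mfn_eq() as relied on in the diff below, matching the mfn_x() accessor from xen/include/xen/mm.h:

    /* Sketch: compare two typesafe mfns by unwrapping both sides. */
    static inline bool_t mfn_eq(mfn_t x, mfn_t y)
    {
        return mfn_x(x) == mfn_x(y);
    }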

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
Cc: Christoph Egger <chegger@amazon.de>
Cc: Liu Jinsong <jinsong.liu@alibaba-inc.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>

    Changes in v5:
        - Patch added
---
 xen/arch/arm/p2m.c              |  4 ++--
 xen/arch/x86/cpu/mcheck/mce.c   |  2 +-
 xen/arch/x86/debug.c            | 50 ++++++++++++++++++++-------------------
 xen/arch/x86/hvm/hvm.c          |  6 ++---
 xen/arch/x86/hvm/viridian.c     |  6 ++---
 xen/arch/x86/hvm/vmx/vmx.c      |  2 +-
 xen/arch/x86/mm/guest_walk.c    |  4 ++--
 xen/arch/x86/mm/hap/hap.c       |  4 ++--
 xen/arch/x86/mm/p2m-ept.c       |  6 ++---
 xen/arch/x86/mm/p2m-pod.c       | 18 +++++++-------
 xen/arch/x86/mm/p2m-pt.c        | 18 +++++++-------
 xen/arch/x86/mm/p2m.c           | 52 ++++++++++++++++++++---------------------
 xen/arch/x86/mm/paging.c        | 12 +++++-----
 xen/arch/x86/mm/shadow/common.c | 44 +++++++++++++++++-----------------
 xen/arch/x86/mm/shadow/multi.c  | 36 ++++++++++++++--------------
 xen/common/domain.c             |  6 ++---
 xen/common/grant_table.c        |  6 ++---
 xen/include/xen/mm.h            |  2 +-
 18 files changed, 140 insertions(+), 138 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 34563bb..d690602 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1461,7 +1461,7 @@ int relinquish_p2m_mapping(struct domain *d)
     return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
-                              pfn_to_paddr(INVALID_MFN),
+                              pfn_to_paddr(mfn_x(INVALID_MFN)),
                               MATTR_MEM, 0, p2m_invalid,
                               d->arch.p2m.default_access);
 }
@@ -1476,7 +1476,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
     return apply_p2m_changes(d, CACHEFLUSH,
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
-                             pfn_to_paddr(INVALID_MFN),
+                             pfn_to_paddr(mfn_x(INVALID_MFN)),
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index edcbe48..2695b0c 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1455,7 +1455,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
                 gfn = PFN_DOWN(gaddr);
                 mfn = mfn_x(get_gfn(d, gfn, &t));
 
-                if ( mfn == INVALID_MFN )
+                if ( mfn == mfn_x(INVALID_MFN) )
                 {
                     put_gfn(d, gfn);
                     put_domain(d);
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 58cae22..3479f7c 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -43,11 +43,11 @@ typedef unsigned long dbgva_t;
 typedef unsigned char dbgbyte_t;
 
 /* Returns: mfn for the given (hvm guest) vaddr */
-static unsigned long 
+static mfn_t
 dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
                 unsigned long *gfn)
 {
-    unsigned long mfn;
+    mfn_t mfn;
     uint32_t pfec = PFEC_page_present;
     p2m_type_t gfntype;
 
@@ -60,16 +60,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
         return INVALID_MFN;
     }
 
-    mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
+    mfn = get_gfn(dp, *gfn, &gfntype);
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
         mfn = INVALID_MFN;
     }
     else
-        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n",
+              vaddr, dp->domain_id, mfn_x(mfn));
 
-    if ( mfn == INVALID_MFN )
+    if ( mfn_eq(mfn, INVALID_MFN) )
     {
         put_gfn(dp, *gfn);
         *gfn = INVALID_GFN;
@@ -91,7 +92,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
  *       mode.
  * Returns: mfn for the given (pv guest) vaddr 
  */
-static unsigned long 
+static mfn_t
 dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
 {
     l4_pgentry_t l4e, *l4t;
@@ -99,31 +100,31 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2_pgentry_t l2e, *l2t;
     l1_pgentry_t l1e, *l1t;
     unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
-    unsigned long mfn = cr3 >> PAGE_SHIFT;
+    mfn_t mfn = _mfn(cr3 >> PAGE_SHIFT);
 
     DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
           cr3, pgd3val);
 
     if ( pgd3val == 0 )
     {
-        l4t = map_domain_page(_mfn(mfn));
+        l4t = map_domain_page(mfn);
         l4e = l4t[l4_table_offset(vaddr)];
         unmap_domain_page(l4t);
-        mfn = l4e_get_pfn(l4e);
+        mfn = _mfn(l4e_get_pfn(l4e));
         DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
-              l4_table_offset(vaddr), l4e, mfn);
+              l4_table_offset(vaddr), l4e, mfn_x(mfn));
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
             DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
 
-        l3t = map_domain_page(_mfn(mfn));
+        l3t = map_domain_page(mfn);
         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
-        mfn = l3e_get_pfn(l3e);
+        mfn = _mfn(l3e_get_pfn(l3e));
         DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+              l3_table_offset(vaddr), l3e, mfn_x(mfn));
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -132,26 +133,26 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         }
     }
 
-    l2t = map_domain_page(_mfn(mfn));
+    l2t = map_domain_page(mfn);
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
-    mfn = l2e_get_pfn(l2e);
+    mfn = _mfn(l2e_get_pfn(l2e));
     DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+          l2e, mfn_x(mfn));
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
         DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
         return INVALID_MFN;
     }
-    l1t = map_domain_page(_mfn(mfn));
+    l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
-    mfn = l1e_get_pfn(l1e);
+    mfn = _mfn(l1e_get_pfn(l1e));
     DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+          l1e, mfn_x(mfn));
 
-    return mfn_valid(mfn) ? mfn : INVALID_MFN;
+    return mfn_valid(mfn_x(mfn)) ? mfn : INVALID_MFN;
 }
 
 /* Returns: number of bytes remaining to be copied */
@@ -163,23 +164,24 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
     {
         char *va;
         unsigned long addr = (unsigned long)gaddr;
-        unsigned long mfn, gfn = INVALID_GFN, pagecnt;
+        mfn_t mfn;
+        unsigned long gfn = INVALID_GFN, pagecnt;
 
         pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
 
         mfn = (has_hvm_container_domain(dp)
                ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
                : dbg_pv_va2mfn(addr, dp, pgd3));
-        if ( mfn == INVALID_MFN ) 
+        if ( mfn_eq(mfn, INVALID_MFN) )
             break;
 
-        va = map_domain_page(_mfn(mfn));
+        va = map_domain_page(mfn);
         va = va + (addr & (PAGE_SIZE-1));
 
         if ( toaddr )
         {
             copy_from_user(va, buf, pagecnt);    /* va = buf */
-            paging_mark_dirty(dp, mfn);
+            paging_mark_dirty(dp, mfn_x(mfn));
         }
         else
         {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c89ab6e..a929e3b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1796,7 +1796,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
         p2m = hostp2m;
 
     /* Check access permissions first, then handle faults */
-    if ( mfn_x(mfn) != INVALID_MFN )
+    if ( !mfn_eq(mfn, INVALID_MFN) )
     {
         bool_t violation;
 
@@ -5299,8 +5299,8 @@ static int do_altp2m_op(
             rc = -EINVAL;
 
         if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
-             (mfn_x(get_gfn_query_unlocked(curr->domain,
-                    a.u.enable_notify.gfn, &p2mt)) == INVALID_MFN) )
+             (mfn_eq(get_gfn_query_unlocked(curr->domain,
+                     a.u.enable_notify.gfn, &p2mt), INVALID_MFN)) )
             return -EINVAL;
 
         vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 8253fd0..575acdd 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -196,7 +196,7 @@ static void enable_hypercall_page(struct domain *d)
         if ( page )
             put_page(page);
         gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
-                 page ? page_to_mfn(page) : INVALID_MFN);
+                 page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
         return;
     }
 
@@ -269,7 +269,7 @@ static void initialize_apic_assist(struct vcpu *v)
 
  fail:
     gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
-             page ? page_to_mfn(page) : INVALID_MFN);
+             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
 }
 
 static void teardown_apic_assist(struct vcpu *v)
@@ -349,7 +349,7 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
         if ( page )
             put_page(page);
         gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
-                 page ? page_to_mfn(page) : INVALID_MFN);
+                 page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
         return;
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9d4121e..888a6b1 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2025,7 +2025,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
 
             mfn = get_gfn_query_unlocked(d, gfn_x(vcpu_altp2m(v).veinfo_gfn), &t);
 
-            if ( mfn_x(mfn) != INVALID_MFN )
+            if ( !mfn_eq(mfn, INVALID_MFN) )
                 __vmwrite(VIRT_EXCEPTION_INFO, mfn_x(mfn) << PAGE_SHIFT);
             else
                 v->arch.hvm_vmx.secondary_exec_control &=
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index e850502..868e909 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -281,7 +281,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
         start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
                      ((va >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
         gw->l1e = guest_l1e_from_gfn(start, flags);
-        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
+        gw->l2mfn = gw->l1mfn = INVALID_MFN;
         goto set_ad;
     }
 
@@ -356,7 +356,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
         start = _gfn((gfn_x(start) & ~GUEST_L2_GFN_MASK) +
                      guest_l1_table_offset(va));
         gw->l1e = guest_l1e_from_gfn(start, flags);
-        gw->l1mfn = _mfn(INVALID_MFN);
+        gw->l1mfn = INVALID_MFN;
     } 
     else 
     {
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9c2cd49..3218fa2 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -430,7 +430,7 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
  oom:
     HAP_ERROR("out of memory building monitor pagetable\n");
     domain_crash(d);
-    return _mfn(INVALID_MFN);
+    return INVALID_MFN;
 }
 
 static void hap_destroy_monitor_table(struct vcpu* v, mfn_t mmfn)
@@ -509,7 +509,7 @@ int hap_enable(struct domain *d, u32 mode)
         }
 
         for ( i = 0; i < MAX_EPTP; i++ )
-            d->arch.altp2m_eptp[i] = INVALID_MFN;
+            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 7166c71..6d03736 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -50,7 +50,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
 {
     int rc;
-    unsigned long oldmfn = INVALID_MFN;
+    unsigned long oldmfn = mfn_x(INVALID_MFN);
     bool_t check_foreign = (new.mfn != entryptr->mfn ||
                             new.sa_p2mt != entryptr->sa_p2mt);
 
@@ -91,7 +91,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
 
     write_atomic(&entryptr->epte, new.epte);
 
-    if ( unlikely(oldmfn != INVALID_MFN) )
+    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
         put_page(mfn_to_page(oldmfn));
 
     rc = 0;
@@ -887,7 +887,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
     int i;
     int ret = 0;
     bool_t recalc = 0;
-    mfn_t mfn = _mfn(INVALID_MFN);
+    mfn_t mfn = INVALID_MFN;
     struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index b7ab169..f384589 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -559,7 +559,7 @@ p2m_pod_decrease_reservation(struct domain *d,
     {
         /* All PoD: Mark the whole region invalid and tell caller
          * we're done. */
-        p2m_set_entry(p2m, gpfn, _mfn(INVALID_MFN), order, p2m_invalid,
+        p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
         p2m->pod.entry_count-=(1<<order);
         BUG_ON(p2m->pod.entry_count < 0);
@@ -602,7 +602,7 @@ p2m_pod_decrease_reservation(struct domain *d,
         n = 1UL << cur_order;
         if ( t == p2m_populate_on_demand )
         {
-            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
+            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m->pod.entry_count -= n;
             BUG_ON(p2m->pod.entry_count < 0);
@@ -624,7 +624,7 @@ p2m_pod_decrease_reservation(struct domain *d,
 
             page = mfn_to_page(mfn);
 
-            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
+            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m_tlb_flush_sync(p2m);
             for ( j = 0; j < n; ++j )
@@ -671,7 +671,7 @@ void p2m_pod_dump_data(struct domain *d)
 static int
 p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 {
-    mfn_t mfn, mfn0 = _mfn(INVALID_MFN);
+    mfn_t mfn, mfn0 = INVALID_MFN;
     p2m_type_t type, type0 = 0;
     unsigned long * map = NULL;
     int ret=0, reset = 0;
@@ -754,7 +754,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     }
 
     /* Try to remove the page, restoring old mapping if it fails. */
-    p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_2M,
+    p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_2M,
                   p2m_populate_on_demand, p2m->default_access);
     p2m_tlb_flush_sync(p2m);
 
@@ -871,7 +871,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         }
 
         /* Try to remove the page, restoring old mapping if it fails. */
-        p2m_set_entry(p2m, gfns[i], _mfn(INVALID_MFN), PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
 
         /* See if the page was successfully unmapped.  (Allow one refcount
@@ -1073,7 +1073,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
          * NOTE: In a fine-grained p2m locking scenario this operation
          * may need to promote its locking from gfn->1g superpage
          */
-        p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
+        p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
                       p2m_populate_on_demand, p2m->default_access);
         return 0;
     }
@@ -1157,7 +1157,7 @@ remap_and_retry:
      * need promoting the gfn lock from gfn->2M superpage */
     gfn_aligned = (gfn>>order)<<order;
     for(i=0; i<(1<<order); i++)
-        p2m_set_entry(p2m, gfn_aligned + i, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
     if ( tb_init_done )
     {
@@ -1215,7 +1215,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
     }
 
     /* Now, actually do the two-way mapping */
-    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order,
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
                        p2m_populate_on_demand, p2m->default_access);
     if ( rc == 0 )
     {
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 4980934..2b6e89e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -511,7 +511,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * the intermediate one might be).
      */
     unsigned int flags, iommu_old_flags = 0;
-    unsigned long old_mfn = INVALID_MFN;
+    unsigned long old_mfn = mfn_x(INVALID_MFN);
 
     ASSERT(sve != 0);
 
@@ -764,7 +764,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
                      p2m->max_mapped_pfn )
                     break;
         }
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
 
     mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
@@ -777,7 +777,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
         if ( (l4e_get_flags(*l4e) & _PAGE_PRESENT) == 0 )
         {
             unmap_domain_page(l4e);
-            return _mfn(INVALID_MFN);
+            return INVALID_MFN;
         }
         mfn = _mfn(l4e_get_pfn(*l4e));
         recalc = needs_recalc(l4, *l4e);
@@ -805,7 +805,7 @@ pod_retry_l3:
                     *t = p2m_populate_on_demand;
             }
             unmap_domain_page(l3e);
-            return _mfn(INVALID_MFN);
+            return INVALID_MFN;
         }
         if ( flags & _PAGE_PSE )
         {
@@ -817,7 +817,7 @@ pod_retry_l3:
             unmap_domain_page(l3e);
 
             ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
-            return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
+            return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
         }
 
         mfn = _mfn(l3e_get_pfn(*l3e));
@@ -846,7 +846,7 @@ pod_retry_l2:
         }
     
         unmap_domain_page(l2e);
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
     if ( flags & _PAGE_PSE )
     {
@@ -856,7 +856,7 @@ pod_retry_l2:
         unmap_domain_page(l2e);
         
         ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
-        return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
+        return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
     }
 
     mfn = _mfn(l2e_get_pfn(*l2e));
@@ -885,14 +885,14 @@ pod_retry_l1:
         }
     
         unmap_domain_page(l1e);
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
     mfn = _mfn(l1e_get_pfn(*l1e));
     *t = recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
     unmap_domain_page(l1e);
 
     ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
-    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : _mfn(INVALID_MFN);
+    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : INVALID_MFN;
 }
 
 static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6258a5b..6f90510 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -388,7 +388,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
     if (unlikely((p2m_is_broken(*t))))
     {
         /* Return invalid_mfn to avoid caller's access */
-        mfn = _mfn(INVALID_MFN);
+        mfn = INVALID_MFN;
         if ( q & P2M_ALLOC )
             domain_crash(p2m->domain);
     }
@@ -493,7 +493,7 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
             rc = set_rc;
 
         gfn += 1ul << order;
-        if ( mfn_x(mfn) != INVALID_MFN )
+        if ( !mfn_eq(mfn, INVALID_MFN) )
             mfn = _mfn(mfn_x(mfn) + (1ul << order));
         todo -= 1ul << order;
     }
@@ -580,7 +580,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     /* Initialise physmap tables for slot zero. Other code assumes this. */
     p2m->defer_nested_flush = 1;
-    rc = p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+    rc = p2m_set_entry(p2m, 0, INVALID_MFN, PAGE_ORDER_4K,
                        p2m_invalid, p2m->default_access);
     p2m->defer_nested_flush = 0;
     p2m_unlock(p2m);
@@ -670,7 +670,7 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
             ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
         }
     }
-    return p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid,
+    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
                          p2m->default_access);
 }
 
@@ -840,7 +840,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     {
         gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
                  gfn_x(gfn), mfn_x(mfn));
-        rc = p2m_set_entry(p2m, gfn_x(gfn), _mfn(INVALID_MFN), page_order,
+        rc = p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, page_order,
                            p2m_invalid, p2m->default_access);
         if ( rc == 0 )
         {
@@ -1107,7 +1107,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     }
 
     /* Do not use mfn_valid() here as it will usually fail for MMIO pages. */
-    if ( (INVALID_MFN == mfn_x(actual_mfn)) || (t != p2m_mmio_direct) )
+    if ( (mfn_eq(actual_mfn, INVALID_MFN)) || (t != p2m_mmio_direct) )
     {
         gdprintk(XENLOG_ERR,
                  "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t);
@@ -1117,7 +1117,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
         gdprintk(XENLOG_WARNING,
                  "no mapping between mfn %08lx and gfn %08lx\n",
                  mfn_x(mfn), gfn);
-    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order, p2m_invalid,
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid,
                        p2m->default_access);
 
  out:
@@ -1146,7 +1146,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
     mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
     if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
     {
-        ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+        ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
                             p2m_invalid, p2m->default_access);
         gfn_unlock(p2m, gfn, 0);
     }
@@ -1316,7 +1316,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
         put_page(page);
 
     /* Remove mapping from p2m table */
-    ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+    ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
                         p2m_ram_paged, a);
 
     /* Clear content before returning the page to Xen */
@@ -1844,7 +1844,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= MAX_ALTP2M ||
-             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
+             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
             return -EINVAL;
 
         ap2m = d->arch.altp2m_p2m[altp2m_idx];
@@ -1942,7 +1942,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
     mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
     gfn_unlock(p2m, gfn, 0);
 
-    if ( mfn_x(mfn) == INVALID_MFN )
+    if ( mfn_eq(mfn, INVALID_MFN) )
         return -ESRCH;
     
     if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
@@ -2288,7 +2288,7 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
         p2m = d->arch.altp2m_p2m[i];
@@ -2315,7 +2315,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
     {
         if ( idx != vcpu_altp2m(v).p2midx )
         {
@@ -2359,14 +2359,14 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
                               0, &page_order);
     __put_gfn(*ap2m, gfn_x(gfn));
 
-    if ( mfn_x(mfn) != INVALID_MFN )
+    if ( !mfn_eq(mfn, INVALID_MFN) )
         return 0;
 
     mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
                               P2M_ALLOC | P2M_UNSHARE, &page_order);
     __put_gfn(hp2m, gfn_x(gfn));
 
-    if ( mfn_x(mfn) == INVALID_MFN )
+    if ( mfn_eq(mfn, INVALID_MFN) )
         return 0;
 
     p2m_lock(*ap2m);
@@ -2404,7 +2404,7 @@ void p2m_flush_altp2m(struct domain *d)
         /* Uninit and reinit ept to force TLB shootdown */
         ept_p2m_uninit(d->arch.altp2m_p2m[i]);
         ept_p2m_init(d->arch.altp2m_p2m[i]);
-        d->arch.altp2m_eptp[i] = INVALID_MFN;
+        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
     }
 
     altp2m_list_unlock(d);
@@ -2431,7 +2431,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] == INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
     {
         p2m_init_altp2m_helper(d, idx);
         rc = 0;
@@ -2450,7 +2450,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] != INVALID_MFN )
+        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             continue;
 
         p2m_init_altp2m_helper(d, i);
@@ -2476,7 +2476,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
     {
         p2m = d->arch.altp2m_p2m[idx];
 
@@ -2486,7 +2486,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
             /* Uninit and reinit ept to force TLB shootdown */
             ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
             ept_p2m_init(d->arch.altp2m_p2m[idx]);
-            d->arch.altp2m_eptp[idx] = INVALID_MFN;
+            d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
             rc = 0;
         }
     }
@@ -2510,7 +2510,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
     {
         for_each_vcpu( d, v )
             if ( idx != vcpu_altp2m(v).p2midx )
@@ -2541,7 +2541,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     unsigned int page_order;
     int rc = -EINVAL;
 
-    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == INVALID_MFN )
+    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
         return rc;
 
     hp2m = p2m_get_hostp2m(d);
@@ -2636,14 +2636,14 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
         p2m = d->arch.altp2m_p2m[i];
         m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
 
         /* Check for a dropped page that may impact this altp2m */
-        if ( mfn_x(mfn) == INVALID_MFN &&
+        if ( mfn_eq(mfn, INVALID_MFN) &&
              gfn_x(gfn) >= p2m->min_remapped_gfn &&
              gfn_x(gfn) <= p2m->max_remapped_gfn )
         {
@@ -2660,7 +2660,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
-                         d->arch.altp2m_eptp[i] == INVALID_MFN )
+                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
                         continue;
 
                     p2m = d->arch.altp2m_p2m[i];
@@ -2672,7 +2672,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 goto out;
             }
         }
-        else if ( mfn_x(m) != INVALID_MFN )
+        else if ( !mfn_eq(m, INVALID_MFN) )
             p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, p2mt, p2ma);
 
         __put_gfn(p2m, gfn_x(gfn));
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 8219bb6..107fc8c 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -67,7 +67,7 @@ static mfn_t paging_new_log_dirty_page(struct domain *d)
     if ( unlikely(page == NULL) )
     {
         d->arch.paging.log_dirty.failed_allocs++;
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
 
     d->arch.paging.log_dirty.allocs++;
@@ -95,7 +95,7 @@ static mfn_t paging_new_log_dirty_node(struct domain *d)
         int i;
         mfn_t *node = map_domain_page(mfn);
         for ( i = 0; i < LOGDIRTY_NODE_ENTRIES; i++ )
-            node[i] = _mfn(INVALID_MFN);
+            node[i] = INVALID_MFN;
         unmap_domain_page(node);
     }
     return mfn;
@@ -167,7 +167,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
 
             unmap_domain_page(l2);
             paging_free_log_dirty_page(d, l3[i3]);
-            l3[i3] = _mfn(INVALID_MFN);
+            l3[i3] = INVALID_MFN;
 
             if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
             {
@@ -182,7 +182,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
         if ( rc )
             break;
         paging_free_log_dirty_page(d, l4[i4]);
-        l4[i4] = _mfn(INVALID_MFN);
+        l4[i4] = INVALID_MFN;
 
         if ( i4 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
         {
@@ -198,7 +198,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
     if ( !rc )
     {
         paging_free_log_dirty_page(d, d->arch.paging.log_dirty.top);
-        d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
+        d->arch.paging.log_dirty.top = INVALID_MFN;
 
         ASSERT(d->arch.paging.log_dirty.allocs == 0);
         d->arch.paging.log_dirty.failed_allocs = 0;
@@ -660,7 +660,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags)
     /* This must be initialized separately from the rest of the
      * log-dirty init code as that can be called more than once and we
      * don't want to leak any active log-dirty bitmaps */
-    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
+    d->arch.paging.log_dirty.top = INVALID_MFN;
 
     /*
      * Shadow pagetables are the default, but we will use
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 226e32d..c51b370 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -88,10 +88,10 @@ void shadow_vcpu_init(struct vcpu *v)
 
     for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
     {
-        v->arch.paging.shadow.oos[i] = _mfn(INVALID_MFN);
-        v->arch.paging.shadow.oos_snapshot[i] = _mfn(INVALID_MFN);
+        v->arch.paging.shadow.oos[i] = INVALID_MFN;
+        v->arch.paging.shadow.oos_snapshot[i] = INVALID_MFN;
         for ( j = 0; j < SHADOW_OOS_FIXUPS; j++ )
-            v->arch.paging.shadow.oos_fixup[i].smfn[j] = _mfn(INVALID_MFN);
+            v->arch.paging.shadow.oos_fixup[i].smfn[j] = INVALID_MFN;
     }
 #endif
 
@@ -593,12 +593,12 @@ static inline int oos_fixup_flush_gmfn(struct vcpu *v, mfn_t gmfn,
     int i;
     for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
     {
-        if ( mfn_x(fixup->smfn[i]) != INVALID_MFN )
+        if ( !mfn_eq(fixup->smfn[i], INVALID_MFN) )
         {
             sh_remove_write_access_from_sl1p(d, gmfn,
                                              fixup->smfn[i],
                                              fixup->off[i]);
-            fixup->smfn[i] = _mfn(INVALID_MFN);
+            fixup->smfn[i] = INVALID_MFN;
         }
     }
 
@@ -636,7 +636,7 @@ void oos_fixup_add(struct domain *d, mfn_t gmfn,
 
             next = oos_fixup[idx].next;
 
-            if ( mfn_x(oos_fixup[idx].smfn[next]) != INVALID_MFN )
+            if ( !mfn_eq(oos_fixup[idx].smfn[next], INVALID_MFN) )
             {
                 TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_EVICT);
 
@@ -757,7 +757,7 @@ static void oos_hash_add(struct vcpu *v, mfn_t gmfn)
     struct oos_fixup fixup = { .next = 0 };
 
     for (i = 0; i < SHADOW_OOS_FIXUPS; i++ )
-        fixup.smfn[i] = _mfn(INVALID_MFN);
+        fixup.smfn[i] = INVALID_MFN;
 
     idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
     oidx = idx;
@@ -807,7 +807,7 @@ static void oos_hash_remove(struct domain *d, mfn_t gmfn)
             idx = (idx + 1) % SHADOW_OOS_PAGES;
         if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
         {
-            oos[idx] = _mfn(INVALID_MFN);
+            oos[idx] = INVALID_MFN;
             return;
         }
     }
@@ -838,7 +838,7 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
 
     SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
     BUG();
-    return _mfn(INVALID_MFN);
+    return INVALID_MFN;
 }
 
 /* Pull a single guest page back into sync */
@@ -862,7 +862,7 @@ void sh_resync(struct domain *d, mfn_t gmfn)
         if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
         {
             _sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
-            oos[idx] = _mfn(INVALID_MFN);
+            oos[idx] = INVALID_MFN;
             return;
         }
     }
@@ -914,7 +914,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
         {
             /* Write-protect and sync contents */
             _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
-            oos[idx] = _mfn(INVALID_MFN);
+            oos[idx] = INVALID_MFN;
         }
 
  resync_others:
@@ -948,7 +948,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
             {
                 /* Write-protect and sync contents */
                 _sh_resync(other, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
-                oos[idx] = _mfn(INVALID_MFN);
+                oos[idx] = INVALID_MFN;
             }
         }
     }
@@ -1784,7 +1784,7 @@ void *sh_emulate_map_dest(struct vcpu *v, unsigned long vaddr,
     if ( likely(((vaddr + bytes - 1) & PAGE_MASK) == (vaddr & PAGE_MASK)) )
     {
         /* Whole write fits on a single page. */
-        sh_ctxt->mfn[1] = _mfn(INVALID_MFN);
+        sh_ctxt->mfn[1] = INVALID_MFN;
         map = map_domain_page(sh_ctxt->mfn[0]) + (vaddr & ~PAGE_MASK);
     }
     else if ( !is_hvm_domain(d) )
@@ -2086,7 +2086,7 @@ mfn_t shadow_hash_lookup(struct domain *d, unsigned long n, unsigned int t)
     }
 
     perfc_incr(shadow_hash_lookup_miss);
-    return _mfn(INVALID_MFN);
+    return INVALID_MFN;
 }
 
 void shadow_hash_insert(struct domain *d, unsigned long n, unsigned int t,
@@ -2910,7 +2910,7 @@ void sh_reset_l3_up_pointers(struct vcpu *v)
     };
     static const unsigned int callback_mask = SHF_L3_64;
 
-    hash_vcpu_foreach(v, callback_mask, callbacks, _mfn(INVALID_MFN));
+    hash_vcpu_foreach(v, callback_mask, callbacks, INVALID_MFN);
 }
 
 
@@ -2940,7 +2940,7 @@ static void sh_update_paging_modes(struct vcpu *v)
 #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB) */
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
-    if ( mfn_x(v->arch.paging.shadow.oos_snapshot[0]) == INVALID_MFN )
+    if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
@@ -3284,7 +3284,7 @@ void shadow_teardown(struct domain *d, int *preempted)
                 if ( mfn_valid(oos_snapshot[i]) )
                 {
                     shadow_free(d, oos_snapshot[i]);
-                    oos_snapshot[i] = _mfn(INVALID_MFN);
+                    oos_snapshot[i] = INVALID_MFN;
                 }
         }
 #endif /* OOS */
@@ -3449,7 +3449,7 @@ static int shadow_one_bit_disable(struct domain *d, u32 mode)
                     if ( mfn_valid(oos_snapshot[i]) )
                     {
                         shadow_free(d, oos_snapshot[i]);
-                        oos_snapshot[i] = _mfn(INVALID_MFN);
+                        oos_snapshot[i] = INVALID_MFN;
                     }
             }
 #endif /* OOS */
@@ -3744,7 +3744,7 @@ int shadow_track_dirty_vram(struct domain *d,
         memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
     else
     {
-        unsigned long map_mfn = INVALID_MFN;
+        unsigned long map_mfn = mfn_x(INVALID_MFN);
         void *map_sl1p = NULL;
 
         /* Iterate over VRAM to track dirty bits. */
@@ -3754,7 +3754,7 @@ int shadow_track_dirty_vram(struct domain *d,
             int dirty = 0;
             paddr_t sl1ma = dirty_vram->sl1ma[i];
 
-            if (mfn_x(mfn) == INVALID_MFN)
+            if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 dirty = 1;
             }
@@ -3830,7 +3830,7 @@ int shadow_track_dirty_vram(struct domain *d,
             for ( i = begin_pfn; i < end_pfn; i++ )
             {
                 mfn_t mfn = get_gfn_query_unlocked(d, i, &t);
-                if ( mfn_x(mfn) != INVALID_MFN )
+                if ( !mfn_eq(mfn, INVALID_MFN) )
                     flush_tlb |= sh_remove_write_access(d, mfn, 1, 0);
             }
             dirty_vram->last_dirty = -1;
@@ -3968,7 +3968,7 @@ void shadow_audit_tables(struct vcpu *v)
         }
     }
 
-    hash_vcpu_foreach(v, mask, callbacks, _mfn(INVALID_MFN));
+    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN);
 }
 
 #endif /* Shadow audit */
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index dfe59a2..f892e2f 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -177,7 +177,7 @@ sh_walk_guest_tables(struct vcpu *v, unsigned long va, walk_t *gw,
 {
     return guest_walk_tables(v, p2m_get_hostp2m(v->domain), va, gw, pfec,
 #if GUEST_PAGING_LEVELS == 3 /* PAE */
-                             _mfn(INVALID_MFN),
+                             INVALID_MFN,
                              v->arch.paging.shadow.gl3e
 #else /* 32 or 64 */
                              pagetable_get_mfn(v->arch.guest_table),
@@ -336,32 +336,32 @@ static void sh_audit_gw(struct vcpu *v, walk_t *gw)
     if ( mfn_valid(gw->l4mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l4mfn,
                                                 SH_type_l4_shadow))) )
-        (void) sh_audit_l4_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_l4_table(v, smfn, INVALID_MFN);
     if ( mfn_valid(gw->l3mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l3mfn,
                                                 SH_type_l3_shadow))) )
-        (void) sh_audit_l3_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_l3_table(v, smfn, INVALID_MFN);
 #endif /* PAE or 64... */
     if ( mfn_valid(gw->l2mfn) )
     {
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
+            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
 #if GUEST_PAGING_LEVELS == 3
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2h_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
+            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
 #endif
     }
     if ( mfn_valid(gw->l1mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l1mfn,
                                                 SH_type_l1_shadow))) )
-        (void) sh_audit_l1_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_l1_table(v, smfn, INVALID_MFN);
     else if ( (guest_l2e_get_flags(gw->l2e) & _PAGE_PRESENT)
               && (guest_l2e_get_flags(gw->l2e) & _PAGE_PSE)
               && mfn_valid(
               (smfn = get_fl1_shadow_status(d, guest_l2e_get_gfn(gw->l2e)))) )
-        (void) sh_audit_fl1_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_fl1_table(v, smfn, INVALID_MFN);
 }
 
 #else
@@ -1752,7 +1752,7 @@ static shadow_l2e_t * shadow_get_and_create_l2e(struct vcpu *v,
 {
 #if GUEST_PAGING_LEVELS >= 4 /* 64bit... */
     struct domain *d = v->domain;
-    mfn_t sl3mfn = _mfn(INVALID_MFN);
+    mfn_t sl3mfn = INVALID_MFN;
     shadow_l3e_t *sl3e;
     if ( !mfn_valid(gw->l2mfn) ) return NULL; /* No guest page. */
     /* Get the l3e */
@@ -2158,7 +2158,7 @@ static int validate_gl4e(struct vcpu *v, void *new_ge, mfn_t sl4mfn, void *se)
     shadow_l4e_t new_sl4e;
     guest_l4e_t new_gl4e = *(guest_l4e_t *)new_ge;
     shadow_l4e_t *sl4p = se;
-    mfn_t sl3mfn = _mfn(INVALID_MFN);
+    mfn_t sl3mfn = INVALID_MFN;
     struct domain *d = v->domain;
     p2m_type_t p2mt;
     int result = 0;
@@ -2217,7 +2217,7 @@ static int validate_gl3e(struct vcpu *v, void *new_ge, mfn_t sl3mfn, void *se)
     shadow_l3e_t new_sl3e;
     guest_l3e_t new_gl3e = *(guest_l3e_t *)new_ge;
     shadow_l3e_t *sl3p = se;
-    mfn_t sl2mfn = _mfn(INVALID_MFN);
+    mfn_t sl2mfn = INVALID_MFN;
     p2m_type_t p2mt;
     int result = 0;
 
@@ -2250,7 +2250,7 @@ static int validate_gl2e(struct vcpu *v, void *new_ge, mfn_t sl2mfn, void *se)
     shadow_l2e_t new_sl2e;
     guest_l2e_t new_gl2e = *(guest_l2e_t *)new_ge;
     shadow_l2e_t *sl2p = se;
-    mfn_t sl1mfn = _mfn(INVALID_MFN);
+    mfn_t sl1mfn = INVALID_MFN;
     p2m_type_t p2mt;
     int result = 0;
 
@@ -2608,7 +2608,7 @@ static inline void check_for_early_unshadow(struct vcpu *v, mfn_t gmfn)
 static inline void reset_early_unshadow(struct vcpu *v)
 {
 #if SHADOW_OPTIMIZATIONS & SHOPT_EARLY_UNSHADOW
-    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = INVALID_MFN;
+    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = mfn_x(INVALID_MFN);
 #endif
 }
 
@@ -4105,10 +4105,10 @@ sh_update_cr3(struct vcpu *v, int do_locking)
                                            ? SH_type_l2h_shadow
                                            : SH_type_l2_shadow);
                 else
-                    sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
+                    sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
             }
             else
-                sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
+                sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
         }
     }
 #elif GUEST_PAGING_LEVELS == 4
@@ -4531,7 +4531,7 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
 
         if ( fast_path ) {
             if ( pagetable_is_null(v->arch.shadow_table[i]) )
-                smfn = _mfn(INVALID_MFN);
+                smfn = INVALID_MFN;
             else
                 smfn = _mfn(pagetable_get_pfn(v->arch.shadow_table[i]));
         }
@@ -4540,8 +4540,8 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
             /* retrieving the l2s */
             gmfn = get_gfn_query_unlocked(d, gfn_x(guest_l3e_get_gfn(gl3e[i])),
                                           &p2mt);
-            smfn = unlikely(mfn_x(gmfn) == INVALID_MFN)
-                   ? _mfn(INVALID_MFN)
+            smfn = unlikely(mfn_eq(gmfn, INVALID_MFN))
+                   ? INVALID_MFN
                    : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
         }
 
@@ -4846,7 +4846,7 @@ int sh_audit_fl1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
 {
     guest_l1e_t *gl1e, e;
     shadow_l1e_t *sl1e;
-    mfn_t gl1mfn = _mfn(INVALID_MFN);
+    mfn_t gl1mfn = INVALID_MFN;
     int f;
     int done = 0;
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 45273d4..42c07ee 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -117,7 +117,7 @@ static void vcpu_info_reset(struct vcpu *v)
     v->vcpu_info = ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
                     ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
                     : &dummy_vcpu_info);
-    v->vcpu_info_mfn = INVALID_MFN;
+    v->vcpu_info_mfn = mfn_x(INVALID_MFN);
 }
 
 struct vcpu *alloc_vcpu(
@@ -1141,7 +1141,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
         return -EINVAL;
 
-    if ( v->vcpu_info_mfn != INVALID_MFN )
+    if ( v->vcpu_info_mfn != mfn_x(INVALID_MFN) )
         return -EINVAL;
 
     /* Run this command on yourself or on other offline VCPUS. */
@@ -1205,7 +1205,7 @@ void unmap_vcpu_info(struct vcpu *v)
 {
     unsigned long mfn;
 
-    if ( v->vcpu_info_mfn == INVALID_MFN )
+    if ( v->vcpu_info_mfn == mfn_x(INVALID_MFN) )
         return;
 
     mfn = v->vcpu_info_mfn;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 3f15543..ecace07 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -244,7 +244,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
                               (readonly) ? P2M_ALLOC : P2M_UNSHARE);
     if ( !(*page) )
     {
-        *frame = INVALID_MFN;
+        *frame = mfn_x(INVALID_MFN);
         if ( p2m_is_shared(p2mt) )
             return GNTST_eagain;
         if ( p2m_is_paging(p2mt) )
@@ -260,7 +260,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
     *page = mfn_valid(*frame) ? mfn_to_page(*frame) : NULL;
     if ( (!(*page)) || (!get_page(*page, rd)) )
     {
-        *frame = INVALID_MFN;
+        *frame = mfn_x(INVALID_MFN);
         *page = NULL;
         rc = GNTST_bad_page;
     }
@@ -1785,7 +1785,7 @@ gnttab_transfer(
             p2m_type_t __p2mt;
             mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &__p2mt));
             if ( p2m_is_shared(__p2mt) || !p2m_is_valid(__p2mt) )
-                mfn = INVALID_MFN;
+                mfn = mfn_x(INVALID_MFN);
         }
 #else
         mfn = mfn_x(gfn_to_mfn(d, _gfn(gop.mfn)));
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index afbb1a1..7f207ec 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -55,7 +55,7 @@
 
 TYPE_SAFE(unsigned long, mfn);
 #define PRI_mfn          "05lx"
-#define INVALID_MFN      (~0UL)
+#define INVALID_MFN      _mfn(~0UL)
 
 #ifndef mfn_t
 #define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
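
For reference, TYPE_SAFE(unsigned long, mfn) expands, in debug builds, to a one-member struct plus wrap/unwrap accessors, which is what turns mfn/gfn confusion into a compile-time error; a simplified sketch (release builds may reduce mfn_t to a plain unsigned long):

    /* Simplified expansion of TYPE_SAFE(unsigned long, mfn). */
    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long n) { return (mfn_t) { n }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }
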
-- 
1.9.1



* [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (5 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 17:12   ` Andrew Cooper
  2016-07-06 10:28   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 08/17] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn Julien Grall
                   ` (10 subsequent siblings)
  17 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, sstabellini, Feng Wu, Suravee Suthikulpanit,
	Jun Nakajima, Andrew Cooper, Tim Deegan, George Dunlap,
	Julien Grall, Paul Durrant, Jan Beulich, Mukesh Rathor,
	Boris Ostrovsky

Define INVALID_GFN using the typesafe gfn_t, mirroring the INVALID_MFN
change earlier in the series.

Also take the opportunity to convert arch/x86/debug.c to the typesafe
gfn.
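
A sketch of the pieces the hunks below rely on, simplified from xen/include/xen/mm.h and the gfn counterpart of the mfn helpers above:

    /* Sketch: the gfn side of the typesafe conversion. */
    TYPE_SAFE(unsigned long, gfn);
    #define INVALID_GFN _gfn(~0UL)

    static inline bool_t gfn_eq(gfn_t x, gfn_t y)
    {
        return gfn_x(x) == gfn_x(y);
    }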

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Feng Wu <feng.wu@intel.com>

    Changes in v5:
        - Patch added
---
 xen/arch/arm/p2m.c                      |  4 ++--
 xen/arch/x86/debug.c                    | 18 +++++++++---------
 xen/arch/x86/domain.c                   |  2 +-
 xen/arch/x86/hvm/emulate.c              |  7 ++++---
 xen/arch/x86/hvm/hvm.c                  |  6 +++---
 xen/arch/x86/hvm/ioreq.c                |  8 ++++----
 xen/arch/x86/hvm/svm/nestedsvm.c        |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c              |  6 +++---
 xen/arch/x86/mm/altp2m.c                |  2 +-
 xen/arch/x86/mm/hap/guest_walk.c        | 10 +++++-----
 xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
 xen/arch/x86/mm/p2m-pod.c               |  6 +++---
 xen/arch/x86/mm/p2m.c                   | 18 +++++++++---------
 xen/arch/x86/mm/shadow/common.c         |  2 +-
 xen/arch/x86/mm/shadow/multi.c          |  2 +-
 xen/arch/x86/mm/shadow/private.h        |  2 +-
 xen/drivers/passthrough/amd/iommu_map.c |  2 +-
 xen/drivers/passthrough/vtd/iommu.c     |  4 ++--
 xen/drivers/passthrough/x86/iommu.c     |  2 +-
 xen/include/asm-x86/guest_pt.h          |  4 ++--
 xen/include/asm-x86/p2m.h               |  2 +-
 xen/include/xen/mm.h                    |  2 +-
 22 files changed, 57 insertions(+), 56 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d690602..c938dde 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -479,7 +479,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
     }
 
     /* If request to get default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         *access = memaccess[p2m->default_access];
         return 0;
@@ -1879,7 +1879,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     p2m->mem_access_enabled = true;
 
     /* If request to set default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         p2m->default_access = a;
         return 0;
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 3479f7c..1ce0e89 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -44,8 +44,7 @@ typedef unsigned char dbgbyte_t;
 
 /* Returns: mfn for the given (hvm guest) vaddr */
 static mfn_t
-dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
-                unsigned long *gfn)
+dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr, gfn_t *gfn)
 {
     mfn_t mfn;
     uint32_t pfec = PFEC_page_present;
@@ -53,14 +52,14 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
 
     DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
 
-    *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
-    if ( *gfn == INVALID_GFN )
+    *gfn = _gfn(paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec));
+    if ( gfn_eq(*gfn, INVALID_GFN) )
     {
         DBGP2("kdb:bad gfn from gva_to_gfn\n");
         return INVALID_MFN;
     }
 
-    mfn = get_gfn(dp, *gfn, &gfntype);
+    mfn = get_gfn(dp, gfn_x(*gfn), &gfntype);
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
@@ -72,7 +71,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
 
     if ( mfn_eq(mfn, INVALID_MFN) )
     {
-        put_gfn(dp, *gfn);
+        put_gfn(dp, gfn_x(*gfn));
         *gfn = INVALID_GFN;
     }
 
@@ -165,7 +164,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
         char *va;
         unsigned long addr = (unsigned long)gaddr;
         mfn_t mfn;
-        unsigned long gfn = INVALID_GFN, pagecnt;
+        gfn_t gfn = INVALID_GFN;
+        unsigned long pagecnt;
 
         pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
 
@@ -189,8 +189,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
         }
 
         unmap_domain_page(va);
-        if ( gfn != INVALID_GFN )
-            put_gfn(dp, gfn);
+        if ( !gfn_eq(gfn, INVALID_GFN) )
+            put_gfn(dp, gfn_x(gfn));
 
         addr += pagecnt;
         buf += pagecnt;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bb59247..c8c7e2d 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -783,7 +783,7 @@ int arch_domain_soft_reset(struct domain *d)
      * gfn == INVALID_GFN indicates that the shared_info page was never mapped
      * to the domain's address space and there is nothing to replace.
      */
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
         goto exit_put_page;
 
     if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 855af4d..c55ad7b 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -455,7 +455,7 @@ static int hvmemul_linear_to_phys(
             return rc;
         pfn = _paddr >> PAGE_SHIFT;
     }
-    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == INVALID_GFN )
+    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
     {
         if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
             return X86EMUL_RETRY;
@@ -472,7 +472,8 @@ static int hvmemul_linear_to_phys(
         npfn = paging_gva_to_gfn(curr, addr, &pfec);
 
         /* Is it contiguous with the preceding PFNs? If not then we're done. */
-        if ( (npfn == INVALID_GFN) || (npfn != (pfn + (reverse ? -i : i))) )
+        if ( (npfn == gfn_x(INVALID_GFN)) ||
+             (npfn != (pfn + (reverse ? -i : i))) )
         {
             if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
                 return X86EMUL_RETRY;
@@ -480,7 +481,7 @@ static int hvmemul_linear_to_phys(
             if ( done == 0 )
             {
                 ASSERT(!reverse);
-                if ( npfn != INVALID_GFN )
+                if ( npfn != gfn_x(INVALID_GFN) )
                     return X86EMUL_UNHANDLEABLE;
                 hvm_inject_page_fault(pfec, addr & PAGE_MASK);
                 return X86EMUL_EXCEPTION;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a929e3b..b9ffce2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3039,7 +3039,7 @@ static enum hvm_copy_result __hvm_copy(
         if ( flags & HVMCOPY_virt )
         {
             gfn = paging_gva_to_gfn(curr, addr, &pfec);
-            if ( gfn == INVALID_GFN )
+            if ( gfn == gfn_x(INVALID_GFN) )
             {
                 if ( pfec & PFEC_page_paged )
                     return HVMCOPY_gfn_paged_out;
@@ -3154,7 +3154,7 @@ static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
         count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
 
         gfn = paging_gva_to_gfn(curr, addr, &pfec);
-        if ( gfn == INVALID_GFN )
+        if ( gfn == gfn_x(INVALID_GFN) )
         {
             if ( pfec & PFEC_page_paged )
                 return HVMCOPY_gfn_paged_out;
@@ -5298,7 +5298,7 @@ static int do_altp2m_op(
              a.u.enable_notify.vcpu_id != curr->vcpu_id )
             rc = -EINVAL;
 
-        if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
+        if ( (!gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN)) ||
              (mfn_eq(get_gfn_query_unlocked(curr->domain,
                      a.u.enable_notify.gfn, &p2mt), INVALID_MFN)) )
             return -EINVAL;
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 7148ac4..d2245e2 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -204,7 +204,7 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
 {
     unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
 
-    if ( gmfn != INVALID_GFN )
+    if ( gmfn != gfn_x(INVALID_GFN) )
         set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
 }
 
@@ -420,7 +420,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
     if ( rc )
         return rc;
 
-    if ( bufioreq_pfn != INVALID_GFN )
+    if ( bufioreq_pfn != gfn_x(INVALID_GFN) )
         rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
 
     if ( rc )
@@ -434,8 +434,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
                                         bool_t handle_bufioreq)
 {
     struct domain *d = s->domain;
-    unsigned long ioreq_pfn = INVALID_GFN;
-    unsigned long bufioreq_pfn = INVALID_GFN;
+    unsigned long ioreq_pfn = gfn_x(INVALID_GFN);
+    unsigned long bufioreq_pfn = gfn_x(INVALID_GFN);
     int rc;
 
     if ( is_default )
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 9d2ac09..f9b38ab 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1200,7 +1200,7 @@ nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     /* Walk the guest-supplied NPT table, just as if it were a pagetable */
     gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
 
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
         return NESTEDHVM_PAGEFAULT_INJECT;
 
     *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 888a6b1..b203a04 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2057,13 +2057,13 @@ static int vmx_vcpu_emulate_vmfunc(struct cpu_user_regs *regs)
 static bool_t vmx_vcpu_emulate_ve(struct vcpu *v)
 {
     bool_t rc = 0, writable;
-    unsigned long gfn = gfn_x(vcpu_altp2m(v).veinfo_gfn);
+    gfn_t gfn = vcpu_altp2m(v).veinfo_gfn;
     ve_info_t *veinfo;
 
-    if ( gfn == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
         return 0;
 
-    veinfo = hvm_map_guest_frame_rw(gfn, 0, &writable);
+    veinfo = hvm_map_guest_frame_rw(gfn_x(gfn), 0, &writable);
     if ( !veinfo )
         return 0;
     if ( !writable || veinfo->semaphore != 0 )
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 10605c8..930bdc2 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -26,7 +26,7 @@ altp2m_vcpu_reset(struct vcpu *v)
     struct altp2mvcpu *av = &vcpu_altp2m(v);
 
     av->p2midx = INVALID_ALTP2M;
-    av->veinfo_gfn = _gfn(INVALID_GFN);
+    av->veinfo_gfn = INVALID_GFN;
 }
 
 void
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index d2716f9..1b1a15d 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -70,14 +70,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
         if ( top_page )
             put_page(top_page);
         p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
-        return INVALID_GFN;
+        return gfn_x(INVALID_GFN);
     }
     if ( p2m_is_shared(p2mt) )
     {
         pfec[0] = PFEC_page_shared;
         if ( top_page )
             put_page(top_page);
-        return INVALID_GFN;
+        return gfn_x(INVALID_GFN);
     }
     if ( !top_page )
     {
@@ -110,12 +110,12 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
             ASSERT(p2m_is_hostp2m(p2m));
             pfec[0] = PFEC_page_paged;
             p2m_mem_paging_populate(p2m->domain, gfn_x(gfn));
-            return INVALID_GFN;
+            return gfn_x(INVALID_GFN);
         }
         if ( p2m_is_shared(p2mt) )
         {
             pfec[0] = PFEC_page_shared;
-            return INVALID_GFN;
+            return gfn_x(INVALID_GFN);
         }
 
         if ( page_order )
@@ -147,7 +147,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
     if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
         pfec[0] &= ~PFEC_insn_fetch;
 
-    return INVALID_GFN;
+    return gfn_x(INVALID_GFN);
 }
 
 
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 94cf832..02b27b1 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -236,7 +236,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
     ept_walk_t gw;
     rwx_acc &= EPTE_RWX_MASK;
 
-    *l1gfn = INVALID_GFN;
+    *l1gfn = gfn_x(INVALID_GFN);
 
     rc = nept_walk_tables(v, l2ga, &gw);
     switch ( rc )
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index f384589..149f529 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1003,7 +1003,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
         unsigned int idx = (mrp->idx + i++) % ARRAY_SIZE(mrp->list);
         unsigned long gfn = mrp->list[idx];
 
-        if ( gfn != INVALID_GFN )
+        if ( gfn != gfn_x(INVALID_GFN) )
         {
             if ( gfn & POD_LAST_SUPERPAGE )
             {
@@ -1020,7 +1020,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
             else
                 p2m_pod_zero_check(p2m, &gfn, 1);
 
-            mrp->list[idx] = INVALID_GFN;
+            mrp->list[idx] = gfn_x(INVALID_GFN);
         }
 
     } while ( (p2m->pod.count == 0) && (i < ARRAY_SIZE(mrp->list)) );
@@ -1031,7 +1031,7 @@ static void pod_eager_record(struct p2m_domain *p2m,
 {
     struct pod_mrp_list *mrp = &p2m->pod.mrp;
 
-    ASSERT(gfn != INVALID_GFN);
+    ASSERT(gfn != gfn_x(INVALID_GFN));
 
     mrp->list[mrp->idx++] =
         gfn | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6f90510..90d3baa 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -76,7 +76,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     for ( i = 0; i < ARRAY_SIZE(p2m->pod.mrp.list); ++i )
-        p2m->pod.mrp.list[i] = INVALID_GFN;
+        p2m->pod.mrp.list[i] = gfn_x(INVALID_GFN);
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ret = ept_p2m_init(p2m);
@@ -1863,7 +1863,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     }
 
     /* If request to set default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         p2m->default_access = a;
         return 0;
@@ -1932,7 +1932,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
     };
 
     /* If request to get default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         *access = memaccess[p2m->default_access];
         return 0;
@@ -2113,8 +2113,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         mode = paging_get_nestedmode(v);
         l2_gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
-        if ( l2_gfn == INVALID_GFN )
-            return INVALID_GFN;
+        if ( l2_gfn == gfn_x(INVALID_GFN) )
+            return gfn_x(INVALID_GFN);
 
         /* translate l2 guest gfn into l1 guest gfn */
         rv = nestedhap_walk_L1_p2m(v, l2_gfn, &l1_gfn, &l1_page_order, &l1_p2ma,
@@ -2123,7 +2123,7 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
                                    !!(*pfec & PFEC_insn_fetch));
 
         if ( rv != NESTEDHVM_PAGEFAULT_DONE )
-            return INVALID_GFN;
+            return gfn_x(INVALID_GFN);
 
         /*
          * Sanity check that l1_gfn can be used properly as a 4K mapping, even
@@ -2415,7 +2415,7 @@ static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
     struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
     struct ept_data *ept;
 
-    p2m->min_remapped_gfn = INVALID_GFN;
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
     p2m->max_remapped_gfn = 0;
     ept = &p2m->ept;
     ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
@@ -2551,7 +2551,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
 
     mfn = ap2m->get_entry(ap2m, gfn_x(old_gfn), &t, &a, 0, NULL, NULL);
 
-    if ( gfn_x(new_gfn) == INVALID_GFN )
+    if ( gfn_eq(new_gfn, INVALID_GFN) )
     {
         if ( mfn_valid(mfn) )
             p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
@@ -2613,7 +2613,7 @@ static void p2m_reset_altp2m(struct p2m_domain *p2m)
     /* Uninit and reinit ept to force TLB shootdown */
     ept_p2m_uninit(p2m);
     ept_p2m_init(p2m);
-    p2m->min_remapped_gfn = INVALID_GFN;
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
     p2m->max_remapped_gfn = 0;
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index c51b370..e48ba7b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1708,7 +1708,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
 
     /* Translate the VA to a GFN. */
     gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
     {
         if ( is_hvm_vcpu(v) )
             hvm_inject_page_fault(pfec, vaddr);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index f892e2f..e54c8b7 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3660,7 +3660,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
          */
         if ( is_hvm_vcpu(v) && !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
             pfec[0] &= ~PFEC_insn_fetch;
-        return INVALID_GFN;
+        return gfn_x(INVALID_GFN);
     }
     gfn = guest_walk_to_gfn(&gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index c424ad6..824796f 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -796,7 +796,7 @@ static inline unsigned long vtlb_lookup(struct vcpu *v,
                                         unsigned long va, uint32_t pfec)
 {
     unsigned long page_number = va >> PAGE_SHIFT;
-    unsigned long frame_number = INVALID_GFN;
+    unsigned long frame_number = gfn_x(INVALID_GFN);
     int i = vtlb_hash(page_number);
 
     spin_lock(&v->arch.paging.vtlb_lock);
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index c758459..b8c0a48 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct domain_iommu *hd = dom_iommu(d);
 
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
         return -EADDRNOTAVAIL;
     ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index f010612..c322b9f 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -611,7 +611,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
         if ( iommu_domid == -1 )
             continue;
 
-        if ( page_count != 1 || gfn == INVALID_GFN )
+        if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
             rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
                                        0, flush_dev_iotlb);
         else
@@ -640,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
 
 static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
+    return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
 }
 
 /* clear one page's page table */
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cd435d7..69cd6c5 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
 
-            if ( gfn != INVALID_GFN )
+            if ( gfn != gfn_x(INVALID_GFN) )
             {
                 ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
                 BUG_ON(SHARED_M2P(gfn));
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index a8d980c..79ed4ff 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -32,7 +32,7 @@
 #error GUEST_PAGING_LEVELS not defined
 #endif
 
-#define VALID_GFN(m) (m != INVALID_GFN)
+#define VALID_GFN(m) (m != gfn_x(INVALID_GFN))
 
 static inline int
 valid_gfn(gfn_t m)
@@ -251,7 +251,7 @@ static inline gfn_t
 guest_walk_to_gfn(walk_t *gw)
 {
     if ( !(guest_l1e_get_flags(gw->l1e) & _PAGE_PRESENT) )
-        return _gfn(INVALID_GFN);
+        return INVALID_GFN;
     return guest_l1e_get_gfn(gw->l1e);
 }
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 4ab3574..194020e 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -324,7 +324,7 @@ struct p2m_domain {
 #define NR_POD_MRP_ENTRIES 32
 
 /* Encode ORDER_2M superpage in top bit of GFN */
-#define POD_LAST_SUPERPAGE (INVALID_GFN & ~(INVALID_GFN >> 1))
+#define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))
 
             unsigned long list[NR_POD_MRP_ENTRIES];
             unsigned int idx;
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 7f207ec..58bc0b8 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -84,7 +84,7 @@ static inline bool_t mfn_eq(mfn_t x, mfn_t y)
 
 TYPE_SAFE(unsigned long, gfn);
 #define PRI_gfn          "05lx"
-#define INVALID_GFN      (~0UL)
+#define INVALID_GFN      _gfn(~0UL)
 
 #ifndef gfn_t
 #define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
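
The conversion pattern used throughout this patch is therefore (a sketch
using the idioms visible in the hunks above):

    gfn_t gfn;          /* typesafe frame number */
    unsigned long raw;  /* raw frame number, e.g. from a legacy interface */

    if ( gfn_eq(gfn, INVALID_GFN) )      /* typesafe vs. typesafe */
        /* ... */;
    if ( raw == gfn_x(INVALID_GFN) )     /* raw vs. unwrapped constant */
        /* ... */;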
-- 
1.9.1



* [PATCH v5 08/17] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (6 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 10:32   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 09/17] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn Julien Grall
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The parameter name and the type in the prototype of p2m_lookup disagree
on how the function should be used: the name suggests a frame number
whilst the type expects an address.

Thankfully, everyone is passing an address today. However, most of
the callers have to convert a guest frame to an address first. Modify
the interface to take a guest frame as parameter and return
a machine frame.

Whilst modifying the interface, use the typesafe gfn and mfn for clarity
and to catch possible misuse.
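
Concretely, a typical caller changes along these lines (a sketch
mirroring the get_page_from_gfn hunk below):

    /* Before: address in, address out. */
    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
    unsigned long mfn = maddr >> PAGE_SHIFT;

    /* After: frame in, frame out. */
    unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));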

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Use INVALID_MFN_T when possible
---
 xen/arch/arm/p2m.c        | 43 +++++++++++++++++++++++--------------------
 xen/arch/arm/traps.c      | 21 +++++++++++----------
 xen/include/asm-arm/p2m.h |  7 +++----
 3 files changed, 37 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c938dde..54a363a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -140,14 +140,15 @@ void flush_tlb_domain(struct domain *d)
 }
 
 /*
- * Lookup the MFN corresponding to a domain's PFN.
+ * Lookup the MFN corresponding to a domain's GFN.
  *
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a software walk.
  */
-static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
+    const paddr_t paddr = pfn_to_paddr(gfn_x(gfn));
     const unsigned int offsets[4] = {
         zeroeth_table_offset(paddr),
         first_table_offset(paddr),
@@ -158,7 +159,7 @@ static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
         ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK
     };
     lpae_t pte, *map;
-    paddr_t maddr = INVALID_PADDR;
+    mfn_t mfn = INVALID_MFN;
     paddr_t mask = 0;
     p2m_type_t _t;
     unsigned int level, root_table;
@@ -216,21 +217,22 @@ static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
     {
         ASSERT(mask);
         ASSERT(pte.p2m.type != p2m_invalid);
-        maddr = (pte.bits & PADDR_MASK & mask) | (paddr & ~mask);
+        mfn = _mfn(paddr_to_pfn((pte.bits & PADDR_MASK & mask) |
+                                (paddr & ~mask)));
         *t = pte.p2m.type;
     }
 
 err:
-    return maddr;
+    return mfn;
 }
 
-paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
-    paddr_t ret;
+    mfn_t ret;
     struct p2m_domain *p2m = &d->arch.p2m;
 
     spin_lock(&p2m->lock);
-    ret = __p2m_lookup(d, paddr, t);
+    ret = __p2m_lookup(d, gfn, t);
     spin_unlock(&p2m->lock);
 
     return ret;
@@ -493,8 +495,9 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
          * No setting was found in the Radix tree. Check if the
          * entry exists in the page-tables.
          */
-        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
-        if ( INVALID_PADDR == maddr )
+        mfn_t mfn = __p2m_lookup(d, gfn, NULL);
+
+        if ( mfn_eq(mfn, INVALID_MFN) )
             return -ESRCH;
 
         /* If entry exists then its rwx. */
@@ -1483,8 +1486,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
 {
-    paddr_t p = p2m_lookup(d, pfn_to_paddr(gfn_x(gfn)), NULL);
-    return _mfn(p >> PAGE_SHIFT);
+    return p2m_lookup(d, gfn, NULL);
 }
 
 /*
@@ -1498,8 +1500,8 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
 {
     long rc;
     paddr_t ipa;
-    unsigned long maddr;
-    unsigned long mfn;
+    gfn_t gfn;
+    mfn_t mfn;
     xenmem_access_t xma;
     p2m_type_t t;
     struct page_info *page = NULL;
@@ -1508,11 +1510,13 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
     if ( rc < 0 )
         goto err;
 
+    gfn = _gfn(paddr_to_pfn(ipa));
+
     /*
      * We do this first as this is faster in the default case when no
      * permission is set on the page.
      */
-    rc = __p2m_get_mem_access(current->domain, _gfn(paddr_to_pfn(ipa)), &xma);
+    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
     if ( rc < 0 )
         goto err;
 
@@ -1561,12 +1565,11 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We had a mem_access permission limiting the access, but the page type
      * could also be limiting, so we need to check that as well.
      */
-    maddr = __p2m_lookup(current->domain, ipa, &t);
-    if ( maddr == INVALID_PADDR )
+    mfn = __p2m_lookup(current->domain, gfn, &t);
+    if ( mfn_eq(mfn, INVALID_MFN) )
         goto err;
 
-    mfn = maddr >> PAGE_SHIFT;
-    if ( !mfn_valid(mfn) )
+    if ( !mfn_valid(mfn_x(mfn)) )
         goto err;
 
     /*
@@ -1575,7 +1578,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
     if ( t != p2m_ram_rw )
         goto err;
 
-    page = mfn_to_page(mfn);
+    page = mfn_to_page(mfn_x(mfn));
 
     if ( unlikely(!get_page(page, current->domain)) )
         page = NULL;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 44926ca..b653f61 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2319,14 +2319,16 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
 {
     register_t ttbcr = READ_SYSREG(TCR_EL1);
     uint64_t ttbr0 = READ_SYSREG64(TTBR0_EL1);
-    paddr_t paddr;
     uint32_t offset;
     uint32_t *first = NULL, *second = NULL;
+    mfn_t mfn;
+
+    mfn = p2m_lookup(d, _gfn(paddr_to_pfn(ttbr0)), NULL);
 
     printk("dom%d VA 0x%08"PRIvaddr"\n", d->domain_id, addr);
     printk("    TTBCR: 0x%08"PRIregister"\n", ttbcr);
     printk("    TTBR0: 0x%016"PRIx64" = 0x%"PRIpaddr"\n",
-           ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL));
+           ttbr0, pfn_to_paddr(mfn_x(mfn)));
 
     if ( ttbcr & TTBCR_EAE )
     {
@@ -2339,32 +2341,31 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
         return;
     }
 
-    paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL);
-    if ( paddr == INVALID_PADDR )
+    if ( mfn_eq(mfn, INVALID_MFN) )
     {
         printk("Failed TTBR0 maddr lookup\n");
         goto done;
     }
-    first = map_domain_page(_mfn(paddr_to_pfn(paddr)));
+    first = map_domain_page(mfn);
 
     offset = addr >> (12+10);
     printk("1ST[0x%"PRIx32"] (0x%"PRIpaddr") = 0x%08"PRIx32"\n",
-           offset, paddr, first[offset]);
+           offset, pfn_to_paddr(mfn_x(mfn)), first[offset]);
     if ( !(first[offset] & 0x1) ||
          !(first[offset] & 0x2) )
         goto done;
 
-    paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL);
+    mfn = p2m_lookup(d, _gfn(paddr_to_pfn(first[offset])), NULL);
 
-    if ( paddr == INVALID_PADDR )
+    if ( mfn_eq(mfn, INVALID_MFN) )
     {
         printk("Failed L1 entry maddr lookup\n");
         goto done;
     }
-    second = map_domain_page(_mfn(paddr_to_pfn(paddr)));
+    second = map_domain_page(mfn);
     offset = (addr >> 12) & 0x3FF;
     printk("2ND[0x%"PRIx32"] (0x%"PRIpaddr") = 0x%08"PRIx32"\n",
-           offset, paddr, second[offset]);
+           offset, pfn_to_paddr(mfn_x(mfn)), second[offset]);
 
 done:
     if (second) unmap_domain_page(second);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 0d1e61e..f204482 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -135,8 +135,8 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
-/* Look up the MFN corresponding to a domain's PFN. */
-paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
+/* Look up the MFN corresponding to a domain's GFN. */
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
@@ -201,8 +201,7 @@ static inline struct page_info *get_page_from_gfn(
 {
     struct page_info *page;
     p2m_type_t p2mt;
-    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
-    unsigned long mfn = maddr >> PAGE_SHIFT;
+    unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
 
     if (t)
         *t = p2mt;
-- 
1.9.1



* [PATCH v5 09/17] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (7 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 08/17] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 10:35   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access Julien Grall
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

p2m_cache_flush expects GFNs as parameters, not MFNs. Rename
the variables to *gfn* and use the typesafe gfn_t to avoid possible misuse.

Also, modify the prototype of the function to describe the range
using the start and the number of GFNs. This avoids having to wonder
whether the end is inclusive or exclusive.

Note that the type of the parameter 'start' is changed from xen_pfn_t
(aka uint64_t) to gfn_t (aka unsigned long). This means that a truncation
will occur on ARM32. This is fine because a valid GFN will always be
encoded on at most 28 bits (40-bit addresses).
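
For illustration (hypothetical frame numbers), flushing four pages
starting at GFN 0x80000 now reads:

    p2m_cache_flush(d, _gfn(0x80000), 4); /* covers GFNs 0x80000-0x80003 */

whereas the old interface took a start and an end frame, leaving the
inclusiveness of the end open to question.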

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - This patch was originally called "xen/arm: p2m_cache_flush:
        Use the correct terminology and typesafe gfn"
        - Describe the range using the start and the number of GFNs.

    Changes in v3:
        - Add a word in the commit message about the truncation.

    Changes in v2:
        - Drop _gfn suffix
---
 xen/arch/arm/domctl.c     |  2 +-
 xen/arch/arm/p2m.c        | 11 ++++++-----
 xen/include/asm-arm/p2m.h |  2 +-
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 30453d8..f61f98a 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -30,7 +30,7 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( e < s )
             return -EINVAL;
 
-        return p2m_cache_flush(d, s, e);
+        return p2m_cache_flush(d, _gfn(s), domctl->u.cacheflush.nr_pfns);
     }
     case XEN_DOMCTL_bind_pt_irq:
     {
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 54a363a..1cfb62b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1469,16 +1469,17 @@ int relinquish_p2m_mapping(struct domain *d)
                               d->arch.p2m.default_access);
 }
 
-int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
+    gfn_t end = gfn_add(start, nr);
 
-    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
-    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+    start = gfn_max(start, _gfn(p2m->lowest_mapped_gfn));
+    end = gfn_min(end, _gfn(p2m->max_mapped_gfn));
 
     return apply_p2m_changes(d, CACHEFLUSH,
-                             pfn_to_paddr(start_mfn),
-                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(gfn_x(start)),
+                             pfn_to_paddr(gfn_x(end)),
                              pfn_to_paddr(mfn_x(INVALID_MFN)),
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index f204482..8a96e68 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -139,7 +139,7 @@ void p2m_dump_info(struct domain *d);
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
 /* Clean & invalidate caches corresponding to a region of guest address space */
-int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
-- 
1.9.1



* [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (8 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 09/17] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 10:43   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 11/17] xen/arm: dom0_build: Remove dead code in allocate_memory Julien Grall
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini, Shannon Zhao

The parameter 'access' is used by memaccess to temporarily restrict
permissions. This parameter should not be used for other purposes (such
as permanently restricting permissions).

The type p2m_mmio_direct will map the region read-write and
non-executable. Note that this is already the current behavior with the
combination of the type and the access, so there is no functional
change.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
Cc: Shannon Zhao <shannon.zhao@linaro.org>

    This patch is a candidate for Xen 4.7. Currently this function is
    only used to map ACPI regions.

    I am wondering whether we should introduce a new p2m type for it and
    map this region RO (I am not sure why a guest would want to modify
    this region).

    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1cfb62b..fcc4513 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1231,7 +1231,7 @@ int map_regions_rw_cache(struct domain *d,
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
                              MATTR_MEM, 0, p2m_mmio_direct,
-                             p2m_access_rw);
+                             d->arch.p2m.default_access);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1244,7 +1244,7 @@ int unmap_regions_rw_cache(struct domain *d,
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
                              MATTR_MEM, 0, p2m_invalid,
-                             p2m_access_rw);
+                             d->arch.p2m.default_access);
 }
 
 int map_mmio_regions(struct domain *d,
-- 
1.9.1



* [PATCH v5 11/17] xen/arm: dom0_build: Remove dead code in allocate_memory
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (9 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 10:46   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 12/17] xen/arm: p2m: Remove unused operation ALLOCATE Julien Grall
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The code to allocate memory when dom0 does not use direct mapping
relies on the presence of memory nodes in the DT.

However, these nodes are not present when booting using UEFI or when
using ACPI.

Rather than fixing the code, remove it, because dom0 is always direct
memory mapped and therefore the code is never tested. Also add a
check to catch the case where direct memory mapping is disabled but
the associated RAM bank allocation is not implemented.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/domain_build.c | 58 ++++++---------------------------------------
 1 file changed, 7 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 49185f0..923f48a 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -235,7 +235,7 @@ fail:
  * (as described above) we allow higher allocations and continue until
  * that runs out (or we have allocated sufficient dom0 memory).
  */
-static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
+static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
 {
     const unsigned int min_low_order =
         get_order_from_bytes(min_t(paddr_t, dom0_mem, MB(128)));
@@ -247,6 +247,12 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
     bool_t lowmem = is_32bit_domain(d);
     unsigned int bits;
 
+    /*
+     * TODO: Implement memory bank allocation when DOM0 is not direct
+     * mapped
+     */
+    BUG_ON(!dom0_11_mapping);
+
     printk("Allocating 1:1 mappings totalling %ldMB for dom0:\n",
            /* Don't want format this as PRIpaddr (16 digit hex) */
            (unsigned long)(kinfo->unassigned_mem >> 20));
@@ -343,56 +349,6 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
     }
 }
 
-static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
-{
-
-    struct dt_device_node *memory = NULL;
-    const void *reg;
-    u32 reg_len, reg_size;
-    unsigned int bank = 0;
-
-    if ( dom0_11_mapping )
-        return allocate_memory_11(d, kinfo);
-
-    while ( (memory = dt_find_node_by_type(memory, "memory")) )
-    {
-        int l;
-
-        dt_dprintk("memory node\n");
-
-        reg_size = dt_cells_to_size(dt_n_addr_cells(memory) + dt_n_size_cells(memory));
-
-        reg = dt_get_property(memory, "reg", &reg_len);
-        if ( reg == NULL )
-            panic("Memory node has no reg property");
-
-        for ( l = 0;
-              kinfo->unassigned_mem > 0 && l + reg_size <= reg_len
-                  && kinfo->mem.nr_banks < NR_MEM_BANKS;
-              l += reg_size )
-        {
-            paddr_t start, size;
-
-            if ( dt_device_get_address(memory, bank, &start, &size) )
-                panic("Unable to retrieve the bank %u for %s",
-                      bank, dt_node_full_name(memory));
-
-            if ( size > kinfo->unassigned_mem )
-                size = kinfo->unassigned_mem;
-
-            printk("Populate P2M %#"PRIx64"->%#"PRIx64"\n",
-                   start, start + size);
-            if ( p2m_populate_ram(d, start, start + size) < 0 )
-                panic("Failed to populate P2M");
-            kinfo->mem.bank[kinfo->mem.nr_banks].start = start;
-            kinfo->mem.bank[kinfo->mem.nr_banks].size = size;
-            kinfo->mem.nr_banks++;
-
-            kinfo->unassigned_mem -= size;
-        }
-    }
-}
-
 static int write_properties(struct domain *d, struct kernel_info *kinfo,
                             const struct dt_device_node *node)
 {
-- 
1.9.1



* [PATCH v5 12/17] xen/arm: p2m: Remove unused operation ALLOCATE
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (10 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 11/17] xen/arm: dom0_build: Remove dead code in allocate_memory Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 10:49   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 13/17] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region Julien Grall
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The operation ALLOCATE is unused. If we ever need it, it could be
reimplemented with INSERT.
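
Should the need ever arise, a reimplementation on top of INSERT could
look roughly like this (a hypothetical sketch only, not part of this
series; context and error handling elided):

    struct page_info *page = alloc_domheap_pages(d, 0, 0);

    if ( !page )
        return -ENOMEM;

    /* Insert the freshly allocated page at the wanted guest frame. */
    rc = guest_physmap_add_page(d, gfn, _mfn(page_to_mfn(page)), 0);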

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c        | 67 ++---------------------------------------------
 xen/include/asm-arm/p2m.h |  3 ---
 2 files changed, 2 insertions(+), 68 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index fcc4513..f11094e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -547,7 +547,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
 
 enum p2m_operation {
     INSERT,
-    ALLOCATE,
     REMOVE,
     RELINQUISH,
     CACHEFLUSH,
@@ -667,7 +666,6 @@ static int apply_one_level(struct domain *d,
 {
     const paddr_t level_size = level_sizes[level];
     const paddr_t level_mask = level_masks[level];
-    const paddr_t level_shift = level_shifts[level];
 
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte;
@@ -678,58 +676,6 @@ static int apply_one_level(struct domain *d,
 
     switch ( op )
     {
-    case ALLOCATE:
-        ASSERT(level < 3 || !p2m_valid(orig_pte));
-        ASSERT(*maddr == 0);
-
-        if ( p2m_valid(orig_pte) )
-            return P2M_ONE_DESCEND;
-
-        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
-           /* We only create superpages when mem_access is not in use. */
-             (level == 3 || (level < 3 && !p2m->mem_access_enabled)) )
-        {
-            struct page_info *page;
-
-            page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
-            if ( page )
-            {
-                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
-                if ( rc < 0 )
-                {
-                    free_domheap_page(page);
-                    return rc;
-                }
-
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
-                if ( level < 3 )
-                    pte.p2m.table = 0;
-                p2m_write_pte(entry, pte, flush_cache);
-                p2m->stats.mappings[level]++;
-
-                *addr += level_size;
-
-                return P2M_ONE_PROGRESS;
-            }
-            else if ( level == 3 )
-                return -ENOMEM;
-        }
-
-        /* L3 is always suitably aligned for mapping (handled, above) */
-        BUG_ON(level == 3);
-
-        /*
-         * If we get here then we failed to allocate a sufficiently
-         * large contiguous region for this level (which can't be
-         * L3) or mem_access is in use. Create a page table and
-         * continue to descend so we try smaller allocations.
-         */
-        rc = p2m_create_table(d, entry, 0, flush_cache);
-        if ( rc < 0 )
-            return rc;
-
-        return P2M_ONE_DESCEND;
-
     case INSERT:
         if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
            /*
@@ -1169,7 +1115,7 @@ static int apply_p2m_changes(struct domain *d,
         }
     }
 
-    if ( op == ALLOCATE || op == INSERT )
+    if ( op == INSERT )
     {
         p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
         p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
@@ -1197,7 +1143,7 @@ out:
 
     spin_unlock(&p2m->lock);
 
-    if ( rc < 0 && ( op == INSERT || op == ALLOCATE ) &&
+    if ( rc < 0 && ( op == INSERT ) &&
          addr != start_gpaddr )
     {
         BUG_ON(addr == end_gpaddr);
@@ -1212,15 +1158,6 @@ out:
     return rc;
 }
 
-int p2m_populate_ram(struct domain *d,
-                     paddr_t start,
-                     paddr_t end)
-{
-    return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, 0, p2m_ram_rw,
-                             d->arch.p2m.default_access);
-}
-
 int map_regions_rw_cache(struct domain *d,
                          unsigned long start_gfn,
                          unsigned long nr,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8a96e68..4752161 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -141,9 +141,6 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
-/* Setup p2m RAM mapping for domain d from start-end. */
-int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
-
 int map_regions_rw_cache(struct domain *d,
                          unsigned long start_gfn,
                          unsigned long nr_mfns,
-- 
1.9.1



* [PATCH v5 13/17] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region...
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (11 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 12/17] xen/arm: p2m: Remove unused operation ALLOCATE Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 17:21   ` Andrew Cooper
  2016-06-28 16:17 ` [PATCH v5 14/17] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache Julien Grall
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

to avoid mixing machine frames with guest frames. Also drop the start_ prefix.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/mm.c         |  2 +-
 xen/arch/arm/p2m.c        | 10 +++++-----
 xen/include/asm-arm/p2m.h |  4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0e408f8..b5fc034 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1145,7 +1145,7 @@ int xenmem_add_to_physmap_one(
         if ( extra.res0 )
             return -EOPNOTSUPP;
 
-        rc = map_dev_mmio_region(d, gfn_x(gfn), 1, idx);
+        rc = map_dev_mmio_region(d, gfn, 1, _mfn(idx));
         return rc;
 
     default:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f11094e..5ffc3df 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1211,20 +1211,20 @@ int unmap_mmio_regions(struct domain *d,
 }
 
 int map_dev_mmio_region(struct domain *d,
-                        unsigned long start_gfn,
+                        gfn_t gfn,
                         unsigned long nr,
-                        unsigned long mfn)
+                        mfn_t mfn)
 {
     int res;
 
-    if ( !(nr && iomem_access_permitted(d, mfn, mfn + nr - 1)) )
+    if ( !(nr && iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + nr - 1)) )
         return 0;
 
-    res = map_mmio_regions(d, _gfn(start_gfn), nr, _mfn(mfn));
+    res = map_mmio_regions(d, gfn, nr, mfn);
     if ( res < 0 )
     {
         printk(XENLOG_G_ERR "Unable to map [%#lx - %#lx] in Dom%d\n",
-               mfn, mfn + nr - 1, d->domain_id);
+               mfn_x(mfn), mfn_x(mfn) + nr - 1, d->domain_id);
         return res;
     }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 4752161..8d29eda 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -152,9 +152,9 @@ int unmap_regions_rw_cache(struct domain *d,
                            unsigned long mfn);
 
 int map_dev_mmio_region(struct domain *d,
-                        unsigned long start_gfn,
+                        gfn_t gfn,
                         unsigned long nr,
-                        unsigned long mfn);
+                        mfn_t mfn);
 
 int guest_physmap_add_entry(struct domain *d,
                             gfn_t gfn,
-- 
1.9.1



* [PATCH v5 14/17] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache ...
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (12 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 13/17] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 10:54   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

to avoid mixing machine frames with guest frames. Also rename the
parameters of the function and drop the pointless PAGE_MASK in the caller.
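
The PAGE_MASK was pointless because paddr_to_pfn() already discards the
offset bits, i.e. for any address:

    paddr_to_pfn(addr & PAGE_MASK) == paddr_to_pfn(addr)

as both reduce to addr >> PAGE_SHIFT.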

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/domain_build.c |  8 ++++----
 xen/arch/arm/p2m.c          | 20 ++++++++++----------
 xen/include/asm-arm/p2m.h   | 12 ++++++------
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 923f48a..60db9e4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1522,9 +1522,9 @@ static void acpi_map_other_tables(struct domain *d)
         addr = acpi_gbl_root_table_list.tables[i].address;
         size = acpi_gbl_root_table_list.tables[i].length;
         res = map_regions_rw_cache(d,
-                                   paddr_to_pfn(addr & PAGE_MASK),
+                                   _gfn(paddr_to_pfn(addr)),
                                    DIV_ROUND_UP(size, PAGE_SIZE),
-                                   paddr_to_pfn(addr & PAGE_MASK));
+                                   _mfn(paddr_to_pfn(addr)));
         if ( res )
         {
              panic(XENLOG_ERR "Unable to map ACPI region 0x%"PRIx64
@@ -1878,9 +1878,9 @@ static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
 
     /* Map the EFI and ACPI tables to Dom0 */
     rc = map_regions_rw_cache(d,
-                              paddr_to_pfn(d->arch.efi_acpi_gpa),
+                              _gfn(paddr_to_pfn(d->arch.efi_acpi_gpa)),
                               PFN_UP(d->arch.efi_acpi_len),
-                              paddr_to_pfn(virt_to_maddr(d->arch.efi_acpi_table)));
+                              _mfn(paddr_to_pfn(virt_to_maddr(d->arch.efi_acpi_table))));
     if ( rc != 0 )
     {
         printk(XENLOG_ERR "Unable to map EFI/ACPI table 0x%"PRIx64
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5ffc3df..0fdd11f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1159,27 +1159,27 @@ out:
 }
 
 int map_regions_rw_cache(struct domain *d,
-                         unsigned long start_gfn,
+                         gfn_t gfn,
                          unsigned long nr,
-                         unsigned long mfn)
+                         mfn_t mfn)
 {
     return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(gfn)),
+                             pfn_to_paddr(gfn_x(gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_MEM, 0, p2m_mmio_direct,
                              d->arch.p2m.default_access);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
-                           unsigned long start_gfn,
+                           gfn_t gfn,
                            unsigned long nr,
-                           unsigned long mfn)
+                           mfn_t mfn)
 {
     return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(gfn)),
+                             pfn_to_paddr(gfn_x(gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8d29eda..6e258b9 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -142,14 +142,14 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
 int map_regions_rw_cache(struct domain *d,
-                         unsigned long start_gfn,
-                         unsigned long nr_mfns,
-                         unsigned long mfn);
+                         gfn_t gfn,
+                         unsigned long nr,
+                         mfn_t mfn);
 
 int unmap_regions_rw_cache(struct domain *d,
-                           unsigned long start_gfn,
-                           unsigned long nr_mfns,
-                           unsigned long mfn);
+                           gfn_t gfn,
+                           unsigned long nr,
+                           mfn_t mfn);
 
 int map_dev_mmio_region(struct domain *d,
                         gfn_t gfn,
-- 
1.9.1



* [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (13 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 14/17] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-06-28 17:23   ` Andrew Cooper
  2016-07-06 10:59   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 16/17] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn Julien Grall
                   ` (2 subsequent siblings)
  17 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

More than half of the arguments of INSERT and REMOVE are the same for
each caller. Simplify the callers of apply_p2m_changes by adding new
helpers which fill the common arguments with default values.
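
With these helpers, a wrapper such as map_mmio_regions() reduces to a
single call, as the hunks below show:

    return p2m_insert_mapping(d, start_gfn, nr, mfn,
                              MATTR_DEV, p2m_mmio_direct);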

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v5:
        - Add missing Signed-off-by

    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c | 70 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0fdd11f..a5b584b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1158,17 +1158,40 @@ out:
     return rc;
 }
 
+static inline int p2m_insert_mapping(struct domain *d,
+                                     gfn_t start_gfn,
+                                     unsigned long nr,
+                                     mfn_t mfn,
+                                     int mattr, p2m_type_t t)
+{
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
+                             mattr, 0, t, d->arch.p2m.default_access);
+}
+
+static inline int p2m_remove_mapping(struct domain *d,
+                                     gfn_t start_gfn,
+                                     unsigned long nr,
+                                     mfn_t mfn)
+{
+    return apply_p2m_changes(d, REMOVE,
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
+                             /* arguments below not used when removing mapping */
+                             MATTR_MEM, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
+}
+
 int map_regions_rw_cache(struct domain *d,
                          gfn_t gfn,
                          unsigned long nr,
                          mfn_t mfn)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(gfn)),
-                             pfn_to_paddr(gfn_x(gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_MEM, 0, p2m_mmio_direct,
-                             d->arch.p2m.default_access);
+    return p2m_insert_mapping(d, gfn, nr, mfn,
+                              MATTR_MEM, p2m_mmio_direct);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1176,12 +1199,7 @@ int unmap_regions_rw_cache(struct domain *d,
                            unsigned long nr,
                            mfn_t mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(gfn_x(gfn)),
-                             pfn_to_paddr(gfn_x(gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_MEM, 0, p2m_invalid,
-                             d->arch.p2m.default_access);
+    return p2m_remove_mapping(d, gfn, nr, mfn);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -1189,12 +1207,8 @@ int map_mmio_regions(struct domain *d,
                      unsigned long nr,
                      mfn_t mfn)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_DEV, 0, p2m_mmio_direct,
-                             d->arch.p2m.default_access);
+    return p2m_insert_mapping(d, start_gfn, nr, mfn,
+                              MATTR_DEV, p2m_mmio_direct);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -1202,12 +1216,7 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        mfn_t mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_DEV, 0, p2m_invalid,
-                             d->arch.p2m.default_access);
+    return p2m_remove_mapping(d, start_gfn, nr, mfn);
 }
 
 int map_dev_mmio_region(struct domain *d,
@@ -1237,22 +1246,15 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(gfn)),
-                             pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
-                             pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, t,
-                             d->arch.p2m.default_access);
+    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
+                              MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                gfn_t gfn,
                                mfn_t mfn, unsigned int page_order)
 {
-    apply_p2m_changes(d, REMOVE,
-                      pfn_to_paddr(gfn_x(gfn)),
-                      pfn_to_paddr(gfn_x(gfn) + (1<<page_order)),
-                      pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
-                      d->arch.p2m.default_access);
+    p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
 int p2m_alloc_table(struct domain *d)
-- 
1.9.1



* [PATCH v5 16/17] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (14 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 11:07   ` Stefano Stabellini
  2016-06-28 16:17 ` [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe Julien Grall
  2016-06-28 18:32 ` [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Andrew Cooper
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/mm.c         |  2 +-
 xen/arch/arm/p2m.c        | 18 +++++++++---------
 xen/include/asm-arm/p2m.h |  4 ++--
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index b5fc034..4e256c2 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1004,7 +1004,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return d->arch.p2m.max_mapped_gfn;
+    return gfn_x(d->arch.p2m.max_mapped_gfn);
 }
 
 void share_xen_page_with_guest(struct page_info *page,
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a5b584b..9fdc417 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -976,7 +976,7 @@ static int apply_p2m_changes(struct domain *d,
                  * This is set in preempt_count_limit.
                  *
                  */
-                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = _gfn(addr >> PAGE_SHIFT);
                 rc = -ERESTART;
                 goto out;
 
@@ -1117,8 +1117,8 @@ static int apply_p2m_changes(struct domain *d,
 
     if ( op == INSERT )
     {
-        p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
-        p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn, _gfn(egfn));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, _gfn(sgfn));
     }
 
     rc = 0;
@@ -1383,8 +1383,8 @@ int p2m_init(struct domain *d)
 
     p2m->root = NULL;
 
-    p2m->max_mapped_gfn = 0;
-    p2m->lowest_mapped_gfn = ULONG_MAX;
+    p2m->max_mapped_gfn = _gfn(0);
+    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
     p2m->default_access = p2m_access_rwx;
     p2m->mem_access_enabled = false;
@@ -1401,8 +1401,8 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->lowest_mapped_gfn),
-                              pfn_to_paddr(p2m->max_mapped_gfn),
+                              pfn_to_paddr(gfn_x(p2m->lowest_mapped_gfn)),
+                              pfn_to_paddr(gfn_x(p2m->max_mapped_gfn)),
                               pfn_to_paddr(mfn_x(INVALID_MFN)),
                               MATTR_MEM, 0, p2m_invalid,
                               d->arch.p2m.default_access);
@@ -1413,8 +1413,8 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
     struct p2m_domain *p2m = &d->arch.p2m;
     gfn_t end = gfn_add(start, nr);
 
-    start = gfn_max(start, _gfn(p2m->lowest_mapped_gfn));
-    end = gfn_min(end, _gfn(p2m->max_mapped_gfn));
+    start = gfn_max(start, p2m->lowest_mapped_gfn);
+    end = gfn_min(end, p2m->max_mapped_gfn);
 
     return apply_p2m_changes(d, CACHEFLUSH,
                              pfn_to_paddr(gfn_x(start)),
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 6e258b9..34096bc 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -34,13 +34,13 @@ struct p2m_domain {
     /* Highest guest frame that's ever been mapped in the p2m
      * Only takes into account ram and foreign mapping
      */
-    unsigned long max_mapped_gfn;
+    gfn_t max_mapped_gfn;
 
     /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
      * preemptible manner this is update to track recall where to
      * resume the search. Apart from during teardown this can only
      * decrease. */
-    unsigned long lowest_mapped_gfn;
+    gfn_t lowest_mapped_gfn;
 
     /* Gather some statistics for information purposes only */
     struct {
-- 
1.9.1



* [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (15 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 16/17] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn Julien Grall
@ 2016-06-28 16:17 ` Julien Grall
  2016-07-06 11:06   ` Stefano Stabellini
  2016-06-28 18:32 ` [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Andrew Cooper
  17 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-06-28 16:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Most of the callers of apply_p2m_changes have a GFN, an MFN and the
number of frames to change in hand.

Rather than asking each caller to convert the frames to addresses,
rework the interface to pass the GFN, MFN and the number of frames
directly.

Note that it would be possible to do more clean-up in apply_p2m_changes,
but this will be done in a follow-up series.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c | 62 ++++++++++++++++++++++++------------------------------
 1 file changed, 28 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9fdc417..bb33a72 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -906,25 +906,26 @@ static void update_reference_mapping(struct page_info *page,
 
 static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
-                     paddr_t start_gpaddr,
-                     paddr_t end_gpaddr,
-                     paddr_t maddr,
+                     gfn_t sgfn,
+                     unsigned long nr,
+                     mfn_t smfn,
                      int mattr,
                      uint32_t mask,
                      p2m_type_t t,
                      p2m_access_t a)
 {
+    paddr_t start_gpaddr = pfn_to_paddr(gfn_x(sgfn));
+    paddr_t end_gpaddr = pfn_to_paddr(gfn_x(sgfn) + nr);
+    paddr_t maddr = pfn_to_paddr(mfn_x(smfn));
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
     struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
-    paddr_t addr, orig_maddr = maddr;
+    paddr_t addr;
     unsigned int level = 0;
     unsigned int cur_root_table = ~0;
     unsigned int cur_offset[4] = { ~0, ~0, ~0, ~0 };
     unsigned int count = 0;
-    const unsigned long sgfn = paddr_to_pfn(start_gpaddr),
-                        egfn = paddr_to_pfn(end_gpaddr);
     const unsigned int preempt_count_limit = (op == MEMACCESS) ? 1 : 0x2000;
     const bool_t preempt = !is_idle_vcpu(current);
     bool_t flush = false;
@@ -986,9 +987,9 @@ static int apply_p2m_changes(struct domain *d,
                  * Preempt setting mem_access permissions as required by XSA-89,
                  * if it's not the last iteration.
                  */
-                uint32_t progress = paddr_to_pfn(addr) - sgfn + 1;
+                uint32_t progress = paddr_to_pfn(addr) - gfn_x(sgfn) + 1;
 
-                if ( (egfn - sgfn) > progress && !(progress & mask) )
+                if ( nr > progress && !(progress & mask) )
                 {
                     rc = progress;
                     goto out;
@@ -1117,8 +1118,9 @@ static int apply_p2m_changes(struct domain *d,
 
     if ( op == INSERT )
     {
-        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn, _gfn(egfn));
-        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, _gfn(sgfn));
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
+                                      gfn_add(sgfn, nr));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -1127,7 +1129,7 @@ out:
     if ( flush )
     {
         flush_tlb_domain(d);
-        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
         if ( !rc )
             rc = ret;
     }
@@ -1146,12 +1148,14 @@ out:
     if ( rc < 0 && ( op == INSERT ) &&
          addr != start_gpaddr )
     {
+        unsigned long gfn = paddr_to_pfn(addr);
+
         BUG_ON(addr == end_gpaddr);
         /*
          * addr keeps the address of the end of the last successfully-inserted
          * mapping.
          */
-        apply_p2m_changes(d, REMOVE, start_gpaddr, addr, orig_maddr,
+        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
                           mattr, 0, p2m_invalid, d->arch.p2m.default_access);
     }
 
@@ -1164,10 +1168,7 @@ static inline int p2m_insert_mapping(struct domain *d,
                                      mfn_t mfn,
                                      int mattr, p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
+    return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
                              mattr, 0, t, d->arch.p2m.default_access);
 }
 
@@ -1176,10 +1177,7 @@ static inline int p2m_remove_mapping(struct domain *d,
                                      unsigned long nr,
                                      mfn_t mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
+    return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
                              /* arguments below not used when removing mapping */
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
@@ -1399,13 +1397,13 @@ err:
 int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
+    unsigned long nr;
 
-    return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(gfn_x(p2m->lowest_mapped_gfn)),
-                              pfn_to_paddr(gfn_x(p2m->max_mapped_gfn)),
-                              pfn_to_paddr(mfn_x(INVALID_MFN)),
-                              MATTR_MEM, 0, p2m_invalid,
-                              d->arch.p2m.default_access);
+    nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
+
+    return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
+                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
@@ -1416,10 +1414,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
     start = gfn_max(start, p2m->lowest_mapped_gfn);
     end = gfn_min(end, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH,
-                             pfn_to_paddr(gfn_x(start)),
-                             pfn_to_paddr(gfn_x(end)),
-                             pfn_to_paddr(mfn_x(INVALID_MFN)),
+    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
@@ -1828,10 +1823,9 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
         return 0;
     }
 
-    rc = apply_p2m_changes(d, MEMACCESS,
-                           pfn_to_paddr(gfn_x(gfn) + start),
-                           pfn_to_paddr(gfn_x(gfn) + nr),
-                           0, MATTR_MEM, mask, 0, a);
+    rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
+                           (nr - start), INVALID_MFN,
+                           MATTR_MEM, mask, 0, a);
     if ( rc < 0 )
         return rc;
     else if ( rc > 0 )
-- 
1.9.1



* Re: [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  2016-06-28 16:17 ` [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
@ 2016-06-28 16:43   ` Andrew Cooper
  2016-06-28 16:47     ` Juergen Gross
  2016-06-29  9:39   ` Jan Beulich
  1 sibling, 1 reply; 49+ messages in thread
From: Andrew Cooper @ 2016-06-28 16:43 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, Suravee Suthikulpanit, Jan Beulich

On 28/06/16 17:17, Julien Grall wrote:
> A variable containing a guest frame should be compared to INVALID_GFN
> and not INVALID_GFN.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

I suspect these (mis)uses predate my movement of INVALID_GFN from x86
paging code to common code.


* Re: [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  2016-06-28 16:43   ` Andrew Cooper
@ 2016-06-28 16:47     ` Juergen Gross
  2016-06-28 17:04       ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Juergen Gross @ 2016-06-28 16:47 UTC (permalink / raw)
  To: Andrew Cooper, Julien Grall, xen-devel
  Cc: sstabellini, Jan Beulich, Suravee Suthikulpanit

On 28/06/16 18:43, Andrew Cooper wrote:
> On 28/06/16 17:17, Julien Grall wrote:
>> A variable containing a guest frame should be compared to INVALID_GFN
>> and not INVALID_GFN.

I think the text should be changed? I'd expect one 'G' being replaced
by an 'M'. :-)


Juergen

>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> I suspect these (mis)uses predate my movement of INVALID_GFN from x86
> paging code to common code.
> 



* Re: [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  2016-06-28 16:47     ` Juergen Gross
@ 2016-06-28 17:04       ` Julien Grall
  0 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-28 17:04 UTC (permalink / raw)
  To: Juergen Gross, Andrew Cooper, xen-devel
  Cc: sstabellini, Jan Beulich, Suravee Suthikulpanit



On 28/06/16 17:47, Juergen Gross wrote:
> On 28/06/16 18:43, Andrew Cooper wrote:
>> On 28/06/16 17:17, Julien Grall wrote:
>>> A variable containing a guest frame should be compared to INVALID_GFN
>>> and not INVALID_GFN.
>
> I think the text should be changed? I'd expect one 'G' being replaced
> bay 'M'. :-)

Correct, the second one should be replaced by an 'M'. Thank you for spotting it.

Cheers,

>>>
>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> I suspect these (mis)uses predate my movement of INVALID_GFN from x86
>> paging code to common code.
>>
>

-- 
Julien Grall


* Re: [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN
  2016-06-28 16:17 ` [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN Julien Grall
@ 2016-06-28 17:05   ` Andrew Cooper
  2016-07-06 10:27   ` Stefano Stabellini
  1 sibling, 0 replies; 49+ messages in thread
From: Andrew Cooper @ 2016-06-28 17:05 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Kevin Tian, sstabellini, Jun Nakajima, George Dunlap,
	Liu Jinsong, Christoph Egger, Tim Deegan, Paul Durrant,
	Jan Beulich, Mukesh Rathor

On 28/06/16 17:17, Julien Grall wrote:
> @@ -60,16 +60,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>          return INVALID_MFN;
>      }
>  
> -    mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
> +    mfn = get_gfn(dp, *gfn, &gfntype);
>      if ( p2m_is_readonly(gfntype) && toaddr )
>      {
>          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>          mfn = INVALID_MFN;
>      }
>      else
> -        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n",

This should be PRImfn rather than assuming %lx is the correct format
specifier for mfn_t.

Similarly throughout the patch.
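
Something along these lines -- a sketch only, since PRImfn here is the name
suggested above, and whether such a macro is actually defined in the tree at
this point is an assumption:

    /* Let a dedicated format macro say how an mfn_t prints, rather than
     * hard-coding %lx at every call site. */
    printk("mfn: %"PRImfn"\n", mfn_x(mfn));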

> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 6258a5b..6f90510 100644
> @@ -493,7 +493,7 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>              rc = set_rc;
>  
>          gfn += 1ul << order;
> -        if ( mfn_x(mfn) != INVALID_MFN )
> +        if ( !mfn_eq(mfn, INVALID_MFN) )
>              mfn = _mfn(mfn_x(mfn) + (1ul << order));

This could turn into mfn_add(mfn, 1ul << order), if you fancy fixing it up.
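
A minimal sketch of that clean-up, using the mfn_add() helper already used
elsewhere in this series:

    gfn += 1ul << order;
    if ( !mfn_eq(mfn, INVALID_MFN) )
        mfn = mfn_add(mfn, 1ul << order); /* typesafe form of the open-coded addition */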

> @@ -1107,7 +1107,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
>      }
>  
>      /* Do not use mfn_valid() here as it will usually fail for MMIO pages. */
> -    if ( (INVALID_MFN == mfn_x(actual_mfn)) || (t != p2m_mmio_direct) )
> +    if ( (mfn_eq(actual_mfn, INVALID_MFN)) || (t != p2m_mmio_direct) )

You can drop the brackets around mfn_eq().

> @@ -838,7 +838,7 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
>  
>      SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
>      BUG();
> -    return _mfn(INVALID_MFN);
> +    return INVALID_MFN;

You should be able to get away with deleting this return.  Now that
BUG() is properly annotated with unreachable(), the compiler shouldn't
warn about this exit path from the function.
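
That is, the tail of oos_snapshot_lookup() could simply end as below -- a
sketch, assuming BUG() does expand to something ending in unreachable():

    SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
    BUG();  /* annotated as unreachable, so no return statement is required */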

Otherwise, no major concerns.  Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com>


* Re: [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN
  2016-06-28 16:17 ` [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN Julien Grall
@ 2016-06-28 17:12   ` Andrew Cooper
  2016-07-06 10:28   ` Stefano Stabellini
  1 sibling, 0 replies; 49+ messages in thread
From: Andrew Cooper @ 2016-06-28 17:12 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Kevin Tian, sstabellini, Feng Wu, Jan Beulich, Jun Nakajima,
	Tim Deegan, George Dunlap, Paul Durrant, Suravee Suthikulpanit,
	Mukesh Rathor, Boris Ostrovsky

On 28/06/16 17:17, Julien Grall wrote:
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index a929e3b..b9ffce2 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5298,7 +5298,7 @@ static int do_altp2m_op(
>               a.u.enable_notify.vcpu_id != curr->vcpu_id )
>              rc = -EINVAL;
>  
> -        if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> +        if ( (!gfn_eq(vcpu_altp2m(curr).veinfo_gfn,  INVALID_GFN)) ||

You can drop a set of brackets here, and the double space before
INVALID_GFN.

Otherwise, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH v5 13/17] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region...
  2016-06-28 16:17 ` [PATCH v5 13/17] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region Julien Grall
@ 2016-06-28 17:21   ` Andrew Cooper
  2016-06-29 12:20     ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Andrew Cooper @ 2016-06-28 17:21 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini

On 28/06/16 17:17, Julien Grall wrote:
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index f11094e..5ffc3df 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1211,20 +1211,20 @@ int unmap_mmio_regions(struct domain *d,
>  }
>  
>  int map_dev_mmio_region(struct domain *d,
> -                        unsigned long start_gfn,
> +                        gfn_t gfn,
>                          unsigned long nr,
> -                        unsigned long mfn)
> +                        mfn_t mfn)
>  {
>      int res;
>  
> -    if ( !(nr && iomem_access_permitted(d, mfn, mfn + nr - 1)) )
> +    if ( !(nr && iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + nr - 1)) )
>          return 0;
>  
> -    res = map_mmio_regions(d, _gfn(start_gfn), nr, _mfn(mfn));
> +    res = map_mmio_regions(d, gfn, nr, mfn);
>      if ( res < 0 )
>      {
>          printk(XENLOG_G_ERR "Unable to map [%#lx - %#lx] in Dom%d\n",

%PRImfn

I would also recommend qualifying what is being mapped, so "to map mfns
[...".

~Andrew


* Re: [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping
  2016-06-28 16:17 ` [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
@ 2016-06-28 17:23   ` Andrew Cooper
  2016-07-06 10:59   ` Stefano Stabellini
  1 sibling, 0 replies; 49+ messages in thread
From: Andrew Cooper @ 2016-06-28 17:23 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini

On 28/06/16 17:17, Julien Grall wrote:
>  void guest_physmap_remove_page(struct domain *d,
>                                 gfn_t gfn,
>                                 mfn_t mfn, unsigned int page_order)
>  {
> -    apply_p2m_changes(d, REMOVE,
> -                      pfn_to_paddr(gfn_x(gfn)),
> -                      pfn_to_paddr(gfn_x(gfn) + (1<<page_order)),
> -                      pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
> -                      d->arch.p2m.default_access);
> +    p2m_remove_mapping(d, gfn, (1 << page_order), mfn);

As you are changing these, it should be 1ul << page_order to avoid the
potential of overflow.
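
A sketch of the concern: the literal 1 is an int, so the shift happens in int
arithmetic and would overflow for a (here hypothetical) page_order of 31 or
more before being widened to unsigned long. Forcing the constant's type
avoids that:

    p2m_remove_mapping(d, gfn, 1UL << page_order, mfn);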

~Andrew


* Re: [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn
  2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (16 preceding siblings ...)
  2016-06-28 16:17 ` [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe Julien Grall
@ 2016-06-28 18:32 ` Andrew Cooper
  2016-06-29  9:24   ` Jan Beulich
  17 siblings, 1 reply; 49+ messages in thread
From: Andrew Cooper @ 2016-06-28 18:32 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Wei Liu, Kevin Tian, sstabellini, Feng Wu, Jun Nakajima,
	George Dunlap, Liu Jinsong, Christoph Egger, Ian Jackson,
	Tim Deegan, Paul Durrant, Shannon Zhao, Jan Beulich,
	Suravee Suthikulpanit, Boris Ostrovsky, Mukesh Rathor

On 28/06/16 17:17, Julien Grall wrote:
> Julien Grall (17):
>   xen: Use typesafe gfn/mfn in guest_physmap_* helpers
>   xen: Use typesafe gfn in xenmem_add_to_physmap_one
>   xen/arm: Rename grant_table_gfpn into grant_table_gfn and use the
>     typesafe gfn

Committed patches 1-3.

Thanks,

~Andrew


* Re: [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn
  2016-06-28 18:32 ` [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Andrew Cooper
@ 2016-06-29  9:24   ` Jan Beulich
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Beulich @ 2016-06-29  9:24 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tim Deegan, Kevin Tian, sstabellini, Wei Liu, Jun Nakajima,
	George Dunlap, Liu Jinsong, Christoph Egger, Ian Jackson,
	xen-devel, Julien Grall, Paul Durrant, Shannon Zhao,
	Suravee Suthikulpanit, Feng Wu, Boris Ostrovsky, Mukesh Rathor

>>> On 28.06.16 at 20:32, <andrew.cooper3@citrix.com> wrote:
> On 28/06/16 17:17, Julien Grall wrote:
>> Julien Grall (17):
>>   xen: Use typesafe gfn/mfn in guest_physmap_* helpers
>>   xen: Use typesafe gfn in xenmem_add_to_physmap_one
>>   xen/arm: Rename grant_table_gfpn into grant_table_gfn and use the
>>     typesafe gfn
> 
> Committed patches 1-3.

Well, ignoring the formal need for George's ack on patch 1, I could
have done this days ago.

Jan



* Re: [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  2016-06-28 16:17 ` [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
  2016-06-28 16:43   ` Andrew Cooper
@ 2016-06-29  9:39   ` Jan Beulich
  1 sibling, 0 replies; 49+ messages in thread
From: Jan Beulich @ 2016-06-29  9:39 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, Suravee Suthikulpanit, xen-devel

>>> On 28.06.16 at 18:17, <julien.grall@arm.com> wrote:
> A variable containing a guest frame should be compared to INVALID_GFN
> and not INVALID_GFN.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
(with the suggested correction to the typo in the description)



* Re: [PATCH v5 13/17] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region...
  2016-06-28 17:21   ` Andrew Cooper
@ 2016-06-29 12:20     ` Julien Grall
  0 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2016-06-29 12:20 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: sstabellini

Hi Andrew,

On 28/06/2016 18:21, Andrew Cooper wrote:
> On 28/06/16 17:17, Julien Grall wrote:
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index f11094e..5ffc3df 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1211,20 +1211,20 @@ int unmap_mmio_regions(struct domain *d,
>>  }
>>
>>  int map_dev_mmio_region(struct domain *d,
>> -                        unsigned long start_gfn,
>> +                        gfn_t gfn,
>>                          unsigned long nr,
>> -                        unsigned long mfn)
>> +                        mfn_t mfn)
>>  {
>>      int res;
>>
>> -    if ( !(nr && iomem_access_permitted(d, mfn, mfn + nr - 1)) )
>> +    if ( !(nr && iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + nr - 1)) )
>>          return 0;
>>
>> -    res = map_mmio_regions(d, _gfn(start_gfn), nr, _mfn(mfn));
>> +    res = map_mmio_regions(d, gfn, nr, mfn);
>>      if ( res < 0 )
>>      {
>>          printk(XENLOG_G_ERR "Unable to map [%#lx - %#lx] in Dom%d\n",
>
> %PRImfn
>
> I would also recommend qualifying what is being mapped, so "to map mfns
> [...".

Good idea, I will modify it in the next version.

Cheers,

-- 
Julien Grall


* Re: [PATCH v5 04/17] xen: Use the typesafe mfn and gfn in map_mmio_regions...
  2016-06-28 16:17 ` [PATCH v5 04/17] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
@ 2016-07-06 10:23   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:23 UTC (permalink / raw)
  To: Julien Grall
  Cc: Tim Deegan, sstabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, xen-devel, Jan Beulich

On Tue, 28 Jun 2016, Julien Grall wrote:
> to avoid mixing machine frame with guest frame.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
>     Changes in v3:
>         - Use mfn_add when it is possible
>         - Add Jan's acked-by
> ---
>  xen/arch/arm/domain_build.c      |  4 ++--
>  xen/arch/arm/gic-v2.c            |  4 ++--
>  xen/arch/arm/p2m.c               | 22 +++++++++++-----------
>  xen/arch/arm/platforms/exynos5.c |  8 ++++----
>  xen/arch/arm/platforms/omap5.c   | 16 ++++++++--------
>  xen/arch/arm/vgic-v2.c           |  4 ++--
>  xen/arch/x86/mm/p2m.c            | 18 ++++++++++--------
>  xen/common/domctl.c              |  4 ++--
>  xen/include/xen/p2m-common.h     |  8 ++++----
>  9 files changed, 45 insertions(+), 43 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9035486..49185f0 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1036,9 +1036,9 @@ static int map_range_to_domain(const struct dt_device_node *dev,
>      if ( need_mapping )
>      {
>          res = map_mmio_regions(d,
> -                               paddr_to_pfn(addr),
> +                               _gfn(paddr_to_pfn(addr)),
>                                 DIV_ROUND_UP(len, PAGE_SIZE),
> -                               paddr_to_pfn(addr));
> +                               _mfn(paddr_to_pfn(addr)));
>          if ( res < 0 )
>          {
>              printk(XENLOG_ERR "Unable to map 0x%"PRIx64
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 4e2f4c7..3893ece 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -601,9 +601,9 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
>                 d->domain_id, v2m_data->addr, v2m_data->size,
>                 v2m_data->spi_start, v2m_data->nr_spis);
>  
> -        ret = map_mmio_regions(d, paddr_to_pfn(v2m_data->addr),
> +        ret = map_mmio_regions(d, _gfn(paddr_to_pfn(v2m_data->addr)),
>                              DIV_ROUND_UP(v2m_data->size, PAGE_SIZE),
> -                            paddr_to_pfn(v2m_data->addr));
> +                            _mfn(paddr_to_pfn(v2m_data->addr)));
>          if ( ret )
>          {
>              printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 0395a40..34563bb 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1245,27 +1245,27 @@ int unmap_regions_rw_cache(struct domain *d,
>  }
>  
>  int map_mmio_regions(struct domain *d,
> -                     unsigned long start_gfn,
> +                     gfn_t start_gfn,
>                       unsigned long nr,
> -                     unsigned long mfn)
> +                     mfn_t mfn)
>  {
>      return apply_p2m_changes(d, INSERT,
> -                             pfn_to_paddr(start_gfn),
> -                             pfn_to_paddr(start_gfn + nr),
> -                             pfn_to_paddr(mfn),
> +                             pfn_to_paddr(gfn_x(start_gfn)),
> +                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> +                             pfn_to_paddr(mfn_x(mfn)),
>                               MATTR_DEV, 0, p2m_mmio_direct,
>                               d->arch.p2m.default_access);
>  }
>  
>  int unmap_mmio_regions(struct domain *d,
> -                       unsigned long start_gfn,
> +                       gfn_t start_gfn,
>                         unsigned long nr,
> -                       unsigned long mfn)
> +                       mfn_t mfn)
>  {
>      return apply_p2m_changes(d, REMOVE,
> -                             pfn_to_paddr(start_gfn),
> -                             pfn_to_paddr(start_gfn + nr),
> -                             pfn_to_paddr(mfn),
> +                             pfn_to_paddr(gfn_x(start_gfn)),
> +                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> +                             pfn_to_paddr(mfn_x(mfn)),
>                               MATTR_DEV, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
>  }
> @@ -1280,7 +1280,7 @@ int map_dev_mmio_region(struct domain *d,
>      if ( !(nr && iomem_access_permitted(d, mfn, mfn + nr - 1)) )
>          return 0;
>  
> -    res = map_mmio_regions(d, start_gfn, nr, mfn);
> +    res = map_mmio_regions(d, _gfn(start_gfn), nr, _mfn(mfn));
>      if ( res < 0 )
>      {
>          printk(XENLOG_G_ERR "Unable to map [%#lx - %#lx] in Dom%d\n",
> diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
> index bf4964d..c43934f 100644
> --- a/xen/arch/arm/platforms/exynos5.c
> +++ b/xen/arch/arm/platforms/exynos5.c
> @@ -83,12 +83,12 @@ static int exynos5_init_time(void)
>  static int exynos5250_specific_mapping(struct domain *d)
>  {
>      /* Map the chip ID */
> -    map_mmio_regions(d, paddr_to_pfn(EXYNOS5_PA_CHIPID), 1,
> -                     paddr_to_pfn(EXYNOS5_PA_CHIPID));
> +    map_mmio_regions(d, _gfn(paddr_to_pfn(EXYNOS5_PA_CHIPID)), 1,
> +                     _mfn(paddr_to_pfn(EXYNOS5_PA_CHIPID)));
>  
>      /* Map the PWM region */
> -    map_mmio_regions(d, paddr_to_pfn(EXYNOS5_PA_TIMER), 2,
> -                     paddr_to_pfn(EXYNOS5_PA_TIMER));
> +    map_mmio_regions(d, _gfn(paddr_to_pfn(EXYNOS5_PA_TIMER)), 2,
> +                     _mfn(paddr_to_pfn(EXYNOS5_PA_TIMER)));
>  
>      return 0;
>  }
> diff --git a/xen/arch/arm/platforms/omap5.c b/xen/arch/arm/platforms/omap5.c
> index a49ba62..539588e 100644
> --- a/xen/arch/arm/platforms/omap5.c
> +++ b/xen/arch/arm/platforms/omap5.c
> @@ -102,20 +102,20 @@ static int omap5_init_time(void)
>  static int omap5_specific_mapping(struct domain *d)
>  {
>      /* Map the PRM module */
> -    map_mmio_regions(d, paddr_to_pfn(OMAP5_PRM_BASE), 2,
> -                     paddr_to_pfn(OMAP5_PRM_BASE));
> +    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_PRM_BASE)), 2,
> +                     _mfn(paddr_to_pfn(OMAP5_PRM_BASE)));
>  
>      /* Map the PRM_MPU */
> -    map_mmio_regions(d, paddr_to_pfn(OMAP5_PRCM_MPU_BASE), 1,
> -                     paddr_to_pfn(OMAP5_PRCM_MPU_BASE));
> +    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_PRCM_MPU_BASE)), 1,
> +                     _mfn(paddr_to_pfn(OMAP5_PRCM_MPU_BASE)));
>  
>      /* Map the Wakeup Gen */
> -    map_mmio_regions(d, paddr_to_pfn(OMAP5_WKUPGEN_BASE), 1,
> -                     paddr_to_pfn(OMAP5_WKUPGEN_BASE));
> +    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_WKUPGEN_BASE)), 1,
> +                     _mfn(paddr_to_pfn(OMAP5_WKUPGEN_BASE)));
>  
>      /* Map the on-chip SRAM */
> -    map_mmio_regions(d, paddr_to_pfn(OMAP5_SRAM_PA), 32,
> -                     paddr_to_pfn(OMAP5_SRAM_PA));
> +    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_SRAM_PA)), 32,
> +                     _mfn(paddr_to_pfn(OMAP5_SRAM_PA)));
>  
>      return 0;
>  }
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 9adb4a9..cbe61cf 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -688,8 +688,8 @@ static int vgic_v2_domain_init(struct domain *d)
>       * Map the gic virtual cpu interface in the gic cpu interface
>       * region of the guest.
>       */
> -    ret = map_mmio_regions(d, paddr_to_pfn(cbase), csize / PAGE_SIZE,
> -                           paddr_to_pfn(vbase));
> +    ret = map_mmio_regions(d, _gfn(paddr_to_pfn(cbase)), csize / PAGE_SIZE,
> +                           _mfn(paddr_to_pfn(vbase)));
>      if ( ret )
>          return ret;
>  
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 16733a4..6258a5b 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -2214,9 +2214,9 @@ static unsigned int mmio_order(const struct domain *d,
>  #define MAP_MMIO_MAX_ITER 64 /* pretty arbitrary */
>  
>  int map_mmio_regions(struct domain *d,
> -                     unsigned long start_gfn,
> +                     gfn_t start_gfn,
>                       unsigned long nr,
> -                     unsigned long mfn)
> +                     mfn_t mfn)
>  {
>      int ret = 0;
>      unsigned long i;
> @@ -2229,10 +2229,11 @@ int map_mmio_regions(struct domain *d,
>            i += 1UL << order, ++iter )
>      {
>          /* OR'ing gfn and mfn values will return an order suitable to both. */
> -        for ( order = mmio_order(d, (start_gfn + i) | (mfn + i), nr - i); ;
> +        for ( order = mmio_order(d, (gfn_x(start_gfn) + i) | (mfn_x(mfn) + i), nr - i); ;
>                order = ret - 1 )
>          {
> -            ret = set_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i), order,
> +            ret = set_mmio_p2m_entry(d, gfn_x(start_gfn) + i,
> +                                     mfn_add(mfn, i), order,
>                                       p2m_get_hostp2m(d)->default_access);
>              if ( ret <= 0 )
>                  break;
> @@ -2246,9 +2247,9 @@ int map_mmio_regions(struct domain *d,
>  }
>  
>  int unmap_mmio_regions(struct domain *d,
> -                       unsigned long start_gfn,
> +                       gfn_t start_gfn,
>                         unsigned long nr,
> -                       unsigned long mfn)
> +                       mfn_t mfn)
>  {
>      int ret = 0;
>      unsigned long i;
> @@ -2261,10 +2262,11 @@ int unmap_mmio_regions(struct domain *d,
>            i += 1UL << order, ++iter )
>      {
>          /* OR'ing gfn and mfn values will return an order suitable to both. */
> -        for ( order = mmio_order(d, (start_gfn + i) | (mfn + i), nr - i); ;
> +        for ( order = mmio_order(d, (gfn_x(start_gfn) + i) | (mfn_x(mfn) + i), nr - i); ;
>                order = ret - 1 )
>          {
> -            ret = clear_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i), order);
> +            ret = clear_mmio_p2m_entry(d, gfn_x(start_gfn) + i,
> +                                       mfn_add(mfn, i), order);
>              if ( ret <= 0 )
>                  break;
>              ASSERT(ret <= order);
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index e43904e..b784e6c 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -1074,7 +1074,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>                     "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>                     d->domain_id, gfn, mfn, nr_mfns);
>  
> -            ret = map_mmio_regions(d, gfn, nr_mfns, mfn);
> +            ret = map_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
>              if ( ret < 0 )
>                  printk(XENLOG_G_WARNING
>                         "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx ret:%ld\n",
> @@ -1086,7 +1086,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>                     "memory_map:remove: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>                     d->domain_id, gfn, mfn, nr_mfns);
>  
> -            ret = unmap_mmio_regions(d, gfn, nr_mfns, mfn);
> +            ret = unmap_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
>              if ( ret < 0 && is_hardware_domain(current->domain) )
>                  printk(XENLOG_ERR
>                         "memory_map: error %ld removing dom%d access to [%lx,%lx]\n",
> diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
> index 6374a5b..b4f9077 100644
> --- a/xen/include/xen/p2m-common.h
> +++ b/xen/include/xen/p2m-common.h
> @@ -37,13 +37,13 @@ typedef enum {
>   *  * the guest physical address space to map, starting from the machine
>   *   * frame number mfn. */
>  int map_mmio_regions(struct domain *d,
> -                     unsigned long start_gfn,
> +                     gfn_t start_gfn,
>                       unsigned long nr,
> -                     unsigned long mfn);
> +                     mfn_t mfn);
>  int unmap_mmio_regions(struct domain *d,
> -                       unsigned long start_gfn,
> +                       gfn_t start_gfn,
>                         unsigned long nr,
> -                       unsigned long mfn);
> +                       mfn_t mfn);
>  
>  /*
>   * Set access type for a region of gfns.
> -- 
> 1.9.1
> 


* Re: [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN
  2016-06-28 16:17 ` [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN Julien Grall
  2016-06-28 17:05   ` Andrew Cooper
@ 2016-07-06 10:27   ` Stefano Stabellini
  1 sibling, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:27 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, sstabellini, Jun Nakajima, George Dunlap,
	Liu Jinsong, Christoph Egger, Tim Deegan, xen-devel,
	Paul Durrant, Jan Beulich, Andrew Cooper, Mukesh Rathor

On Tue, 28 Jun 2016, Julien Grall wrote:
> Also take the opportunity to convert arch/x86/debug.c to the typesafe
> mfn.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> ---
> Cc: Christoph Egger <chegger@amazon.de>
> Cc: Liu Jinsong <jinsong.liu@alibaba-inc.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Tim Deegan <tim@xen.org>
> 
>     Changes in v5:
>         - Patch added
> ---
>  xen/arch/arm/p2m.c              |  4 ++--

For the ARM bits:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>



>  xen/arch/x86/cpu/mcheck/mce.c   |  2 +-
>  xen/arch/x86/debug.c            | 50 ++++++++++++++++++++-------------------
>  xen/arch/x86/hvm/hvm.c          |  6 ++---
>  xen/arch/x86/hvm/viridian.c     |  6 ++---
>  xen/arch/x86/hvm/vmx/vmx.c      |  2 +-
>  xen/arch/x86/mm/guest_walk.c    |  4 ++--
>  xen/arch/x86/mm/hap/hap.c       |  4 ++--
>  xen/arch/x86/mm/p2m-ept.c       |  6 ++---
>  xen/arch/x86/mm/p2m-pod.c       | 18 +++++++-------
>  xen/arch/x86/mm/p2m-pt.c        | 18 +++++++-------
>  xen/arch/x86/mm/p2m.c           | 52 ++++++++++++++++++++---------------------
>  xen/arch/x86/mm/paging.c        | 12 +++++-----
>  xen/arch/x86/mm/shadow/common.c | 44 +++++++++++++++++-----------------
>  xen/arch/x86/mm/shadow/multi.c  | 36 ++++++++++++++--------------
>  xen/common/domain.c             |  6 ++---
>  xen/common/grant_table.c        |  6 ++---
>  xen/include/xen/mm.h            |  2 +-
>  18 files changed, 140 insertions(+), 138 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 34563bb..d690602 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1461,7 +1461,7 @@ int relinquish_p2m_mapping(struct domain *d)
>      return apply_p2m_changes(d, RELINQUISH,
>                                pfn_to_paddr(p2m->lowest_mapped_gfn),
>                                pfn_to_paddr(p2m->max_mapped_gfn),
> -                              pfn_to_paddr(INVALID_MFN),
> +                              pfn_to_paddr(mfn_x(INVALID_MFN)),
>                                MATTR_MEM, 0, p2m_invalid,
>                                d->arch.p2m.default_access);
>  }
> @@ -1476,7 +1476,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
>      return apply_p2m_changes(d, CACHEFLUSH,
>                               pfn_to_paddr(start_mfn),
>                               pfn_to_paddr(end_mfn),
> -                             pfn_to_paddr(INVALID_MFN),
> +                             pfn_to_paddr(mfn_x(INVALID_MFN)),
>                               MATTR_MEM, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
>  }
> diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
> index edcbe48..2695b0c 100644
> --- a/xen/arch/x86/cpu/mcheck/mce.c
> +++ b/xen/arch/x86/cpu/mcheck/mce.c
> @@ -1455,7 +1455,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
>                  gfn = PFN_DOWN(gaddr);
>                  mfn = mfn_x(get_gfn(d, gfn, &t));
>  
> -                if ( mfn == INVALID_MFN )
> +                if ( mfn == mfn_x(INVALID_MFN) )
>                  {
>                      put_gfn(d, gfn);
>                      put_domain(d);
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 58cae22..3479f7c 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -43,11 +43,11 @@ typedef unsigned long dbgva_t;
>  typedef unsigned char dbgbyte_t;
>  
>  /* Returns: mfn for the given (hvm guest) vaddr */
> -static unsigned long 
> +static mfn_t
>  dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>                  unsigned long *gfn)
>  {
> -    unsigned long mfn;
> +    mfn_t mfn;
>      uint32_t pfec = PFEC_page_present;
>      p2m_type_t gfntype;
>  
> @@ -60,16 +60,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>          return INVALID_MFN;
>      }
>  
> -    mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
> +    mfn = get_gfn(dp, *gfn, &gfntype);
>      if ( p2m_is_readonly(gfntype) && toaddr )
>      {
>          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>          mfn = INVALID_MFN;
>      }
>      else
> -        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n",
> +              vaddr, dp->domain_id, mfn_x(mfn));
>  
> -    if ( mfn == INVALID_MFN )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>      {
>          put_gfn(dp, *gfn);
>          *gfn = INVALID_GFN;
> @@ -91,7 +92,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>   *       mode.
>   * Returns: mfn for the given (pv guest) vaddr 
>   */
> -static unsigned long 
> +static mfn_t
>  dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>  {
>      l4_pgentry_t l4e, *l4t;
> @@ -99,31 +100,31 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>      l2_pgentry_t l2e, *l2t;
>      l1_pgentry_t l1e, *l1t;
>      unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
> -    unsigned long mfn = cr3 >> PAGE_SHIFT;
> +    mfn_t mfn = _mfn(cr3 >> PAGE_SHIFT);
>  
>      DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
>            cr3, pgd3val);
>  
>      if ( pgd3val == 0 )
>      {
> -        l4t = map_domain_page(_mfn(mfn));
> +        l4t = map_domain_page(mfn);
>          l4e = l4t[l4_table_offset(vaddr)];
>          unmap_domain_page(l4t);
> -        mfn = l4e_get_pfn(l4e);
> +        mfn = _mfn(l4e_get_pfn(l4e));
>          DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
> -              l4_table_offset(vaddr), l4e, mfn);
> +              l4_table_offset(vaddr), l4e, mfn_x(mfn));
>          if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
>          {
>              DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>              return INVALID_MFN;
>          }
>  
> -        l3t = map_domain_page(_mfn(mfn));
> +        l3t = map_domain_page(mfn);
>          l3e = l3t[l3_table_offset(vaddr)];
>          unmap_domain_page(l3t);
> -        mfn = l3e_get_pfn(l3e);
> +        mfn = _mfn(l3e_get_pfn(l3e));
>          DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
> -              l3_table_offset(vaddr), l3e, mfn);
> +              l3_table_offset(vaddr), l3e, mfn_x(mfn));
>          if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
>               (l3e_get_flags(l3e) & _PAGE_PSE) )
>          {
> @@ -132,26 +133,26 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>          }
>      }
>  
> -    l2t = map_domain_page(_mfn(mfn));
> +    l2t = map_domain_page(mfn);
>      l2e = l2t[l2_table_offset(vaddr)];
>      unmap_domain_page(l2t);
> -    mfn = l2e_get_pfn(l2e);
> +    mfn = _mfn(l2e_get_pfn(l2e));
>      DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
> -          l2e, mfn);
> +          l2e, mfn_x(mfn));
>      if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
>           (l2e_get_flags(l2e) & _PAGE_PSE) )
>      {
>          DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>          return INVALID_MFN;
>      }
> -    l1t = map_domain_page(_mfn(mfn));
> +    l1t = map_domain_page(mfn);
>      l1e = l1t[l1_table_offset(vaddr)];
>      unmap_domain_page(l1t);
> -    mfn = l1e_get_pfn(l1e);
> +    mfn = _mfn(l1e_get_pfn(l1e));
>      DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
> -          l1e, mfn);
> +          l1e, mfn_x(mfn));
>  
> -    return mfn_valid(mfn) ? mfn : INVALID_MFN;
> +    return mfn_valid(mfn_x(mfn)) ? mfn : INVALID_MFN;
>  }
>  
>  /* Returns: number of bytes remaining to be copied */
> @@ -163,23 +164,24 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
>      {
>          char *va;
>          unsigned long addr = (unsigned long)gaddr;
> -        unsigned long mfn, gfn = INVALID_GFN, pagecnt;
> +        mfn_t mfn;
> +        unsigned long gfn = INVALID_GFN, pagecnt;
>  
>          pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
>  
>          mfn = (has_hvm_container_domain(dp)
>                 ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
>                 : dbg_pv_va2mfn(addr, dp, pgd3));
> -        if ( mfn == INVALID_MFN ) 
> +        if ( mfn_eq(mfn, INVALID_MFN) )
>              break;
>  
> -        va = map_domain_page(_mfn(mfn));
> +        va = map_domain_page(mfn);
>          va = va + (addr & (PAGE_SIZE-1));
>  
>          if ( toaddr )
>          {
>              copy_from_user(va, buf, pagecnt);    /* va = buf */
> -            paging_mark_dirty(dp, mfn);
> +            paging_mark_dirty(dp, mfn_x(mfn));
>          }
>          else
>          {
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index c89ab6e..a929e3b 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1796,7 +1796,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>          p2m = hostp2m;
>  
>      /* Check access permissions first, then handle faults */
> -    if ( mfn_x(mfn) != INVALID_MFN )
> +    if ( !mfn_eq(mfn, INVALID_MFN) )
>      {
>          bool_t violation;
>  
> @@ -5299,8 +5299,8 @@ static int do_altp2m_op(
>              rc = -EINVAL;
>  
>          if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> -             (mfn_x(get_gfn_query_unlocked(curr->domain,
> -                    a.u.enable_notify.gfn, &p2mt)) == INVALID_MFN) )
> +             (mfn_eq(get_gfn_query_unlocked(curr->domain,
> +                     a.u.enable_notify.gfn, &p2mt), INVALID_MFN)) )
>              return -EINVAL;
>  
>          vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 8253fd0..575acdd 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -196,7 +196,7 @@ static void enable_hypercall_page(struct domain *d)
>          if ( page )
>              put_page(page);
>          gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> -                 page ? page_to_mfn(page) : INVALID_MFN);
> +                 page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
>          return;
>      }
>  
> @@ -269,7 +269,7 @@ static void initialize_apic_assist(struct vcpu *v)
>  
>   fail:
>      gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> -             page ? page_to_mfn(page) : INVALID_MFN);
> +             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
>  }
>  
>  static void teardown_apic_assist(struct vcpu *v)
> @@ -349,7 +349,7 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
>          if ( page )
>              put_page(page);
>          gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> -                 page ? page_to_mfn(page) : INVALID_MFN);
> +                 page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
>          return;
>      }
>  
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 9d4121e..888a6b1 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2025,7 +2025,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
>  
>              mfn = get_gfn_query_unlocked(d, gfn_x(vcpu_altp2m(v).veinfo_gfn), &t);
>  
> -            if ( mfn_x(mfn) != INVALID_MFN )
> +            if ( !mfn_eq(mfn, INVALID_MFN) )
>                  __vmwrite(VIRT_EXCEPTION_INFO, mfn_x(mfn) << PAGE_SHIFT);
>              else
>                  v->arch.hvm_vmx.secondary_exec_control &=
> diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
> index e850502..868e909 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -281,7 +281,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>          start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
>                       ((va >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
>          gw->l1e = guest_l1e_from_gfn(start, flags);
> -        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
> +        gw->l2mfn = gw->l1mfn = INVALID_MFN;
>          goto set_ad;
>      }
>  
> @@ -356,7 +356,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>          start = _gfn((gfn_x(start) & ~GUEST_L2_GFN_MASK) +
>                       guest_l1_table_offset(va));
>          gw->l1e = guest_l1e_from_gfn(start, flags);
> -        gw->l1mfn = _mfn(INVALID_MFN);
> +        gw->l1mfn = INVALID_MFN;
>      } 
>      else 
>      {
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index 9c2cd49..3218fa2 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -430,7 +430,7 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
>   oom:
>      HAP_ERROR("out of memory building monitor pagetable\n");
>      domain_crash(d);
> -    return _mfn(INVALID_MFN);
> +    return INVALID_MFN;
>  }
>  
>  static void hap_destroy_monitor_table(struct vcpu* v, mfn_t mmfn)
> @@ -509,7 +509,7 @@ int hap_enable(struct domain *d, u32 mode)
>          }
>  
>          for ( i = 0; i < MAX_EPTP; i++ )
> -            d->arch.altp2m_eptp[i] = INVALID_MFN;
> +            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
>  
>          for ( i = 0; i < MAX_ALTP2M; i++ )
>          {
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 7166c71..6d03736 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -50,7 +50,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>                                    int level)
>  {
>      int rc;
> -    unsigned long oldmfn = INVALID_MFN;
> +    unsigned long oldmfn = mfn_x(INVALID_MFN);
>      bool_t check_foreign = (new.mfn != entryptr->mfn ||
>                              new.sa_p2mt != entryptr->sa_p2mt);
>  
> @@ -91,7 +91,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>  
>      write_atomic(&entryptr->epte, new.epte);
>  
> -    if ( unlikely(oldmfn != INVALID_MFN) )
> +    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
>          put_page(mfn_to_page(oldmfn));
>  
>      rc = 0;
> @@ -887,7 +887,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>      int i;
>      int ret = 0;
>      bool_t recalc = 0;
> -    mfn_t mfn = _mfn(INVALID_MFN);
> +    mfn_t mfn = INVALID_MFN;
>      struct ept_data *ept = &p2m->ept;
>  
>      *t = p2m_mmio_dm;
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index b7ab169..f384589 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -559,7 +559,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>      {
>          /* All PoD: Mark the whole region invalid and tell caller
>           * we're done. */
> -        p2m_set_entry(p2m, gpfn, _mfn(INVALID_MFN), order, p2m_invalid,
> +        p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
>                        p2m->default_access);
>          p2m->pod.entry_count-=(1<<order);
>          BUG_ON(p2m->pod.entry_count < 0);
> @@ -602,7 +602,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>          n = 1UL << cur_order;
>          if ( t == p2m_populate_on_demand )
>          {
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
>                            p2m_invalid, p2m->default_access);
>              p2m->pod.entry_count -= n;
>              BUG_ON(p2m->pod.entry_count < 0);
> @@ -624,7 +624,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>  
>              page = mfn_to_page(mfn);
>  
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
>                            p2m_invalid, p2m->default_access);
>              p2m_tlb_flush_sync(p2m);
>              for ( j = 0; j < n; ++j )
> @@ -671,7 +671,7 @@ void p2m_pod_dump_data(struct domain *d)
>  static int
>  p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>  {
> -    mfn_t mfn, mfn0 = _mfn(INVALID_MFN);
> +    mfn_t mfn, mfn0 = INVALID_MFN;
>      p2m_type_t type, type0 = 0;
>      unsigned long * map = NULL;
>      int ret=0, reset = 0;
> @@ -754,7 +754,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>      }
>  
>      /* Try to remove the page, restoring old mapping if it fails. */
> -    p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_2M,
> +    p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_2M,
>                    p2m_populate_on_demand, p2m->default_access);
>      p2m_tlb_flush_sync(p2m);
>  
> @@ -871,7 +871,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>          }
>  
>          /* Try to remove the page, restoring old mapping if it fails. */
> -        p2m_set_entry(p2m, gfns[i], _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +        p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
>                        p2m_populate_on_demand, p2m->default_access);
>  
>          /* See if the page was successfully unmapped.  (Allow one refcount
> @@ -1073,7 +1073,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>           * NOTE: In a fine-grained p2m locking scenario this operation
>           * may need to promote its locking from gfn->1g superpage
>           */
> -        p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
> +        p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
>                        p2m_populate_on_demand, p2m->default_access);
>          return 0;
>      }
> @@ -1157,7 +1157,7 @@ remap_and_retry:
>       * need promoting the gfn lock from gfn->2M superpage */
>      gfn_aligned = (gfn>>order)<<order;
>      for(i=0; i<(1<<order); i++)
> -        p2m_set_entry(p2m, gfn_aligned + i, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +        p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
>                        p2m_populate_on_demand, p2m->default_access);
>      if ( tb_init_done )
>      {
> @@ -1215,7 +1215,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
>      }
>  
>      /* Now, actually do the two-way mapping */
> -    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order,
> +    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
>                         p2m_populate_on_demand, p2m->default_access);
>      if ( rc == 0 )
>      {
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 4980934..2b6e89e 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -511,7 +511,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>       * the intermediate one might be).
>       */
>      unsigned int flags, iommu_old_flags = 0;
> -    unsigned long old_mfn = INVALID_MFN;
> +    unsigned long old_mfn = mfn_x(INVALID_MFN);
>  
>      ASSERT(sve != 0);
>  
> @@ -764,7 +764,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
>                       p2m->max_mapped_pfn )
>                      break;
>          }
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>      }
>  
>      mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
> @@ -777,7 +777,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
>          if ( (l4e_get_flags(*l4e) & _PAGE_PRESENT) == 0 )
>          {
>              unmap_domain_page(l4e);
> -            return _mfn(INVALID_MFN);
> +            return INVALID_MFN;
>          }
>          mfn = _mfn(l4e_get_pfn(*l4e));
>          recalc = needs_recalc(l4, *l4e);
> @@ -805,7 +805,7 @@ pod_retry_l3:
>                      *t = p2m_populate_on_demand;
>              }
>              unmap_domain_page(l3e);
> -            return _mfn(INVALID_MFN);
> +            return INVALID_MFN;
>          }
>          if ( flags & _PAGE_PSE )
>          {
> @@ -817,7 +817,7 @@ pod_retry_l3:
>              unmap_domain_page(l3e);
>  
>              ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
> -            return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
> +            return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
>          }
>  
>          mfn = _mfn(l3e_get_pfn(*l3e));
> @@ -846,7 +846,7 @@ pod_retry_l2:
>          }
>      
>          unmap_domain_page(l2e);
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>      }
>      if ( flags & _PAGE_PSE )
>      {
> @@ -856,7 +856,7 @@ pod_retry_l2:
>          unmap_domain_page(l2e);
>          
>          ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
> -        return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
> +        return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
>      }
>  
>      mfn = _mfn(l2e_get_pfn(*l2e));
> @@ -885,14 +885,14 @@ pod_retry_l1:
>          }
>      
>          unmap_domain_page(l1e);
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>      }
>      mfn = _mfn(l1e_get_pfn(*l1e));
>      *t = recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
>      unmap_domain_page(l1e);
>  
>      ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
> -    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : _mfn(INVALID_MFN);
> +    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : INVALID_MFN;
>  }
>  
>  static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 6258a5b..6f90510 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -388,7 +388,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
>      if (unlikely((p2m_is_broken(*t))))
>      {
>          /* Return invalid_mfn to avoid caller's access */
> -        mfn = _mfn(INVALID_MFN);
> +        mfn = INVALID_MFN;
>          if ( q & P2M_ALLOC )
>              domain_crash(p2m->domain);
>      }
> @@ -493,7 +493,7 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>              rc = set_rc;
>  
>          gfn += 1ul << order;
> -        if ( mfn_x(mfn) != INVALID_MFN )
> +        if ( !mfn_eq(mfn, INVALID_MFN) )
>              mfn = _mfn(mfn_x(mfn) + (1ul << order));
>          todo -= 1ul << order;
>      }
> @@ -580,7 +580,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
>  
>      /* Initialise physmap tables for slot zero. Other code assumes this. */
>      p2m->defer_nested_flush = 1;
> -    rc = p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +    rc = p2m_set_entry(p2m, 0, INVALID_MFN, PAGE_ORDER_4K,
>                         p2m_invalid, p2m->default_access);
>      p2m->defer_nested_flush = 0;
>      p2m_unlock(p2m);
> @@ -670,7 +670,7 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
>              ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
>          }
>      }
> -    return p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid,
> +    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
>                           p2m->default_access);
>  }
>  
> @@ -840,7 +840,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
>      {
>          gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
>                   gfn_x(gfn), mfn_x(mfn));
> -        rc = p2m_set_entry(p2m, gfn_x(gfn), _mfn(INVALID_MFN), page_order,
> +        rc = p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, page_order,
>                             p2m_invalid, p2m->default_access);
>          if ( rc == 0 )
>          {
> @@ -1107,7 +1107,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
>      }
>  
>      /* Do not use mfn_valid() here as it will usually fail for MMIO pages. */
> -    if ( (INVALID_MFN == mfn_x(actual_mfn)) || (t != p2m_mmio_direct) )
> +    if ( (mfn_eq(actual_mfn, INVALID_MFN)) || (t != p2m_mmio_direct) )
>      {
>          gdprintk(XENLOG_ERR,
>                   "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t);
> @@ -1117,7 +1117,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
>          gdprintk(XENLOG_WARNING,
>                   "no mapping between mfn %08lx and gfn %08lx\n",
>                   mfn_x(mfn), gfn);
> -    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order, p2m_invalid,
> +    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid,
>                         p2m->default_access);
>  
>   out:
> @@ -1146,7 +1146,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
>      mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
>      if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
>      {
> -        ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +        ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
>                              p2m_invalid, p2m->default_access);
>          gfn_unlock(p2m, gfn, 0);
>      }
> @@ -1316,7 +1316,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
>          put_page(page);
>  
>      /* Remove mapping from p2m table */
> -    ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +    ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
>                          p2m_ram_paged, a);
>  
>      /* Clear content before returning the page to Xen */
> @@ -1844,7 +1844,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>      if ( altp2m_idx )
>      {
>          if ( altp2m_idx >= MAX_ALTP2M ||
> -             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
> +             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
>              return -EINVAL;
>  
>          ap2m = d->arch.altp2m_p2m[altp2m_idx];
> @@ -1942,7 +1942,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
>      mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
>      gfn_unlock(p2m, gfn, 0);
>  
> -    if ( mfn_x(mfn) == INVALID_MFN )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>          return -ESRCH;
>      
>      if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
> @@ -2288,7 +2288,7 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
>  
>      for ( i = 0; i < MAX_ALTP2M; i++ )
>      {
> -        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
> +        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
>              continue;
>  
>          p2m = d->arch.altp2m_p2m[i];
> @@ -2315,7 +2315,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
>  
>      altp2m_list_lock(d);
>  
> -    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
>      {
>          if ( idx != vcpu_altp2m(v).p2midx )
>          {
> @@ -2359,14 +2359,14 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
>                                0, &page_order);
>      __put_gfn(*ap2m, gfn_x(gfn));
>  
> -    if ( mfn_x(mfn) != INVALID_MFN )
> +    if ( !mfn_eq(mfn, INVALID_MFN) )
>          return 0;
>  
>      mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
>                                P2M_ALLOC | P2M_UNSHARE, &page_order);
>      __put_gfn(hp2m, gfn_x(gfn));
>  
> -    if ( mfn_x(mfn) == INVALID_MFN )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>          return 0;
>  
>      p2m_lock(*ap2m);
> @@ -2404,7 +2404,7 @@ void p2m_flush_altp2m(struct domain *d)
>          /* Uninit and reinit ept to force TLB shootdown */
>          ept_p2m_uninit(d->arch.altp2m_p2m[i]);
>          ept_p2m_init(d->arch.altp2m_p2m[i]);
> -        d->arch.altp2m_eptp[i] = INVALID_MFN;
> +        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
>      }
>  
>      altp2m_list_unlock(d);
> @@ -2431,7 +2431,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>  
>      altp2m_list_lock(d);
>  
> -    if ( d->arch.altp2m_eptp[idx] == INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
>      {
>          p2m_init_altp2m_helper(d, idx);
>          rc = 0;
> @@ -2450,7 +2450,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
>  
>      for ( i = 0; i < MAX_ALTP2M; i++ )
>      {
> -        if ( d->arch.altp2m_eptp[i] != INVALID_MFN )
> +        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
>              continue;
>  
>          p2m_init_altp2m_helper(d, i);
> @@ -2476,7 +2476,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>  
>      altp2m_list_lock(d);
>  
> -    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
>      {
>          p2m = d->arch.altp2m_p2m[idx];
>  
> @@ -2486,7 +2486,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>              /* Uninit and reinit ept to force TLB shootdown */
>              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>              ept_p2m_init(d->arch.altp2m_p2m[idx]);
> -            d->arch.altp2m_eptp[idx] = INVALID_MFN;
> +            d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>              rc = 0;
>          }
>      }
> @@ -2510,7 +2510,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
>  
>      altp2m_list_lock(d);
>  
> -    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
>      {
>          for_each_vcpu( d, v )
>              if ( idx != vcpu_altp2m(v).p2midx )
> @@ -2541,7 +2541,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>      unsigned int page_order;
>      int rc = -EINVAL;
>  
> -    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == INVALID_MFN )
> +    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
>          return rc;
>  
>      hp2m = p2m_get_hostp2m(d);
> @@ -2636,14 +2636,14 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
>  
>      for ( i = 0; i < MAX_ALTP2M; i++ )
>      {
> -        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
> +        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
>              continue;
>  
>          p2m = d->arch.altp2m_p2m[i];
>          m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
>  
>          /* Check for a dropped page that may impact this altp2m */
> -        if ( mfn_x(mfn) == INVALID_MFN &&
> +        if ( mfn_eq(mfn, INVALID_MFN) &&
>               gfn_x(gfn) >= p2m->min_remapped_gfn &&
>               gfn_x(gfn) <= p2m->max_remapped_gfn )
>          {
> @@ -2660,7 +2660,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
>                  for ( i = 0; i < MAX_ALTP2M; i++ )
>                  {
>                      if ( i == last_reset_idx ||
> -                         d->arch.altp2m_eptp[i] == INVALID_MFN )
> +                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
>                          continue;
>  
>                      p2m = d->arch.altp2m_p2m[i];
> @@ -2672,7 +2672,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
>                  goto out;
>              }
>          }
> -        else if ( mfn_x(m) != INVALID_MFN )
> +        else if ( !mfn_eq(m, INVALID_MFN) )
>              p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, p2mt, p2ma);
>  
>          __put_gfn(p2m, gfn_x(gfn));
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index 8219bb6..107fc8c 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -67,7 +67,7 @@ static mfn_t paging_new_log_dirty_page(struct domain *d)
>      if ( unlikely(page == NULL) )
>      {
>          d->arch.paging.log_dirty.failed_allocs++;
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>      }
>  
>      d->arch.paging.log_dirty.allocs++;
> @@ -95,7 +95,7 @@ static mfn_t paging_new_log_dirty_node(struct domain *d)
>          int i;
>          mfn_t *node = map_domain_page(mfn);
>          for ( i = 0; i < LOGDIRTY_NODE_ENTRIES; i++ )
> -            node[i] = _mfn(INVALID_MFN);
> +            node[i] = INVALID_MFN;
>          unmap_domain_page(node);
>      }
>      return mfn;
> @@ -167,7 +167,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
>  
>              unmap_domain_page(l2);
>              paging_free_log_dirty_page(d, l3[i3]);
> -            l3[i3] = _mfn(INVALID_MFN);
> +            l3[i3] = INVALID_MFN;
>  
>              if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
>              {
> @@ -182,7 +182,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
>          if ( rc )
>              break;
>          paging_free_log_dirty_page(d, l4[i4]);
> -        l4[i4] = _mfn(INVALID_MFN);
> +        l4[i4] = INVALID_MFN;
>  
>          if ( i4 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
>          {
> @@ -198,7 +198,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
>      if ( !rc )
>      {
>          paging_free_log_dirty_page(d, d->arch.paging.log_dirty.top);
> -        d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> +        d->arch.paging.log_dirty.top = INVALID_MFN;
>  
>          ASSERT(d->arch.paging.log_dirty.allocs == 0);
>          d->arch.paging.log_dirty.failed_allocs = 0;
> @@ -660,7 +660,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags)
>      /* This must be initialized separately from the rest of the
>       * log-dirty init code as that can be called more than once and we
>       * don't want to leak any active log-dirty bitmaps */
> -    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> +    d->arch.paging.log_dirty.top = INVALID_MFN;
>  
>      /*
>       * Shadow pagetables are the default, but we will use
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 226e32d..c51b370 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -88,10 +88,10 @@ void shadow_vcpu_init(struct vcpu *v)
>  
>      for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
>      {
> -        v->arch.paging.shadow.oos[i] = _mfn(INVALID_MFN);
> -        v->arch.paging.shadow.oos_snapshot[i] = _mfn(INVALID_MFN);
> +        v->arch.paging.shadow.oos[i] = INVALID_MFN;
> +        v->arch.paging.shadow.oos_snapshot[i] = INVALID_MFN;
>          for ( j = 0; j < SHADOW_OOS_FIXUPS; j++ )
> -            v->arch.paging.shadow.oos_fixup[i].smfn[j] = _mfn(INVALID_MFN);
> +            v->arch.paging.shadow.oos_fixup[i].smfn[j] = INVALID_MFN;
>      }
>  #endif
>  
> @@ -593,12 +593,12 @@ static inline int oos_fixup_flush_gmfn(struct vcpu *v, mfn_t gmfn,
>      int i;
>      for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
>      {
> -        if ( mfn_x(fixup->smfn[i]) != INVALID_MFN )
> +        if ( !mfn_eq(fixup->smfn[i], INVALID_MFN) )
>          {
>              sh_remove_write_access_from_sl1p(d, gmfn,
>                                               fixup->smfn[i],
>                                               fixup->off[i]);
> -            fixup->smfn[i] = _mfn(INVALID_MFN);
> +            fixup->smfn[i] = INVALID_MFN;
>          }
>      }
>  
> @@ -636,7 +636,7 @@ void oos_fixup_add(struct domain *d, mfn_t gmfn,
>  
>              next = oos_fixup[idx].next;
>  
> -            if ( mfn_x(oos_fixup[idx].smfn[next]) != INVALID_MFN )
> +            if ( !mfn_eq(oos_fixup[idx].smfn[next], INVALID_MFN) )
>              {
>                  TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_EVICT);
>  
> @@ -757,7 +757,7 @@ static void oos_hash_add(struct vcpu *v, mfn_t gmfn)
>      struct oos_fixup fixup = { .next = 0 };
>  
>      for (i = 0; i < SHADOW_OOS_FIXUPS; i++ )
> -        fixup.smfn[i] = _mfn(INVALID_MFN);
> +        fixup.smfn[i] = INVALID_MFN;
>  
>      idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
>      oidx = idx;
> @@ -807,7 +807,7 @@ static void oos_hash_remove(struct domain *d, mfn_t gmfn)
>              idx = (idx + 1) % SHADOW_OOS_PAGES;
>          if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
>          {
> -            oos[idx] = _mfn(INVALID_MFN);
> +            oos[idx] = INVALID_MFN;
>              return;
>          }
>      }
> @@ -838,7 +838,7 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
>  
>      SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
>      BUG();
> -    return _mfn(INVALID_MFN);
> +    return INVALID_MFN;
>  }
>  
>  /* Pull a single guest page back into sync */
> @@ -862,7 +862,7 @@ void sh_resync(struct domain *d, mfn_t gmfn)
>          if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
>          {
>              _sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
> -            oos[idx] = _mfn(INVALID_MFN);
> +            oos[idx] = INVALID_MFN;
>              return;
>          }
>      }
> @@ -914,7 +914,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
>          {
>              /* Write-protect and sync contents */
>              _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
> -            oos[idx] = _mfn(INVALID_MFN);
> +            oos[idx] = INVALID_MFN;
>          }
>  
>   resync_others:
> @@ -948,7 +948,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
>              {
>                  /* Write-protect and sync contents */
>                  _sh_resync(other, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
> -                oos[idx] = _mfn(INVALID_MFN);
> +                oos[idx] = INVALID_MFN;
>              }
>          }
>      }
> @@ -1784,7 +1784,7 @@ void *sh_emulate_map_dest(struct vcpu *v, unsigned long vaddr,
>      if ( likely(((vaddr + bytes - 1) & PAGE_MASK) == (vaddr & PAGE_MASK)) )
>      {
>          /* Whole write fits on a single page. */
> -        sh_ctxt->mfn[1] = _mfn(INVALID_MFN);
> +        sh_ctxt->mfn[1] = INVALID_MFN;
>          map = map_domain_page(sh_ctxt->mfn[0]) + (vaddr & ~PAGE_MASK);
>      }
>      else if ( !is_hvm_domain(d) )
> @@ -2086,7 +2086,7 @@ mfn_t shadow_hash_lookup(struct domain *d, unsigned long n, unsigned int t)
>      }
>  
>      perfc_incr(shadow_hash_lookup_miss);
> -    return _mfn(INVALID_MFN);
> +    return INVALID_MFN;
>  }
>  
>  void shadow_hash_insert(struct domain *d, unsigned long n, unsigned int t,
> @@ -2910,7 +2910,7 @@ void sh_reset_l3_up_pointers(struct vcpu *v)
>      };
>      static const unsigned int callback_mask = SHF_L3_64;
>  
> -    hash_vcpu_foreach(v, callback_mask, callbacks, _mfn(INVALID_MFN));
> +    hash_vcpu_foreach(v, callback_mask, callbacks, INVALID_MFN);
>  }
>  
>  
> @@ -2940,7 +2940,7 @@ static void sh_update_paging_modes(struct vcpu *v)
>  #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB) */
>  
>  #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
> -    if ( mfn_x(v->arch.paging.shadow.oos_snapshot[0]) == INVALID_MFN )
> +    if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
>      {
>          int i;
>          for(i = 0; i < SHADOW_OOS_PAGES; i++)
> @@ -3284,7 +3284,7 @@ void shadow_teardown(struct domain *d, int *preempted)
>                  if ( mfn_valid(oos_snapshot[i]) )
>                  {
>                      shadow_free(d, oos_snapshot[i]);
> -                    oos_snapshot[i] = _mfn(INVALID_MFN);
> +                    oos_snapshot[i] = INVALID_MFN;
>                  }
>          }
>  #endif /* OOS */
> @@ -3449,7 +3449,7 @@ static int shadow_one_bit_disable(struct domain *d, u32 mode)
>                      if ( mfn_valid(oos_snapshot[i]) )
>                      {
>                          shadow_free(d, oos_snapshot[i]);
> -                        oos_snapshot[i] = _mfn(INVALID_MFN);
> +                        oos_snapshot[i] = INVALID_MFN;
>                      }
>              }
>  #endif /* OOS */
> @@ -3744,7 +3744,7 @@ int shadow_track_dirty_vram(struct domain *d,
>          memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
>      else
>      {
> -        unsigned long map_mfn = INVALID_MFN;
> +        unsigned long map_mfn = mfn_x(INVALID_MFN);
>          void *map_sl1p = NULL;
>  
>          /* Iterate over VRAM to track dirty bits. */
> @@ -3754,7 +3754,7 @@ int shadow_track_dirty_vram(struct domain *d,
>              int dirty = 0;
>              paddr_t sl1ma = dirty_vram->sl1ma[i];
>  
> -            if (mfn_x(mfn) == INVALID_MFN)
> +            if ( mfn_eq(mfn, INVALID_MFN) )
>              {
>                  dirty = 1;
>              }
> @@ -3830,7 +3830,7 @@ int shadow_track_dirty_vram(struct domain *d,
>              for ( i = begin_pfn; i < end_pfn; i++ )
>              {
>                  mfn_t mfn = get_gfn_query_unlocked(d, i, &t);
> -                if ( mfn_x(mfn) != INVALID_MFN )
> +                if ( !mfn_eq(mfn, INVALID_MFN) )
>                      flush_tlb |= sh_remove_write_access(d, mfn, 1, 0);
>              }
>              dirty_vram->last_dirty = -1;
> @@ -3968,7 +3968,7 @@ void shadow_audit_tables(struct vcpu *v)
>          }
>      }
>  
> -    hash_vcpu_foreach(v, mask, callbacks, _mfn(INVALID_MFN));
> +    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN);
>  }
>  
>  #endif /* Shadow audit */
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index dfe59a2..f892e2f 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -177,7 +177,7 @@ sh_walk_guest_tables(struct vcpu *v, unsigned long va, walk_t *gw,
>  {
>      return guest_walk_tables(v, p2m_get_hostp2m(v->domain), va, gw, pfec,
>  #if GUEST_PAGING_LEVELS == 3 /* PAE */
> -                             _mfn(INVALID_MFN),
> +                             INVALID_MFN,
>                               v->arch.paging.shadow.gl3e
>  #else /* 32 or 64 */
>                               pagetable_get_mfn(v->arch.guest_table),
> @@ -336,32 +336,32 @@ static void sh_audit_gw(struct vcpu *v, walk_t *gw)
>      if ( mfn_valid(gw->l4mfn)
>           && mfn_valid((smfn = get_shadow_status(d, gw->l4mfn,
>                                                  SH_type_l4_shadow))) )
> -        (void) sh_audit_l4_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_l4_table(v, smfn, INVALID_MFN);
>      if ( mfn_valid(gw->l3mfn)
>           && mfn_valid((smfn = get_shadow_status(d, gw->l3mfn,
>                                                  SH_type_l3_shadow))) )
> -        (void) sh_audit_l3_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_l3_table(v, smfn, INVALID_MFN);
>  #endif /* PAE or 64... */
>      if ( mfn_valid(gw->l2mfn) )
>      {
>          if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
>                                                   SH_type_l2_shadow))) )
> -            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
> +            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
>  #if GUEST_PAGING_LEVELS == 3
>          if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
>                                                   SH_type_l2h_shadow))) )
> -            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
> +            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
>  #endif
>      }
>      if ( mfn_valid(gw->l1mfn)
>           && mfn_valid((smfn = get_shadow_status(d, gw->l1mfn,
>                                                  SH_type_l1_shadow))) )
> -        (void) sh_audit_l1_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_l1_table(v, smfn, INVALID_MFN);
>      else if ( (guest_l2e_get_flags(gw->l2e) & _PAGE_PRESENT)
>                && (guest_l2e_get_flags(gw->l2e) & _PAGE_PSE)
>                && mfn_valid(
>                (smfn = get_fl1_shadow_status(d, guest_l2e_get_gfn(gw->l2e)))) )
> -        (void) sh_audit_fl1_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_fl1_table(v, smfn, INVALID_MFN);
>  }
>  
>  #else
> @@ -1752,7 +1752,7 @@ static shadow_l2e_t * shadow_get_and_create_l2e(struct vcpu *v,
>  {
>  #if GUEST_PAGING_LEVELS >= 4 /* 64bit... */
>      struct domain *d = v->domain;
> -    mfn_t sl3mfn = _mfn(INVALID_MFN);
> +    mfn_t sl3mfn = INVALID_MFN;
>      shadow_l3e_t *sl3e;
>      if ( !mfn_valid(gw->l2mfn) ) return NULL; /* No guest page. */
>      /* Get the l3e */
> @@ -2158,7 +2158,7 @@ static int validate_gl4e(struct vcpu *v, void *new_ge, mfn_t sl4mfn, void *se)
>      shadow_l4e_t new_sl4e;
>      guest_l4e_t new_gl4e = *(guest_l4e_t *)new_ge;
>      shadow_l4e_t *sl4p = se;
> -    mfn_t sl3mfn = _mfn(INVALID_MFN);
> +    mfn_t sl3mfn = INVALID_MFN;
>      struct domain *d = v->domain;
>      p2m_type_t p2mt;
>      int result = 0;
> @@ -2217,7 +2217,7 @@ static int validate_gl3e(struct vcpu *v, void *new_ge, mfn_t sl3mfn, void *se)
>      shadow_l3e_t new_sl3e;
>      guest_l3e_t new_gl3e = *(guest_l3e_t *)new_ge;
>      shadow_l3e_t *sl3p = se;
> -    mfn_t sl2mfn = _mfn(INVALID_MFN);
> +    mfn_t sl2mfn = INVALID_MFN;
>      p2m_type_t p2mt;
>      int result = 0;
>  
> @@ -2250,7 +2250,7 @@ static int validate_gl2e(struct vcpu *v, void *new_ge, mfn_t sl2mfn, void *se)
>      shadow_l2e_t new_sl2e;
>      guest_l2e_t new_gl2e = *(guest_l2e_t *)new_ge;
>      shadow_l2e_t *sl2p = se;
> -    mfn_t sl1mfn = _mfn(INVALID_MFN);
> +    mfn_t sl1mfn = INVALID_MFN;
>      p2m_type_t p2mt;
>      int result = 0;
>  
> @@ -2608,7 +2608,7 @@ static inline void check_for_early_unshadow(struct vcpu *v, mfn_t gmfn)
>  static inline void reset_early_unshadow(struct vcpu *v)
>  {
>  #if SHADOW_OPTIMIZATIONS & SHOPT_EARLY_UNSHADOW
> -    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = INVALID_MFN;
> +    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = mfn_x(INVALID_MFN);
>  #endif
>  }
>  
> @@ -4105,10 +4105,10 @@ sh_update_cr3(struct vcpu *v, int do_locking)
>                                             ? SH_type_l2h_shadow
>                                             : SH_type_l2_shadow);
>                  else
> -                    sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
> +                    sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
>              }
>              else
> -                sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
> +                sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
>          }
>      }
>  #elif GUEST_PAGING_LEVELS == 4
> @@ -4531,7 +4531,7 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
>  
>          if ( fast_path ) {
>              if ( pagetable_is_null(v->arch.shadow_table[i]) )
> -                smfn = _mfn(INVALID_MFN);
> +                smfn = INVALID_MFN;
>              else
>                  smfn = _mfn(pagetable_get_pfn(v->arch.shadow_table[i]));
>          }
> @@ -4540,8 +4540,8 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
>              /* retrieving the l2s */
>              gmfn = get_gfn_query_unlocked(d, gfn_x(guest_l3e_get_gfn(gl3e[i])),
>                                            &p2mt);
> -            smfn = unlikely(mfn_x(gmfn) == INVALID_MFN)
> -                   ? _mfn(INVALID_MFN)
> +            smfn = unlikely(mfn_eq(gmfn, INVALID_MFN))
> +                   ? INVALID_MFN
>                     : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
>          }
>  
> @@ -4846,7 +4846,7 @@ int sh_audit_fl1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
>  {
>      guest_l1e_t *gl1e, e;
>      shadow_l1e_t *sl1e;
> -    mfn_t gl1mfn = _mfn(INVALID_MFN);
> +    mfn_t gl1mfn = INVALID_MFN;
>      int f;
>      int done = 0;
>  
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 45273d4..42c07ee 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -117,7 +117,7 @@ static void vcpu_info_reset(struct vcpu *v)
>      v->vcpu_info = ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
>                      ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
>                      : &dummy_vcpu_info);
> -    v->vcpu_info_mfn = INVALID_MFN;
> +    v->vcpu_info_mfn = mfn_x(INVALID_MFN);
>  }
>  
>  struct vcpu *alloc_vcpu(
> @@ -1141,7 +1141,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
>      if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
>          return -EINVAL;
>  
> -    if ( v->vcpu_info_mfn != INVALID_MFN )
> +    if ( v->vcpu_info_mfn != mfn_x(INVALID_MFN) )
>          return -EINVAL;
>  
>      /* Run this command on yourself or on other offline VCPUS. */
> @@ -1205,7 +1205,7 @@ void unmap_vcpu_info(struct vcpu *v)
>  {
>      unsigned long mfn;
>  
> -    if ( v->vcpu_info_mfn == INVALID_MFN )
> +    if ( v->vcpu_info_mfn == mfn_x(INVALID_MFN) )
>          return;
>  
>      mfn = v->vcpu_info_mfn;
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 3f15543..ecace07 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -244,7 +244,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
>                                (readonly) ? P2M_ALLOC : P2M_UNSHARE);
>      if ( !(*page) )
>      {
> -        *frame = INVALID_MFN;
> +        *frame = mfn_x(INVALID_MFN);
>          if ( p2m_is_shared(p2mt) )
>              return GNTST_eagain;
>          if ( p2m_is_paging(p2mt) )
> @@ -260,7 +260,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
>      *page = mfn_valid(*frame) ? mfn_to_page(*frame) : NULL;
>      if ( (!(*page)) || (!get_page(*page, rd)) )
>      {
> -        *frame = INVALID_MFN;
> +        *frame = mfn_x(INVALID_MFN);
>          *page = NULL;
>          rc = GNTST_bad_page;
>      }
> @@ -1785,7 +1785,7 @@ gnttab_transfer(
>              p2m_type_t __p2mt;
>              mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &__p2mt));
>              if ( p2m_is_shared(__p2mt) || !p2m_is_valid(__p2mt) )
> -                mfn = INVALID_MFN;
> +                mfn = mfn_x(INVALID_MFN);
>          }
>  #else
>          mfn = mfn_x(gfn_to_mfn(d, _gfn(gop.mfn)));
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index afbb1a1..7f207ec 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -55,7 +55,7 @@
>  
>  TYPE_SAFE(unsigned long, mfn);
>  #define PRI_mfn          "05lx"
> -#define INVALID_MFN      (~0UL)
> +#define INVALID_MFN      _mfn(~0UL)
>  
>  #ifndef mfn_t
>  #define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
> -- 
> 1.9.1
> 

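For reference, the pattern every hunk above leans on, in minimal sketch form
(simplified from xen/include/xen/typesafe.h and xen/include/xen/mm.h --
illustrative only, not the exact tree contents; release builds may flatten
the struct back to a plain integer, per the "Grep fodder" comment in mm.h):

    /* Wrapping the raw value in a one-member struct turns an accidental
     * mix-up of gfn/mfn/plain unsigned long into a compile-time error. */
    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long n) { return (mfn_t){ n }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

    /* After this patch the constant itself is typesafe... */
    #define INVALID_MFN _mfn(~0UL)

    /* ...so '==' no longer compiles against an mfn_t; compare with
     * the helper instead (bool_t in the tree, int here for brevity): */
    static inline int mfn_eq(mfn_t x, mfn_t y)
    {
        return mfn_x(x) == mfn_x(y);
    }

This is why the hunks mechanically turn "mfn_x(mfn) == INVALID_MFN" into
"mfn_eq(mfn, INVALID_MFN)", while call sites still holding a raw unsigned
long (e.g. d->arch.altp2m_eptp[]) compare against mfn_x(INVALID_MFN).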
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN
  2016-06-28 16:17 ` [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN Julien Grall
  2016-06-28 17:12   ` Andrew Cooper
@ 2016-07-06 10:28   ` Stefano Stabellini
  1 sibling, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:28 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, sstabellini, Feng Wu, Suravee Suthikulpanit,
	Jun Nakajima, Andrew Cooper, Tim Deegan, xen-devel,
	George Dunlap, Paul Durrant, Jan Beulich, Mukesh Rathor,
	Boris Ostrovsky

On Tue, 28 Jun 2016, Julien Grall wrote:
> Also take the opportunity to convert arch/x86/debug.c to the typesafe gfn.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

For the ARM bits

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
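
The hunks below fall into two shapes, depending on whether the call site has
already been converted to gfn_t. A minimal sketch of both (illustrative; the
gfn_t helpers mirror the mfn_t ones, and example() is a made-up name for a
hypothetical caller):

    typedef struct { unsigned long gfn; } gfn_t;

    static inline gfn_t _gfn(unsigned long n) { return (gfn_t){ n }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }
    static inline int gfn_eq(gfn_t x, gfn_t y)  /* bool_t in the tree */
    {
        return gfn_x(x) == gfn_x(y);
    }

    #define INVALID_GFN _gfn(~0UL)

    static int example(gfn_t gfn, unsigned long raw_gfn)
    {
        /* Call sites already converted to gfn_t compare typesafely: */
        if ( gfn_eq(gfn, INVALID_GFN) )
            return -1;

        /* Interfaces still trafficking in raw unsigned longs (e.g. the
         * return value of paging_gva_to_gfn()) instead unwrap the
         * constant at the point of comparison: */
        if ( raw_gfn == gfn_x(INVALID_GFN) )
            return -1;

        return 0;
    }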


> ---
> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Feng Wu <feng.wu@intel.com>
> 
>     Changes in v5:
>         - Patch added
> ---
>  xen/arch/arm/p2m.c                      |  4 ++--
>  xen/arch/x86/debug.c                    | 18 +++++++++---------
>  xen/arch/x86/domain.c                   |  2 +-
>  xen/arch/x86/hvm/emulate.c              |  7 ++++---
>  xen/arch/x86/hvm/hvm.c                  |  6 +++---
>  xen/arch/x86/hvm/ioreq.c                |  8 ++++----
>  xen/arch/x86/hvm/svm/nestedsvm.c        |  2 +-
>  xen/arch/x86/hvm/vmx/vmx.c              |  6 +++---
>  xen/arch/x86/mm/altp2m.c                |  2 +-
>  xen/arch/x86/mm/hap/guest_walk.c        | 10 +++++-----
>  xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
>  xen/arch/x86/mm/p2m-pod.c               |  6 +++---
>  xen/arch/x86/mm/p2m.c                   | 18 +++++++++---------
>  xen/arch/x86/mm/shadow/common.c         |  2 +-
>  xen/arch/x86/mm/shadow/multi.c          |  2 +-
>  xen/arch/x86/mm/shadow/private.h        |  2 +-
>  xen/drivers/passthrough/amd/iommu_map.c |  2 +-
>  xen/drivers/passthrough/vtd/iommu.c     |  4 ++--
>  xen/drivers/passthrough/x86/iommu.c     |  2 +-
>  xen/include/asm-x86/guest_pt.h          |  4 ++--
>  xen/include/asm-x86/p2m.h               |  2 +-
>  xen/include/xen/mm.h                    |  2 +-
>  22 files changed, 57 insertions(+), 56 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d690602..c938dde 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -479,7 +479,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>      }
>  
>      /* If request to get default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>      {
>          *access = memaccess[p2m->default_access];
>          return 0;
> @@ -1879,7 +1879,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>      p2m->mem_access_enabled = true;
>  
>      /* If request to set default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>      {
>          p2m->default_access = a;
>          return 0;
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 3479f7c..1ce0e89 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -44,8 +44,7 @@ typedef unsigned char dbgbyte_t;
>  
>  /* Returns: mfn for the given (hvm guest) vaddr */
>  static mfn_t
> -dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> -                unsigned long *gfn)
> +dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr, gfn_t *gfn)
>  {
>      mfn_t mfn;
>      uint32_t pfec = PFEC_page_present;
> @@ -53,14 +52,14 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>  
>      DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
>  
> -    *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
> -    if ( *gfn == INVALID_GFN )
> +    *gfn = _gfn(paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec));
> +    if ( gfn_eq(*gfn, INVALID_GFN) )
>      {
>          DBGP2("kdb:bad gfn from gva_to_gfn\n");
>          return INVALID_MFN;
>      }
>  
> -    mfn = get_gfn(dp, *gfn, &gfntype);
> +    mfn = get_gfn(dp, gfn_x(*gfn), &gfntype);
>      if ( p2m_is_readonly(gfntype) && toaddr )
>      {
>          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> @@ -72,7 +71,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>  
>      if ( mfn_eq(mfn, INVALID_MFN) )
>      {
> -        put_gfn(dp, *gfn);
> +        put_gfn(dp, gfn_x(*gfn));
>          *gfn = INVALID_GFN;
>      }
>  
> @@ -165,7 +164,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
>          char *va;
>          unsigned long addr = (unsigned long)gaddr;
>          mfn_t mfn;
> -        unsigned long gfn = INVALID_GFN, pagecnt;
> +        gfn_t gfn = INVALID_GFN;
> +        unsigned long pagecnt;
>  
>          pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
>  
> @@ -189,8 +189,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
>          }
>  
>          unmap_domain_page(va);
> -        if ( gfn != INVALID_GFN )
> -            put_gfn(dp, gfn);
> +        if ( !gfn_eq(gfn, INVALID_GFN) )
> +            put_gfn(dp, gfn_x(gfn));
>  
>          addr += pagecnt;
>          buf += pagecnt;
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index bb59247..c8c7e2d 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -783,7 +783,7 @@ int arch_domain_soft_reset(struct domain *d)
>       * gfn == INVALID_GFN indicates that the shared_info page was never mapped
>       * to the domain's address space and there is nothing to replace.
>       */
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>          goto exit_put_page;
>  
>      if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 855af4d..c55ad7b 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -455,7 +455,7 @@ static int hvmemul_linear_to_phys(
>              return rc;
>          pfn = _paddr >> PAGE_SHIFT;
>      }
> -    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == INVALID_GFN )
> +    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
>      {
>          if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
>              return X86EMUL_RETRY;
> @@ -472,7 +472,8 @@ static int hvmemul_linear_to_phys(
>          npfn = paging_gva_to_gfn(curr, addr, &pfec);
>  
>          /* Is it contiguous with the preceding PFNs? If not then we're done. */
> -        if ( (npfn == INVALID_GFN) || (npfn != (pfn + (reverse ? -i : i))) )
> +        if ( (npfn == gfn_x(INVALID_GFN)) ||
> +             (npfn != (pfn + (reverse ? -i : i))) )
>          {
>              if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
>                  return X86EMUL_RETRY;
> @@ -480,7 +481,7 @@ static int hvmemul_linear_to_phys(
>              if ( done == 0 )
>              {
>                  ASSERT(!reverse);
> -                if ( npfn != INVALID_GFN )
> +                if ( npfn != gfn_x(INVALID_GFN) )
>                      return X86EMUL_UNHANDLEABLE;
>                  hvm_inject_page_fault(pfec, addr & PAGE_MASK);
>                  return X86EMUL_EXCEPTION;
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index a929e3b..b9ffce2 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3039,7 +3039,7 @@ static enum hvm_copy_result __hvm_copy(
>          if ( flags & HVMCOPY_virt )
>          {
>              gfn = paging_gva_to_gfn(curr, addr, &pfec);
> -            if ( gfn == INVALID_GFN )
> +            if ( gfn == gfn_x(INVALID_GFN) )
>              {
>                  if ( pfec & PFEC_page_paged )
>                      return HVMCOPY_gfn_paged_out;
> @@ -3154,7 +3154,7 @@ static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
>          count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
>  
>          gfn = paging_gva_to_gfn(curr, addr, &pfec);
> -        if ( gfn == INVALID_GFN )
> +        if ( gfn == gfn_x(INVALID_GFN) )
>          {
>              if ( pfec & PFEC_page_paged )
>                  return HVMCOPY_gfn_paged_out;
> @@ -5298,7 +5298,7 @@ static int do_altp2m_op(
>               a.u.enable_notify.vcpu_id != curr->vcpu_id )
>              rc = -EINVAL;
>  
> -        if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> +        if ( (!gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN)) ||
>               (mfn_eq(get_gfn_query_unlocked(curr->domain,
>                       a.u.enable_notify.gfn, &p2mt), INVALID_MFN)) )
>              return -EINVAL;
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 7148ac4..d2245e2 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -204,7 +204,7 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
>  {
>      unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>  
> -    if ( gmfn != INVALID_GFN )
> +    if ( gmfn != gfn_x(INVALID_GFN) )
>          set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
>  }
>  
> @@ -420,7 +420,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
>      if ( rc )
>          return rc;
>  
> -    if ( bufioreq_pfn != INVALID_GFN )
> +    if ( bufioreq_pfn != gfn_x(INVALID_GFN) )
>          rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
>  
>      if ( rc )
> @@ -434,8 +434,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
>                                          bool_t handle_bufioreq)
>  {
>      struct domain *d = s->domain;
> -    unsigned long ioreq_pfn = INVALID_GFN;
> -    unsigned long bufioreq_pfn = INVALID_GFN;
> +    unsigned long ioreq_pfn = gfn_x(INVALID_GFN);
> +    unsigned long bufioreq_pfn = gfn_x(INVALID_GFN);
>      int rc;
>  
>      if ( is_default )
> diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
> index 9d2ac09..f9b38ab 100644
> --- a/xen/arch/x86/hvm/svm/nestedsvm.c
> +++ b/xen/arch/x86/hvm/svm/nestedsvm.c
> @@ -1200,7 +1200,7 @@ nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
>      /* Walk the guest-supplied NPT table, just as if it were a pagetable */
>      gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
>  
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>          return NESTEDHVM_PAGEFAULT_INJECT;
>  
>      *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 888a6b1..b203a04 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2057,13 +2057,13 @@ static int vmx_vcpu_emulate_vmfunc(struct cpu_user_regs *regs)
>  static bool_t vmx_vcpu_emulate_ve(struct vcpu *v)
>  {
>      bool_t rc = 0, writable;
> -    unsigned long gfn = gfn_x(vcpu_altp2m(v).veinfo_gfn);
> +    gfn_t gfn = vcpu_altp2m(v).veinfo_gfn;
>      ve_info_t *veinfo;
>  
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>          return 0;
>  
> -    veinfo = hvm_map_guest_frame_rw(gfn, 0, &writable);
> +    veinfo = hvm_map_guest_frame_rw(gfn_x(gfn), 0, &writable);
>      if ( !veinfo )
>          return 0;
>      if ( !writable || veinfo->semaphore != 0 )
> diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
> index 10605c8..930bdc2 100644
> --- a/xen/arch/x86/mm/altp2m.c
> +++ b/xen/arch/x86/mm/altp2m.c
> @@ -26,7 +26,7 @@ altp2m_vcpu_reset(struct vcpu *v)
>      struct altp2mvcpu *av = &vcpu_altp2m(v);
>  
>      av->p2midx = INVALID_ALTP2M;
> -    av->veinfo_gfn = _gfn(INVALID_GFN);
> +    av->veinfo_gfn = INVALID_GFN;
>  }
>  
>  void
> diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
> index d2716f9..1b1a15d 100644
> --- a/xen/arch/x86/mm/hap/guest_walk.c
> +++ b/xen/arch/x86/mm/hap/guest_walk.c
> @@ -70,14 +70,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
>          if ( top_page )
>              put_page(top_page);
>          p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
> -        return INVALID_GFN;
> +        return gfn_x(INVALID_GFN);
>      }
>      if ( p2m_is_shared(p2mt) )
>      {
>          pfec[0] = PFEC_page_shared;
>          if ( top_page )
>              put_page(top_page);
> -        return INVALID_GFN;
> +        return gfn_x(INVALID_GFN);
>      }
>      if ( !top_page )
>      {
> @@ -110,12 +110,12 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
>              ASSERT(p2m_is_hostp2m(p2m));
>              pfec[0] = PFEC_page_paged;
>              p2m_mem_paging_populate(p2m->domain, gfn_x(gfn));
> -            return INVALID_GFN;
> +            return gfn_x(INVALID_GFN);
>          }
>          if ( p2m_is_shared(p2mt) )
>          {
>              pfec[0] = PFEC_page_shared;
> -            return INVALID_GFN;
> +            return gfn_x(INVALID_GFN);
>          }
>  
>          if ( page_order )
> @@ -147,7 +147,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
>      if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
>          pfec[0] &= ~PFEC_insn_fetch;
>  
> -    return INVALID_GFN;
> +    return gfn_x(INVALID_GFN);
>  }
>  
>  
> diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
> index 94cf832..02b27b1 100644
> --- a/xen/arch/x86/mm/hap/nested_ept.c
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> @@ -236,7 +236,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
>      ept_walk_t gw;
>      rwx_acc &= EPTE_RWX_MASK;
>  
> -    *l1gfn = INVALID_GFN;
> +    *l1gfn = gfn_x(INVALID_GFN);
>  
>      rc = nept_walk_tables(v, l2ga, &gw);
>      switch ( rc )
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index f384589..149f529 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1003,7 +1003,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
>          unsigned int idx = (mrp->idx + i++) % ARRAY_SIZE(mrp->list);
>          unsigned long gfn = mrp->list[idx];
>  
> -        if ( gfn != INVALID_GFN )
> +        if ( gfn != gfn_x(INVALID_GFN) )
>          {
>              if ( gfn & POD_LAST_SUPERPAGE )
>              {
> @@ -1020,7 +1020,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
>              else
>                  p2m_pod_zero_check(p2m, &gfn, 1);
>  
> -            mrp->list[idx] = INVALID_GFN;
> +            mrp->list[idx] = gfn_x(INVALID_GFN);
>          }
>  
>      } while ( (p2m->pod.count == 0) && (i < ARRAY_SIZE(mrp->list)) );
> @@ -1031,7 +1031,7 @@ static void pod_eager_record(struct p2m_domain *p2m,
>  {
>      struct pod_mrp_list *mrp = &p2m->pod.mrp;
>  
> -    ASSERT(gfn != INVALID_GFN);
> +    ASSERT(gfn != gfn_x(INVALID_GFN));
>  
>      mrp->list[mrp->idx++] =
>          gfn | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 6f90510..90d3baa 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -76,7 +76,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
>      p2m->np2m_base = P2M_BASE_EADDR;
>  
>      for ( i = 0; i < ARRAY_SIZE(p2m->pod.mrp.list); ++i )
> -        p2m->pod.mrp.list[i] = INVALID_GFN;
> +        p2m->pod.mrp.list[i] = gfn_x(INVALID_GFN);
>  
>      if ( hap_enabled(d) && cpu_has_vmx )
>          ret = ept_p2m_init(p2m);
> @@ -1863,7 +1863,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>      }
>  
>      /* If request to set default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>      {
>          p2m->default_access = a;
>          return 0;
> @@ -1932,7 +1932,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
>      };
>  
>      /* If request to get default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>      {
>          *access = memaccess[p2m->default_access];
>          return 0;
> @@ -2113,8 +2113,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
>          mode = paging_get_nestedmode(v);
>          l2_gfn = mode->gva_to_gfn(v, p2m, va, pfec);
>  
> -        if ( l2_gfn == INVALID_GFN )
> -            return INVALID_GFN;
> +        if ( l2_gfn == gfn_x(INVALID_GFN) )
> +            return gfn_x(INVALID_GFN);
>  
>          /* translate l2 guest gfn into l1 guest gfn */
>          rv = nestedhap_walk_L1_p2m(v, l2_gfn, &l1_gfn, &l1_page_order, &l1_p2ma,
> @@ -2123,7 +2123,7 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
>                                     !!(*pfec & PFEC_insn_fetch));
>  
>          if ( rv != NESTEDHVM_PAGEFAULT_DONE )
> -            return INVALID_GFN;
> +            return gfn_x(INVALID_GFN);
>  
>          /*
>           * Sanity check that l1_gfn can be used properly as a 4K mapping, even
> @@ -2415,7 +2415,7 @@ static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>      struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>      struct ept_data *ept;
>  
> -    p2m->min_remapped_gfn = INVALID_GFN;
> +    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
>      p2m->max_remapped_gfn = 0;
>      ept = &p2m->ept;
>      ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> @@ -2551,7 +2551,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>  
>      mfn = ap2m->get_entry(ap2m, gfn_x(old_gfn), &t, &a, 0, NULL, NULL);
>  
> -    if ( gfn_x(new_gfn) == INVALID_GFN )
> +    if ( gfn_eq(new_gfn, INVALID_GFN) )
>      {
>          if ( mfn_valid(mfn) )
>              p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
> @@ -2613,7 +2613,7 @@ static void p2m_reset_altp2m(struct p2m_domain *p2m)
>      /* Uninit and reinit ept to force TLB shootdown */
>      ept_p2m_uninit(p2m);
>      ept_p2m_init(p2m);
> -    p2m->min_remapped_gfn = INVALID_GFN;
> +    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
>      p2m->max_remapped_gfn = 0;
>  }
>  
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index c51b370..e48ba7b 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -1708,7 +1708,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
>  
>      /* Translate the VA to a GFN. */
>      gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>      {
>          if ( is_hvm_vcpu(v) )
>              hvm_inject_page_fault(pfec, vaddr);
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index f892e2f..e54c8b7 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -3660,7 +3660,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
>           */
>          if ( is_hvm_vcpu(v) && !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
>              pfec[0] &= ~PFEC_insn_fetch;
> -        return INVALID_GFN;
> +        return gfn_x(INVALID_GFN);
>      }
>      gfn = guest_walk_to_gfn(&gw);
>  
> diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
> index c424ad6..824796f 100644
> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -796,7 +796,7 @@ static inline unsigned long vtlb_lookup(struct vcpu *v,
>                                          unsigned long va, uint32_t pfec)
>  {
>      unsigned long page_number = va >> PAGE_SHIFT;
> -    unsigned long frame_number = INVALID_GFN;
> +    unsigned long frame_number = gfn_x(INVALID_GFN);
>      int i = vtlb_hash(page_number);
>  
>      spin_lock(&v->arch.paging.vtlb_lock);
> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> index c758459..b8c0a48 100644
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
>      unsigned long old_root_mfn;
>      struct domain_iommu *hd = dom_iommu(d);
>  
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>          return -EADDRNOTAVAIL;
>      ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
>  
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index f010612..c322b9f 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -611,7 +611,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
>          if ( iommu_domid == -1 )
>              continue;
>  
> -        if ( page_count != 1 || gfn == INVALID_GFN )
> +        if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
>              rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
>                                         0, flush_dev_iotlb);
>          else
> @@ -640,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
>  
>  static int __must_check iommu_flush_iotlb_all(struct domain *d)
>  {
> -    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
> +    return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
>  }
>  
>  /* clear one page's page table */
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index cd435d7..69cd6c5 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
>              unsigned long mfn = page_to_mfn(page);
>              unsigned long gfn = mfn_to_gmfn(d, mfn);
>  
> -            if ( gfn != INVALID_GFN )
> +            if ( gfn != gfn_x(INVALID_GFN) )
>              {
>                  ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
>                  BUG_ON(SHARED_M2P(gfn));
> diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
> index a8d980c..79ed4ff 100644
> --- a/xen/include/asm-x86/guest_pt.h
> +++ b/xen/include/asm-x86/guest_pt.h
> @@ -32,7 +32,7 @@
>  #error GUEST_PAGING_LEVELS not defined
>  #endif
>  
> -#define VALID_GFN(m) (m != INVALID_GFN)
> +#define VALID_GFN(m) (m != gfn_x(INVALID_GFN))
>  
>  static inline int
>  valid_gfn(gfn_t m)
> @@ -251,7 +251,7 @@ static inline gfn_t
>  guest_walk_to_gfn(walk_t *gw)
>  {
>      if ( !(guest_l1e_get_flags(gw->l1e) & _PAGE_PRESENT) )
> -        return _gfn(INVALID_GFN);
> +        return INVALID_GFN;
>      return guest_l1e_get_gfn(gw->l1e);
>  }
>  
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 4ab3574..194020e 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -324,7 +324,7 @@ struct p2m_domain {
>  #define NR_POD_MRP_ENTRIES 32
>  
>  /* Encode ORDER_2M superpage in top bit of GFN */
> -#define POD_LAST_SUPERPAGE (INVALID_GFN & ~(INVALID_GFN >> 1))
> +#define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))
>  
>              unsigned long list[NR_POD_MRP_ENTRIES];
>              unsigned int idx;
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 7f207ec..58bc0b8 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -84,7 +84,7 @@ static inline bool_t mfn_eq(mfn_t x, mfn_t y)
>  
>  TYPE_SAFE(unsigned long, gfn);
>  #define PRI_gfn          "05lx"
> -#define INVALID_GFN      (~0UL)
> +#define INVALID_GFN      _gfn(~0UL)
>  
>  #ifndef gfn_t
>  #define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
> -- 
> 1.9.1
> 

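For readers tracking the mechanical churn in the hunks above: the typesafe
wrappers behave roughly as sketched below. This is a minimal sketch assuming
the debug-build variant of the TYPE_SAFE macro (in release builds the
wrappers compile down to plain integers, so there is no runtime cost):

    /* Sketch only: the frame number is wrapped in a single-member struct,
     * so a gfn can no longer be passed where an mfn is expected without
     * the compiler objecting. */
    typedef struct { unsigned long gfn; } gfn_t;

    static inline gfn_t _gfn(unsigned long n) { return (gfn_t){ n }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }

    static inline bool_t gfn_eq(gfn_t x, gfn_t y)
    {
        return gfn_x(x) == gfn_x(y);
    }

    #define INVALID_GFN _gfn(~0UL)

This is why raw comparisons such as gfn == INVALID_GFN have to become
either gfn_eq(gfn, INVALID_GFN) or, where the value is still a plain
unsigned long, gfn == gfn_x(INVALID_GFN).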

* Re: [PATCH v5 08/17] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn
  2016-06-28 16:17 ` [PATCH v5 08/17] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn Julien Grall
@ 2016-07-06 10:32   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:32 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> The prototype and the declaration of p2m_lookup disagree on how the
> function should be used. One expects a frame number whilst the other
> expects an address.
> 
> Thankfully, every caller is using it with an address today. However,
> most of the callers have to convert a guest frame to an address. Modify
> the interface to take a guest physical frame as a parameter and return
> a machine frame.
> 
> Whilst modifying the interface, use the typesafe gfn and mfn for
> clarity and to catch possible misuse.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Use INVALID_MFN_T when possible
> ---
>  xen/arch/arm/p2m.c        | 43 +++++++++++++++++++++++--------------------
>  xen/arch/arm/traps.c      | 21 +++++++++++----------
>  xen/include/asm-arm/p2m.h |  7 +++----
>  3 files changed, 37 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c938dde..54a363a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -140,14 +140,15 @@ void flush_tlb_domain(struct domain *d)
>  }
>  
>  /*
> - * Lookup the MFN corresponding to a domain's PFN.
> + * Lookup the MFN corresponding to a domain's GFN.
>   *
>   * There are no processor functions to do a stage 2 only lookup therefore we
>   * do a a software walk.
>   */
> -static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
> +static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
> +    const paddr_t paddr = pfn_to_paddr(gfn_x(gfn));
>      const unsigned int offsets[4] = {
>          zeroeth_table_offset(paddr),
>          first_table_offset(paddr),
> @@ -158,7 +159,7 @@ static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
>          ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK
>      };
>      lpae_t pte, *map;
> -    paddr_t maddr = INVALID_PADDR;
> +    mfn_t mfn = INVALID_MFN;
>      paddr_t mask = 0;
>      p2m_type_t _t;
>      unsigned int level, root_table;
> @@ -216,21 +217,22 @@ static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
>      {
>          ASSERT(mask);
>          ASSERT(pte.p2m.type != p2m_invalid);
> -        maddr = (pte.bits & PADDR_MASK & mask) | (paddr & ~mask);
> +        mfn = _mfn(paddr_to_pfn((pte.bits & PADDR_MASK & mask) |
> +                                (paddr & ~mask)));
>          *t = pte.p2m.type;
>      }
>  
>  err:
> -    return maddr;
> +    return mfn;
>  }
>  
> -paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
> +mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>  {
> -    paddr_t ret;
> +    mfn_t ret;
>      struct p2m_domain *p2m = &d->arch.p2m;
>  
>      spin_lock(&p2m->lock);
> -    ret = __p2m_lookup(d, paddr, t);
> +    ret = __p2m_lookup(d, gfn, t);
>      spin_unlock(&p2m->lock);
>  
>      return ret;
> @@ -493,8 +495,9 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>           * No setting was found in the Radix tree. Check if the
>           * entry exists in the page-tables.
>           */
> -        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
> -        if ( INVALID_PADDR == maddr )
> +        mfn_t mfn = __p2m_lookup(d, gfn, NULL);
> +
> +        if ( mfn_eq(mfn, INVALID_MFN) )
>              return -ESRCH;
>  
>          /* If entry exists then its rwx. */
> @@ -1483,8 +1486,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
>  
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>  {
> -    paddr_t p = p2m_lookup(d, pfn_to_paddr(gfn_x(gfn)), NULL);
> -    return _mfn(p >> PAGE_SHIFT);
> +    return p2m_lookup(d, gfn, NULL);
>  }
>  
>  /*
> @@ -1498,8 +1500,8 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>  {
>      long rc;
>      paddr_t ipa;
> -    unsigned long maddr;
> -    unsigned long mfn;
> +    gfn_t gfn;
> +    mfn_t mfn;
>      xenmem_access_t xma;
>      p2m_type_t t;
>      struct page_info *page = NULL;
> @@ -1508,11 +1510,13 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>      if ( rc < 0 )
>          goto err;
>  
> +    gfn = _gfn(paddr_to_pfn(ipa));
> +
>      /*
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
>       */
> -    rc = __p2m_get_mem_access(current->domain, _gfn(paddr_to_pfn(ipa)), &xma);
> +    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
>      if ( rc < 0 )
>          goto err;
>  
> @@ -1561,12 +1565,11 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>       * We had a mem_access permission limiting the access, but the page type
>       * could also be limiting, so we need to check that as well.
>       */
> -    maddr = __p2m_lookup(current->domain, ipa, &t);
> -    if ( maddr == INVALID_PADDR )
> +    mfn = __p2m_lookup(current->domain, gfn, &t);
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>          goto err;
>  
> -    mfn = maddr >> PAGE_SHIFT;
> -    if ( !mfn_valid(mfn) )
> +    if ( !mfn_valid(mfn_x(mfn)) )
>          goto err;
>  
>      /*
> @@ -1575,7 +1578,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>      if ( t != p2m_ram_rw )
>          goto err;
>  
> -    page = mfn_to_page(mfn);
> +    page = mfn_to_page(mfn_x(mfn));
>  
>      if ( unlikely(!get_page(page, current->domain)) )
>          page = NULL;
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 44926ca..b653f61 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2319,14 +2319,16 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
>  {
>      register_t ttbcr = READ_SYSREG(TCR_EL1);
>      uint64_t ttbr0 = READ_SYSREG64(TTBR0_EL1);
> -    paddr_t paddr;
>      uint32_t offset;
>      uint32_t *first = NULL, *second = NULL;
> +    mfn_t mfn;
> +
> +    mfn = p2m_lookup(d, _gfn(paddr_to_pfn(ttbr0)), NULL);
>  
>      printk("dom%d VA 0x%08"PRIvaddr"\n", d->domain_id, addr);
>      printk("    TTBCR: 0x%08"PRIregister"\n", ttbcr);
>      printk("    TTBR0: 0x%016"PRIx64" = 0x%"PRIpaddr"\n",
> -           ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL));
> +           ttbr0, pfn_to_paddr(mfn_x(mfn)));
>  
>      if ( ttbcr & TTBCR_EAE )
>      {
> @@ -2339,32 +2341,31 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
>          return;
>      }
>  
> -    paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL);
> -    if ( paddr == INVALID_PADDR )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>      {
>          printk("Failed TTBR0 maddr lookup\n");
>          goto done;
>      }
> -    first = map_domain_page(_mfn(paddr_to_pfn(paddr)));
> +    first = map_domain_page(mfn);
>  
>      offset = addr >> (12+10);
>      printk("1ST[0x%"PRIx32"] (0x%"PRIpaddr") = 0x%08"PRIx32"\n",
> -           offset, paddr, first[offset]);
> +           offset, pfn_to_paddr(mfn_x(mfn)), first[offset]);
>      if ( !(first[offset] & 0x1) ||
>           !(first[offset] & 0x2) )
>          goto done;
>  
> -    paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL);
> +    mfn = p2m_lookup(d, _gfn(paddr_to_pfn(first[offset])), NULL);
>  
> -    if ( paddr == INVALID_PADDR )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>      {
>          printk("Failed L1 entry maddr lookup\n");
>          goto done;
>      }
> -    second = map_domain_page(_mfn(paddr_to_pfn(paddr)));
> +    second = map_domain_page(mfn);
>      offset = (addr >> 12) & 0x3FF;
>      printk("2ND[0x%"PRIx32"] (0x%"PRIpaddr") = 0x%08"PRIx32"\n",
> -           offset, paddr, second[offset]);
> +           offset, pfn_to_paddr(mfn_x(mfn)), second[offset]);
>  
>  done:
>      if (second) unmap_domain_page(second);
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 0d1e61e..f204482 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -135,8 +135,8 @@ void p2m_restore_state(struct vcpu *n);
>  /* Print debugging/statistial info about a domain's p2m */
>  void p2m_dump_info(struct domain *d);
>  
> -/* Look up the MFN corresponding to a domain's PFN. */
> -paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
> +/* Look up the MFN corresponding to a domain's GFN. */
> +mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>  
>  /* Clean & invalidate caches corresponding to a region of guest address space */
>  int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> @@ -201,8 +201,7 @@ static inline struct page_info *get_page_from_gfn(
>  {
>      struct page_info *page;
>      p2m_type_t p2mt;
> -    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
> -    unsigned long mfn = maddr >> PAGE_SHIFT;
> +    unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
>  
>      if (t)
>          *t = p2mt;
> -- 
> 1.9.1
> 

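To illustrate the reworked calling convention, here is a condensed sketch
based on the p2m_mem_access_check_and_get_page hunk above (error handling
shortened; ipa is a guest physical address):

    p2m_type_t t;
    gfn_t gfn = _gfn(paddr_to_pfn(ipa));   /* address -> guest frame */
    mfn_t mfn = p2m_lookup(d, gfn, &t);    /* guest frame -> machine frame */

    if ( mfn_eq(mfn, INVALID_MFN) )
        goto err;                          /* no mapping at this GFN */

    /* Convert back to an address only at the point one is really needed. */
    paddr_t maddr = pfn_to_paddr(mfn_x(mfn)) | (ipa & ~PAGE_MASK);

The frame-based interface also retires the INVALID_PADDR sentinel: callers
now test against INVALID_MFN with mfn_eq instead.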

* Re: [PATCH v5 09/17] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn
  2016-06-28 16:17 ` [PATCH v5 09/17] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn Julien Grall
@ 2016-07-06 10:35   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:35 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> p2m_cache_flush expects GFNs as parameters, not MFNs. Rename the
> variables to *gfn* and use the typesafe gfn to avoid possible misuse.
> 
> Also, modify the prototype of the function to describe the range
> using the start and the number of GFNs. This avoids having to wonder
> whether the end is inclusive or exclusive.
> 
> Note that the type of the parameter 'start' is changed from xen_pfn_t
> (aka uint64_t) to gfn_t (aka unsigned long). This means that a
> truncation will occur on ARM32. This is fine because the GFN will
> always fit in at most 28 bits (40-bit addresses).
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - This patch was originally called "xen/arm: p2m_cache_flush:
>         Use the correct terminology and typesafe gfn"
>         - Describe the range using the start and the number of GFNs.
> 
>     Changes in v3:
>         - Add a word in the commit message about the truncation.
> 
>     Changes in v2:
>         - Drop _gfn suffix
> ---
>  xen/arch/arm/domctl.c     |  2 +-
>  xen/arch/arm/p2m.c        | 11 ++++++-----
>  xen/include/asm-arm/p2m.h |  2 +-
>  3 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 30453d8..f61f98a 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -30,7 +30,7 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>          if ( e < s )
>              return -EINVAL;
>  
> -        return p2m_cache_flush(d, s, e);
> +        return p2m_cache_flush(d, _gfn(s), domctl->u.cacheflush.nr_pfns);
>      }
>      case XEN_DOMCTL_bind_pt_irq:
>      {
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 54a363a..1cfb62b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1469,16 +1469,17 @@ int relinquish_p2m_mapping(struct domain *d)
>                                d->arch.p2m.default_access);
>  }
>  
> -int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
> +    gfn_t end = gfn_add(start, nr);
>  
> -    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> -    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +    start = gfn_max(start, _gfn(p2m->lowest_mapped_gfn));
> +    end = gfn_min(end, _gfn(p2m->max_mapped_gfn));
>  
>      return apply_p2m_changes(d, CACHEFLUSH,
> -                             pfn_to_paddr(start_mfn),
> -                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(gfn_x(start)),
> +                             pfn_to_paddr(gfn_x(end)),
>                               pfn_to_paddr(mfn_x(INVALID_MFN)),
>                               MATTR_MEM, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index f204482..8a96e68 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -139,7 +139,7 @@ void p2m_dump_info(struct domain *d);
>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>  
>  /* Clean & invalidate caches corresponding to a region of guest address space */
> -int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>  
>  /* Setup p2m RAM mapping for domain d from start-end. */
>  int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
> -- 
> 1.9.1
> 

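For reference, the gfn_add/gfn_max/gfn_min helpers used in the hunk above
are thin wrappers around the underlying values; a sketch consistent with
their definitions in xen/include/xen/mm.h:

    static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
    {
        return _gfn(gfn_x(gfn) + i);
    }

    static inline gfn_t gfn_max(gfn_t x, gfn_t y)
    {
        return _gfn(max(gfn_x(x), gfn_x(y)));
    }

    static inline gfn_t gfn_min(gfn_t x, gfn_t y)
    {
        return _gfn(min(gfn_x(x), gfn_x(y)));
    }

Describing the range as (start, nr) means the exclusive end is computed in
exactly one place, gfn_add(start, nr), rather than by every caller.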

* Re: [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-06-28 16:17 ` [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access Julien Grall
@ 2016-07-06 10:43   ` Stefano Stabellini
  2016-07-06 11:10     ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:43 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, Shannon Zhao, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> The parameter 'access' is used by memaccess to temporarily restrict
> the permissions. This parameter should not be used for any other
> purpose (such as permanently restricting the permissions).
> 
> The type p2m_mmio_direct will map the region Read-Write and
> non-executable. Note that this is already the current behavior with the
> combination of the type and the access. So there is no functional
> change.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

I could be mistaken, but isn't default_access actually p2m_access_rwx?


> ---
> Cc: Shannon Zhao <shannon.zhao@linaro.org>
> 
>     This patch is a candidate for Xen 4.7. Currently this function is
>     only used to map ACPI regions.
> 
>     I am wondering if we should introduce a new p2m type for it and map
>     this region RO (I am not sure why a guest would want to modify this
>     region).
> 
>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/p2m.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 1cfb62b..fcc4513 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1231,7 +1231,7 @@ int map_regions_rw_cache(struct domain *d,
>                               pfn_to_paddr(start_gfn + nr),
>                               pfn_to_paddr(mfn),
>                               MATTR_MEM, 0, p2m_mmio_direct,
> -                             p2m_access_rw);
> +                             d->arch.p2m.default_access);
>  }
>  
>  int unmap_regions_rw_cache(struct domain *d,
> @@ -1244,7 +1244,7 @@ int unmap_regions_rw_cache(struct domain *d,
>                               pfn_to_paddr(start_gfn + nr),
>                               pfn_to_paddr(mfn),
>                               MATTR_MEM, 0, p2m_invalid,
> -                             p2m_access_rw);
> +                             d->arch.p2m.default_access);
>  }
>  
>  int map_mmio_regions(struct domain *d,
> -- 
> 1.9.1
> 

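For context on the question above: default_access is initialised in
p2m_init and only changes when memaccess is in use, as the p2m_init hunk
quoted later in this thread (patch 16) shows:

    /* From p2m_init(): the default stays rwx until memaccess changes it. */
    p2m->default_access = p2m_access_rwx;
    p2m->mem_access_enabled = false;

So with memaccess unused, passing d->arch.p2m.default_access here does mean
p2m_access_rwx; per the commit message, execute permission is still denied
by the p2m_mmio_direct type itself, which is what keeps the behaviour
unchanged.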

* Re: [PATCH v5 11/17] xen/arm: dom0_build: Remove dead code in allocate_memory
  2016-06-28 16:17 ` [PATCH v5 11/17] xen/arm: dom0_build: Remove dead code in allocate_memory Julien Grall
@ 2016-07-06 10:46   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:46 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> The code to allocate memory when dom0 does not use direct mapping
> relies on the presence of memory nodes in the DT.
> 
> However, these nodes are not present when booting via UEFI or when
> using ACPI.
> 
> Rather than fixing the code, remove it, because dom0 is always direct
> memory mapped and therefore the code is never exercised. Also add a
> check to catch the case where direct memory mapping is disabled but
> the associated RAM bank allocation is not implemented.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/domain_build.c | 58 ++++++---------------------------------------
>  1 file changed, 7 insertions(+), 51 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 49185f0..923f48a 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -235,7 +235,7 @@ fail:
>   * (as described above) we allow higher allocations and continue until
>   * that runs out (or we have allocated sufficient dom0 memory).
>   */
> -static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
> +static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
>  {
>      const unsigned int min_low_order =
>          get_order_from_bytes(min_t(paddr_t, dom0_mem, MB(128)));
> @@ -247,6 +247,12 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>      bool_t lowmem = is_32bit_domain(d);
>      unsigned int bits;
>  
> +    /*
> +     * TODO: Implement memory bank allocation when DOM0 is not direct
> +     * mapped
> +     */
> +    BUG_ON(!dom0_11_mapping);
> +
>      printk("Allocating 1:1 mappings totalling %ldMB for dom0:\n",
>             /* Don't want format this as PRIpaddr (16 digit hex) */
>             (unsigned long)(kinfo->unassigned_mem >> 20));
> @@ -343,56 +349,6 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>      }
>  }
>  
> -static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
> -{
> -
> -    struct dt_device_node *memory = NULL;
> -    const void *reg;
> -    u32 reg_len, reg_size;
> -    unsigned int bank = 0;
> -
> -    if ( dom0_11_mapping )
> -        return allocate_memory_11(d, kinfo);
> -
> -    while ( (memory = dt_find_node_by_type(memory, "memory")) )
> -    {
> -        int l;
> -
> -        dt_dprintk("memory node\n");
> -
> -        reg_size = dt_cells_to_size(dt_n_addr_cells(memory) + dt_n_size_cells(memory));
> -
> -        reg = dt_get_property(memory, "reg", &reg_len);
> -        if ( reg == NULL )
> -            panic("Memory node has no reg property");
> -
> -        for ( l = 0;
> -              kinfo->unassigned_mem > 0 && l + reg_size <= reg_len
> -                  && kinfo->mem.nr_banks < NR_MEM_BANKS;
> -              l += reg_size )
> -        {
> -            paddr_t start, size;
> -
> -            if ( dt_device_get_address(memory, bank, &start, &size) )
> -                panic("Unable to retrieve the bank %u for %s",
> -                      bank, dt_node_full_name(memory));
> -
> -            if ( size > kinfo->unassigned_mem )
> -                size = kinfo->unassigned_mem;
> -
> -            printk("Populate P2M %#"PRIx64"->%#"PRIx64"\n",
> -                   start, start + size);
> -            if ( p2m_populate_ram(d, start, start + size) < 0 )
> -                panic("Failed to populate P2M");
> -            kinfo->mem.bank[kinfo->mem.nr_banks].start = start;
> -            kinfo->mem.bank[kinfo->mem.nr_banks].size = size;
> -            kinfo->mem.nr_banks++;
> -
> -            kinfo->unassigned_mem -= size;
> -        }
> -    }
> -}
> -
>  static int write_properties(struct domain *d, struct kernel_info *kinfo,
>                              const struct dt_device_node *node)
>  {
> -- 
> 1.9.1
> 


* Re: [PATCH v5 12/17] xen/arm: p2m: Remove unused operation ALLOCATE
  2016-06-28 16:17 ` [PATCH v5 12/17] xen/arm: p2m: Remove unused operation ALLOCATE Julien Grall
@ 2016-07-06 10:49   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:49 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> The operation ALLOCATE is unused. If we ever need it, it could be
> reimplemented with INSERT.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/p2m.c        | 67 ++---------------------------------------------
>  xen/include/asm-arm/p2m.h |  3 ---
>  2 files changed, 2 insertions(+), 68 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index fcc4513..f11094e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -547,7 +547,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
>  
>  enum p2m_operation {
>      INSERT,
> -    ALLOCATE,
>      REMOVE,
>      RELINQUISH,
>      CACHEFLUSH,
> @@ -667,7 +666,6 @@ static int apply_one_level(struct domain *d,
>  {
>      const paddr_t level_size = level_sizes[level];
>      const paddr_t level_mask = level_masks[level];
> -    const paddr_t level_shift = level_shifts[level];
>  
>      struct p2m_domain *p2m = &d->arch.p2m;
>      lpae_t pte;
> @@ -678,58 +676,6 @@ static int apply_one_level(struct domain *d,
>  
>      switch ( op )
>      {
> -    case ALLOCATE:
> -        ASSERT(level < 3 || !p2m_valid(orig_pte));
> -        ASSERT(*maddr == 0);
> -
> -        if ( p2m_valid(orig_pte) )
> -            return P2M_ONE_DESCEND;
> -
> -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
> -           /* We only create superpages when mem_access is not in use. */
> -             (level == 3 || (level < 3 && !p2m->mem_access_enabled)) )
> -        {
> -            struct page_info *page;
> -
> -            page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
> -            if ( page )
> -            {
> -                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
> -                if ( rc < 0 )
> -                {
> -                    free_domheap_page(page);
> -                    return rc;
> -                }
> -
> -                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
> -                if ( level < 3 )
> -                    pte.p2m.table = 0;
> -                p2m_write_pte(entry, pte, flush_cache);
> -                p2m->stats.mappings[level]++;
> -
> -                *addr += level_size;
> -
> -                return P2M_ONE_PROGRESS;
> -            }
> -            else if ( level == 3 )
> -                return -ENOMEM;
> -        }
> -
> -        /* L3 is always suitably aligned for mapping (handled, above) */
> -        BUG_ON(level == 3);
> -
> -        /*
> -         * If we get here then we failed to allocate a sufficiently
> -         * large contiguous region for this level (which can't be
> -         * L3) or mem_access is in use. Create a page table and
> -         * continue to descend so we try smaller allocations.
> -         */
> -        rc = p2m_create_table(d, entry, 0, flush_cache);
> -        if ( rc < 0 )
> -            return rc;
> -
> -        return P2M_ONE_DESCEND;
> -
>      case INSERT:
>          if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
>             /*
> @@ -1169,7 +1115,7 @@ static int apply_p2m_changes(struct domain *d,
>          }
>      }
>  
> -    if ( op == ALLOCATE || op == INSERT )
> +    if ( op == INSERT )
>      {
>          p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
>          p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
> @@ -1197,7 +1143,7 @@ out:
>  
>      spin_unlock(&p2m->lock);
>  
> -    if ( rc < 0 && ( op == INSERT || op == ALLOCATE ) &&
> +    if ( rc < 0 && ( op == INSERT ) &&
>           addr != start_gpaddr )
>      {
>          BUG_ON(addr == end_gpaddr);
> @@ -1212,15 +1158,6 @@ out:
>      return rc;
>  }
>  
> -int p2m_populate_ram(struct domain *d,
> -                     paddr_t start,
> -                     paddr_t end)
> -{
> -    return apply_p2m_changes(d, ALLOCATE, start, end,
> -                             0, MATTR_MEM, 0, p2m_ram_rw,
> -                             d->arch.p2m.default_access);
> -}
> -
>  int map_regions_rw_cache(struct domain *d,
>                           unsigned long start_gfn,
>                           unsigned long nr,
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 8a96e68..4752161 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -141,9 +141,6 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>  /* Clean & invalidate caches corresponding to a region of guest address space */
>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>  
> -/* Setup p2m RAM mapping for domain d from start-end. */
> -int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
> -
>  int map_regions_rw_cache(struct domain *d,
>                           unsigned long start_gfn,
>                           unsigned long nr_mfns,
> -- 
> 1.9.1
> 

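If ALLOCATE were ever needed again, a reimplementation on top of INSERT
could look roughly like the following. This is a hypothetical sketch, not
part of the patch, and p2m_alloc_and_insert is an invented name:

    /* Hypothetical: allocate backing pages one at a time and insert them
     * through the ordinary INSERT path instead of a dedicated operation. */
    static int p2m_alloc_and_insert(struct domain *d, gfn_t gfn,
                                    unsigned long nr, p2m_type_t t)
    {
        unsigned long i;

        for ( i = 0; i < nr; i++ )
        {
            struct page_info *page = alloc_domheap_pages(d, 0, 0);
            int rc;

            if ( !page )
                return -ENOMEM;

            rc = guest_physmap_add_entry(d, gfn_add(gfn, i),
                                         _mfn(page_to_mfn(page)), 0, t);
            if ( rc )
            {
                free_domheap_page(page);
                return rc;
            }
        }

        return 0;
    }

Unlike the removed code, this gives up superpage allocations, which seems
acceptable for a path that currently has no users.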

* Re: [PATCH v5 14/17] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache ...
  2016-06-28 16:17 ` [PATCH v5 14/17] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache Julien Grall
@ 2016-07-06 10:54   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:54 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> to avoid mixing machine frames with guest frames. Also rename the
> parameters of the function and drop the pointless PAGE_MASK in the
> caller.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/domain_build.c |  8 ++++----
>  xen/arch/arm/p2m.c          | 20 ++++++++++----------
>  xen/include/asm-arm/p2m.h   | 12 ++++++------
>  3 files changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 923f48a..60db9e4 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1522,9 +1522,9 @@ static void acpi_map_other_tables(struct domain *d)
>          addr = acpi_gbl_root_table_list.tables[i].address;
>          size = acpi_gbl_root_table_list.tables[i].length;
>          res = map_regions_rw_cache(d,
> -                                   paddr_to_pfn(addr & PAGE_MASK),
> +                                   _gfn(paddr_to_pfn(addr)),
>                                     DIV_ROUND_UP(size, PAGE_SIZE),
> -                                   paddr_to_pfn(addr & PAGE_MASK));
> +                                   _mfn(paddr_to_pfn(addr)));
>          if ( res )
>          {
>               panic(XENLOG_ERR "Unable to map ACPI region 0x%"PRIx64
> @@ -1878,9 +1878,9 @@ static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
>  
>      /* Map the EFI and ACPI tables to Dom0 */
>      rc = map_regions_rw_cache(d,
> -                              paddr_to_pfn(d->arch.efi_acpi_gpa),
> +                              _gfn(paddr_to_pfn(d->arch.efi_acpi_gpa)),
>                                PFN_UP(d->arch.efi_acpi_len),
> -                              paddr_to_pfn(virt_to_maddr(d->arch.efi_acpi_table)));
> +                              _mfn(paddr_to_pfn(virt_to_maddr(d->arch.efi_acpi_table))));
>      if ( rc != 0 )
>      {
>          printk(XENLOG_ERR "Unable to map EFI/ACPI table 0x%"PRIx64
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 5ffc3df..0fdd11f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1159,27 +1159,27 @@ out:
>  }
>  
>  int map_regions_rw_cache(struct domain *d,
> -                         unsigned long start_gfn,
> +                         gfn_t gfn,
>                           unsigned long nr,
> -                         unsigned long mfn)
> +                         mfn_t mfn)
>  {
>      return apply_p2m_changes(d, INSERT,
> -                             pfn_to_paddr(start_gfn),
> -                             pfn_to_paddr(start_gfn + nr),
> -                             pfn_to_paddr(mfn),
> +                             pfn_to_paddr(gfn_x(gfn)),
> +                             pfn_to_paddr(gfn_x(gfn) + nr),
> +                             pfn_to_paddr(mfn_x(mfn)),
>                               MATTR_MEM, 0, p2m_mmio_direct,
>                               d->arch.p2m.default_access);
>  }
>  
>  int unmap_regions_rw_cache(struct domain *d,
> -                           unsigned long start_gfn,
> +                           gfn_t gfn,
>                             unsigned long nr,
> -                           unsigned long mfn)
> +                           mfn_t mfn)
>  {
>      return apply_p2m_changes(d, REMOVE,
> -                             pfn_to_paddr(start_gfn),
> -                             pfn_to_paddr(start_gfn + nr),
> -                             pfn_to_paddr(mfn),
> +                             pfn_to_paddr(gfn_x(gfn)),
> +                             pfn_to_paddr(gfn_x(gfn) + nr),
> +                             pfn_to_paddr(mfn_x(mfn)),
>                               MATTR_MEM, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
>  }
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 8d29eda..6e258b9 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -142,14 +142,14 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>  
>  int map_regions_rw_cache(struct domain *d,
> -                         unsigned long start_gfn,
> -                         unsigned long nr_mfns,
> -                         unsigned long mfn);
> +                         gfn_t gfn,
> +                         unsigned long nr,
> +                         mfn_t mfn);
>  
>  int unmap_regions_rw_cache(struct domain *d,
> -                           unsigned long start_gfn,
> -                           unsigned long nr_mfns,
> -                           unsigned long mfn);
> +                           gfn_t gfn,
> +                           unsigned long nr,
> +                           mfn_t mfn);
>  
>  int map_dev_mmio_region(struct domain *d,
>                          gfn_t gfn,
> -- 
> 1.9.1
> 

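On the "pointless PAGE_MASK" mentioned in the commit message: paddr_to_pfn
already shifts the page-offset bits away, so masking first is a no-op. A
sketch, assuming the usual arm definitions:

    /* With PAGE_SHIFT == 12 and PAGE_MASK == ~(PAGE_SIZE - 1): */
    #define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))

    /*
     * The low 12 bits are discarded by the shift either way, hence
     * paddr_to_pfn(addr & PAGE_MASK) == paddr_to_pfn(addr).
     */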

* Re: [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping
  2016-06-28 16:17 ` [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
  2016-06-28 17:23   ` Andrew Cooper
@ 2016-07-06 10:59   ` Stefano Stabellini
  2016-07-06 11:20     ` Julien Grall
  1 sibling, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 10:59 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> More than half of the arguments of INSERT and REMOVE are the same for
> each caller. Simplify the callers of apply_p2m_changes by adding new
> helpers which fill the common arguments with default values.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

I don't see much value in this patch. It looks good because it comes
before "Rework the interface of apply_p2m_changes and use typesafe" in
this series (and therefore eliminates a bunch of temporary casts), but
otherwise I don't think it would be much of an improvement.


> ---
>     Changes in v5:
>         - Add missing Signed-off-by
> 
>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/p2m.c | 70 ++++++++++++++++++++++++++++--------------------------
>  1 file changed, 36 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 0fdd11f..a5b584b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1158,17 +1158,40 @@ out:
>      return rc;
>  }
>  
> +static inline int p2m_insert_mapping(struct domain *d,
> +                                     gfn_t start_gfn,
> +                                     unsigned long nr,
> +                                     mfn_t mfn,
> +                                     int mattr, p2m_type_t t)
> +{
> +    return apply_p2m_changes(d, INSERT,
> +                             pfn_to_paddr(gfn_x(start_gfn)),
> +                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> +                             pfn_to_paddr(mfn_x(mfn)),
> +                             mattr, 0, t, d->arch.p2m.default_access);
> +}
> +
> +static inline int p2m_remove_mapping(struct domain *d,
> +                                     gfn_t start_gfn,
> +                                     unsigned long nr,
> +                                     mfn_t mfn)
> +{
> +    return apply_p2m_changes(d, REMOVE,
> +                             pfn_to_paddr(gfn_x(start_gfn)),
> +                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> +                             pfn_to_paddr(mfn_x(mfn)),
> +                             /* arguments below not used when removing mapping */
> +                             MATTR_MEM, 0, p2m_invalid,
> +                             d->arch.p2m.default_access);
> +}
> +
>  int map_regions_rw_cache(struct domain *d,
>                           gfn_t gfn,
>                           unsigned long nr,
>                           mfn_t mfn)
>  {
> -    return apply_p2m_changes(d, INSERT,
> -                             pfn_to_paddr(gfn_x(gfn)),
> -                             pfn_to_paddr(gfn_x(gfn) + nr),
> -                             pfn_to_paddr(mfn_x(mfn)),
> -                             MATTR_MEM, 0, p2m_mmio_direct,
> -                             d->arch.p2m.default_access);
> +    return p2m_insert_mapping(d, gfn, nr, mfn,
> +                              MATTR_MEM, p2m_mmio_direct);
>  }
>  
>  int unmap_regions_rw_cache(struct domain *d,
> @@ -1176,12 +1199,7 @@ int unmap_regions_rw_cache(struct domain *d,
>                             unsigned long nr,
>                             mfn_t mfn)
>  {
> -    return apply_p2m_changes(d, REMOVE,
> -                             pfn_to_paddr(gfn_x(gfn)),
> -                             pfn_to_paddr(gfn_x(gfn) + nr),
> -                             pfn_to_paddr(mfn_x(mfn)),
> -                             MATTR_MEM, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +    return p2m_remove_mapping(d, gfn, nr, mfn);
>  }
>  
>  int map_mmio_regions(struct domain *d,
> @@ -1189,12 +1207,8 @@ int map_mmio_regions(struct domain *d,
>                       unsigned long nr,
>                       mfn_t mfn)
>  {
> -    return apply_p2m_changes(d, INSERT,
> -                             pfn_to_paddr(gfn_x(start_gfn)),
> -                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> -                             pfn_to_paddr(mfn_x(mfn)),
> -                             MATTR_DEV, 0, p2m_mmio_direct,
> -                             d->arch.p2m.default_access);
> +    return p2m_insert_mapping(d, start_gfn, nr, mfn,
> +                              MATTR_MEM, p2m_mmio_direct);
>  }
>  
>  int unmap_mmio_regions(struct domain *d,
> @@ -1202,12 +1216,7 @@ int unmap_mmio_regions(struct domain *d,
>                         unsigned long nr,
>                         mfn_t mfn)
>  {
> -    return apply_p2m_changes(d, REMOVE,
> -                             pfn_to_paddr(gfn_x(start_gfn)),
> -                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> -                             pfn_to_paddr(mfn_x(mfn)),
> -                             MATTR_DEV, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +    return p2m_remove_mapping(d, start_gfn, nr, mfn);
>  }
>  
>  int map_dev_mmio_region(struct domain *d,
> @@ -1237,22 +1246,15 @@ int guest_physmap_add_entry(struct domain *d,
>                              unsigned long page_order,
>                              p2m_type_t t)
>  {
> -    return apply_p2m_changes(d, INSERT,
> -                             pfn_to_paddr(gfn_x(gfn)),
> -                             pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
> -                             pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, t,
> -                             d->arch.p2m.default_access);
> +    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
> +                              MATTR_MEM, t);
>  }
>  
>  void guest_physmap_remove_page(struct domain *d,
>                                 gfn_t gfn,
>                                 mfn_t mfn, unsigned int page_order)
>  {
> -    apply_p2m_changes(d, REMOVE,
> -                      pfn_to_paddr(gfn_x(gfn)),
> -                      pfn_to_paddr(gfn_x(gfn) + (1<<page_order)),
> -                      pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
> -                      d->arch.p2m.default_access);
> +    p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>  }
>  
>  int p2m_alloc_table(struct domain *d)
> -- 
> 1.9.1
> 


* Re: [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe
  2016-06-28 16:17 ` [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe Julien Grall
@ 2016-07-06 11:06   ` Stefano Stabellini
  2016-07-06 11:56     ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 11:06 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> Most of the callers of apply_p2m_changes have a GFN, an MFN and the
> number of frames to change in hand.
> 
> Rather than asking each caller to convert the frames to addresses,
> rework the interface to pass the GFN, the MFN and the number of frames.
> 
> Note that it would be possible to do more clean-up in apply_p2m_changes,
> but this will be done in a follow-up series.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> ---
>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/p2m.c | 62 ++++++++++++++++++++++++------------------------------
>  1 file changed, 28 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 9fdc417..bb33a72 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -906,25 +906,26 @@ static void update_reference_mapping(struct page_info *page,
>  
>  static int apply_p2m_changes(struct domain *d,
>                       enum p2m_operation op,
> -                     paddr_t start_gpaddr,
> -                     paddr_t end_gpaddr,
> -                     paddr_t maddr,
> +                     gfn_t sgfn,
> +                     unsigned long nr,
> +                     mfn_t smfn,
>                       int mattr,
>                       uint32_t mask,
>                       p2m_type_t t,
>                       p2m_access_t a)
>  {
> +    paddr_t start_gpaddr = pfn_to_paddr(gfn_x(sgfn));
> +    paddr_t end_gpaddr = pfn_to_paddr(gfn_x(sgfn) + nr);
> +    paddr_t maddr = pfn_to_paddr(mfn_x(smfn));
>      int rc, ret;
>      struct p2m_domain *p2m = &d->arch.p2m;
>      lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
>      struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
> -    paddr_t addr, orig_maddr = maddr;
> +    paddr_t addr;
>      unsigned int level = 0;
>      unsigned int cur_root_table = ~0;
>      unsigned int cur_offset[4] = { ~0, ~0, ~0, ~0 };
>      unsigned int count = 0;
> -    const unsigned long sgfn = paddr_to_pfn(start_gpaddr),
> -                        egfn = paddr_to_pfn(end_gpaddr);
>      const unsigned int preempt_count_limit = (op == MEMACCESS) ? 1 : 0x2000;
>      const bool_t preempt = !is_idle_vcpu(current);
>      bool_t flush = false;
> @@ -986,9 +987,9 @@ static int apply_p2m_changes(struct domain *d,
>                   * Preempt setting mem_access permissions as required by XSA-89,
>                   * if it's not the last iteration.
>                   */
> -                uint32_t progress = paddr_to_pfn(addr) - sgfn + 1;
> +                uint32_t progress = paddr_to_pfn(addr) - gfn_x(sgfn) + 1;
>  
> -                if ( (egfn - sgfn) > progress && !(progress & mask) )
> +                if ( nr > progress && !(progress & mask) )
>                  {
>                      rc = progress;
>                      goto out;
> @@ -1117,8 +1118,9 @@ static int apply_p2m_changes(struct domain *d,
>  
>      if ( op == INSERT )
>      {
> -        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn, _gfn(egfn));
> -        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, _gfn(sgfn));
> +        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
> +                                      gfn_add(sgfn, nr));
> +        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
>      }
>  
>      rc = 0;
> @@ -1127,7 +1129,7 @@ out:
>      if ( flush )
>      {
>          flush_tlb_domain(d);
> -        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
> +        ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
>          if ( !rc )
>              rc = ret;
>      }
> @@ -1146,12 +1148,14 @@ out:
>      if ( rc < 0 && ( op == INSERT ) &&
>           addr != start_gpaddr )
>      {
> +        unsigned long gfn = paddr_to_pfn(addr);
> +
>          BUG_ON(addr == end_gpaddr);
>          /*
>           * addr keeps the address of the end of the last successfully-inserted
>           * mapping.
>           */
> -        apply_p2m_changes(d, REMOVE, start_gpaddr, addr, orig_maddr,
> +        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,

Worth considering a gfn_sub (we already have gfn_add)? 

In any case

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

>                            mattr, 0, p2m_invalid, d->arch.p2m.default_access);
>      }
>  
> @@ -1164,10 +1168,7 @@ static inline int p2m_insert_mapping(struct domain *d,
>                                       mfn_t mfn,
>                                       int mattr, p2m_type_t t)
>  {
> -    return apply_p2m_changes(d, INSERT,
> -                             pfn_to_paddr(gfn_x(start_gfn)),
> -                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> -                             pfn_to_paddr(mfn_x(mfn)),
> +    return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
>                               mattr, 0, t, d->arch.p2m.default_access);
>  }
>  
> @@ -1176,10 +1177,7 @@ static inline int p2m_remove_mapping(struct domain *d,
>                                       unsigned long nr,
>                                       mfn_t mfn)
>  {
> -    return apply_p2m_changes(d, REMOVE,
> -                             pfn_to_paddr(gfn_x(start_gfn)),
> -                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> -                             pfn_to_paddr(mfn_x(mfn)),
> +    return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
>                               /* arguments below not used when removing mapping */
>                               MATTR_MEM, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
> @@ -1399,13 +1397,13 @@ err:
>  int relinquish_p2m_mapping(struct domain *d)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
> +    unsigned long nr;
>  
> -    return apply_p2m_changes(d, RELINQUISH,
> -                              pfn_to_paddr(gfn_x(p2m->lowest_mapped_gfn)),
> -                              pfn_to_paddr(gfn_x(p2m->max_mapped_gfn)),
> -                              pfn_to_paddr(mfn_x(INVALID_MFN)),
> -                              MATTR_MEM, 0, p2m_invalid,
> -                              d->arch.p2m.default_access);
> +    nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
> +
> +    return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
> +                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
> +                             d->arch.p2m.default_access);
>  }
>  
>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
> @@ -1416,10 +1414,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>      start = gfn_max(start, p2m->lowest_mapped_gfn);
>      end = gfn_min(end, p2m->max_mapped_gfn);
>  
> -    return apply_p2m_changes(d, CACHEFLUSH,
> -                             pfn_to_paddr(gfn_x(start)),
> -                             pfn_to_paddr(gfn_x(end)),
> -                             pfn_to_paddr(mfn_x(INVALID_MFN)),
> +    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
>                               MATTR_MEM, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
>  }
> @@ -1828,10 +1823,9 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>          return 0;
>      }
>  
> -    rc = apply_p2m_changes(d, MEMACCESS,
> -                           pfn_to_paddr(gfn_x(gfn) + start),
> -                           pfn_to_paddr(gfn_x(gfn) + nr),
> -                           0, MATTR_MEM, mask, 0, a);
> +    rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
> +                           (nr - start), INVALID_MFN,
> +                           MATTR_MEM, mask, 0, a);
>      if ( rc < 0 )
>          return rc;
>      else if ( rc > 0 )
> -- 
> 1.9.1
> 
> 

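For the gfn_sub suggested above: a counterpart to the existing gfn_add
might look like the sketch below (hypothetical; no such helper exists in
the tree at this point). A variant taking two gfn_t and returning the
distance between them would fit the quoted call site even better:

    static inline gfn_t gfn_sub(gfn_t gfn, unsigned long i)
    {
        return _gfn(gfn_x(gfn) - i);
    }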

* Re: [PATCH v5 16/17] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn
  2016-06-28 16:17 ` [PATCH v5 16/17] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn Julien Grall
@ 2016-07-06 11:07   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 11:07 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, xen-devel

On Tue, 28 Jun 2016, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/mm.c         |  2 +-
>  xen/arch/arm/p2m.c        | 18 +++++++++---------
>  xen/include/asm-arm/p2m.h |  4 ++--
>  3 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index b5fc034..4e256c2 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1004,7 +1004,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>  
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return d->arch.p2m.max_mapped_gfn;
> +    return gfn_x(d->arch.p2m.max_mapped_gfn);
>  }
>  
>  void share_xen_page_with_guest(struct page_info *page,
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a5b584b..9fdc417 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -976,7 +976,7 @@ static int apply_p2m_changes(struct domain *d,
>                   * This is set in preempt_count_limit.
>                   *
>                   */
> -                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
> +                p2m->lowest_mapped_gfn = _gfn(addr >> PAGE_SHIFT);
>                  rc = -ERESTART;
>                  goto out;
>  
> @@ -1117,8 +1117,8 @@ static int apply_p2m_changes(struct domain *d,
>  
>      if ( op == INSERT )
>      {
> -        p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
> -        p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
> +        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn, _gfn(egfn));
> +        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, _gfn(sgfn));
>      }
>  
>      rc = 0;
> @@ -1383,8 +1383,8 @@ int p2m_init(struct domain *d)
>  
>      p2m->root = NULL;
>  
> -    p2m->max_mapped_gfn = 0;
> -    p2m->lowest_mapped_gfn = ULONG_MAX;
> +    p2m->max_mapped_gfn = _gfn(0);
> +    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
>  
>      p2m->default_access = p2m_access_rwx;
>      p2m->mem_access_enabled = false;
> @@ -1401,8 +1401,8 @@ int relinquish_p2m_mapping(struct domain *d)
>      struct p2m_domain *p2m = &d->arch.p2m;
>  
>      return apply_p2m_changes(d, RELINQUISH,
> -                              pfn_to_paddr(p2m->lowest_mapped_gfn),
> -                              pfn_to_paddr(p2m->max_mapped_gfn),
> +                              pfn_to_paddr(gfn_x(p2m->lowest_mapped_gfn)),
> +                              pfn_to_paddr(gfn_x(p2m->max_mapped_gfn)),
>                                pfn_to_paddr(mfn_x(INVALID_MFN)),
>                                MATTR_MEM, 0, p2m_invalid,
>                                d->arch.p2m.default_access);
> @@ -1413,8 +1413,8 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>      struct p2m_domain *p2m = &d->arch.p2m;
>      gfn_t end = gfn_add(start, nr);
>  
> -    start = gfn_max(start, _gfn(p2m->lowest_mapped_gfn));
> -    end = gfn_min(end, _gfn(p2m->max_mapped_gfn));
> +    start = gfn_max(start, p2m->lowest_mapped_gfn);
> +    end = gfn_min(end, p2m->max_mapped_gfn);
>  
>      return apply_p2m_changes(d, CACHEFLUSH,
>                               pfn_to_paddr(gfn_x(start)),
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 6e258b9..34096bc 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -34,13 +34,13 @@ struct p2m_domain {
>      /* Highest guest frame that's ever been mapped in the p2m
>       * Only takes into account ram and foreign mapping
>       */
> -    unsigned long max_mapped_gfn;
> +    gfn_t max_mapped_gfn;
>  
>      /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
>       * preemptible manner this is update to track recall where to
>       * resume the search. Apart from during teardown this can only
>       * decrease. */
> -    unsigned long lowest_mapped_gfn;
> +    gfn_t lowest_mapped_gfn;
>  
>      /* Gather some statistics for information purposes only */
>      struct {
> -- 
> 1.9.1
> 


* Re: [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-07-06 10:43   ` Stefano Stabellini
@ 2016-07-06 11:10     ` Julien Grall
  2016-07-06 11:17       ` Stefano Stabellini
  2016-07-06 15:39       ` Tamas K Lengyel
  0 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2016-07-06 11:10 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Tamas K Lengyel, Shannon Zhao, xen-devel

(CC Tamas)

On 06/07/16 11:43, Stefano Stabellini wrote:
> On Tue, 28 Jun 2016, Julien Grall wrote:
>> The parameter 'access' is used by memaccess to temporarily restrict the
>> permission. This parameter should not be used for other purposes (such
>> as permanently restricting the permission).
>>
>> The type p2m_mmio_direct will map the region Read-Write and
>> non-executable. Note that this is already the current behavior with the
>> combination of the type and the access. So there is no functional
>> change.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>
> I could be mistaken, but isn't default_access actually p2m_access_rwx?

By default, the access is p2m_access_rwx. However this can be changed by 
memaccess and the new default value is stored in 
arch->p2m.default_access. I have CCed Tamas to confirm that.

Note that this is how the other calls are done.

Regards,

-- 
Julien Grall
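
To make the type-versus-access split concrete, here is an illustrative
sketch (the helper name and the reduced type list are assumptions, not
the Xen implementation): the p2m type sets the baseline permissions, and
a mem_access value can only mask them off, never widen them.

    typedef enum {
        p2m_ram_rw,       /* normal RAM: read/write/execute */
        p2m_ram_ro,       /* read-only RAM */
        p2m_mmio_direct,  /* direct MMIO: read/write, never executable */
    } p2m_type_t;

    typedef enum {
        p2m_access_r,
        p2m_access_rw,
        p2m_access_rwx,   /* the default until memaccess changes it */
    } p2m_access_t;

    /* Hypothetical helper, for illustration only. */
    static void effective_perms(p2m_type_t t, p2m_access_t a,
                                int *r, int *w, int *x)
    {
        /* Baseline from the type... */
        *r = 1;
        *w = (t != p2m_ram_ro);
        *x = (t == p2m_ram_rw);

        /* ...then memaccess restricts. */
        if ( a == p2m_access_r )
            *w = 0;
        if ( a != p2m_access_rwx )
            *x = 0;
    }

This is why passing default_access for a p2m_mmio_direct mapping is not
a loosening: the region stays read-write and non-executable either way.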


* Re: [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-07-06 11:10     ` Julien Grall
@ 2016-07-06 11:17       ` Stefano Stabellini
  2016-07-06 11:22         ` Julien Grall
  2016-07-06 15:39       ` Tamas K Lengyel
  1 sibling, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2016-07-06 11:17 UTC (permalink / raw)
  To: Julien Grall; +Cc: Stefano Stabellini, Tamas K Lengyel, Shannon Zhao, xen-devel

On Wed, 6 Jul 2016, Julien Grall wrote:
> (CC Tamas)
> 
> On 06/07/16 11:43, Stefano Stabellini wrote:
> > On Tue, 28 Jun 2016, Julien Grall wrote:
> > > The parameter 'access' is used by memaccess to temporarily restrict the
> > > permission. This parameter should not be used for other purposes (such
> > > as permanently restricting the permission).
> > > 
> > > The type p2m_mmio_direct will map the region Read-Write and
> > > non-executable. Note that this is already the current behavior with the
> > > combination of the type and the access. So there is no functional
> > > change.
> > > 
> > > Signed-off-by: Julien Grall <julien.grall@arm.com>
> > 
> > I could be mistaken, but isn't default_access actually p2m_access_rwx?
> 
> By default, the access is p2m_access_rwx. However this can be changed by
> memaccess and the new default value is stored in arch->p2m.default_access. I
> have CCed Tamas to confirm that.
> 
> Note that this is how the other calls are done.

This patch replaces p2m_access_rw with default_access, which is
p2m_access_rwx by default. Is it actually intended?


* Re: [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping
  2016-07-06 10:59   ` Stefano Stabellini
@ 2016-07-06 11:20     ` Julien Grall
  0 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2016-07-06 11:20 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel

Hi Stefano,

On 06/07/16 11:59, Stefano Stabellini wrote:
> On Tue, 28 Jun 2016, Julien Grall wrote:
>> More the half of the arguments of INSERT and REMOVE are the same for
>> each callers. Simplify the callers of apply_p2m_changes by adding new
>> helpers which will fill common arguments with default values.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>
> I don't see much value in this patch. It looks good because it is before
> "Rework the interface of apply_p2m_changes and use typesafe" in this
> series (therefore eliminates a bunch of temporary casts), but otherwise
> I don't think it would be much of an improvement.

A lot of the parameters of apply_p2m_changes may not apply to a specific 
set of operations. Every time we have to add a new parameter for a 
specific operation, we have to modify all the callers.

Also, with this patch, it is easier to understand what every function 
does without caring about dummy parameters.

For instance if we take the example of INSERT:

return apply_p2m_changes(d, INSERT,
                         pfn_to_paddr(gfn_x(start_gfn)),
                         pfn_to_paddr(gfn_x(start_gfn) + nr),
                         pfn_to_paddr(mfn_x(mfn)),
                         MATTR_DEV, 0, p2m_mmio_direct,
                         d->arch.p2m.default_access);

Aside from the pfn_to_paddr(*) calls, which will be cleaned up in a 
follow-up patch, 3 of those parameters (2, 7 and 9) are common to all 
the callers. It is hard to understand what '0' means without looking 
carefully at apply_p2m_changes.

Regards,

-- 
Julien Grall
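
A sketch of the shape such a helper takes, filling in the arguments
that never vary (the exact signature here is an assumption in the
series' style, not the committed interface):

    static inline int p2m_insert_mapping(struct domain *d, gfn_t start_gfn,
                                         unsigned long nr, mfn_t mfn,
                                         int mattr, p2m_type_t t)
    {
        return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
                                 mattr, 0 /* mask */, t,
                                 d->arch.p2m.default_access);
    }

The INSERT example above then collapses to the arguments that actually
carry information:

    return p2m_insert_mapping(d, start_gfn, nr, mfn,
                              MATTR_DEV, p2m_mmio_direct);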


* Re: [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-07-06 11:17       ` Stefano Stabellini
@ 2016-07-06 11:22         ` Julien Grall
  2016-07-06 11:44           ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-07-06 11:22 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Tamas K Lengyel, Shannon Zhao, xen-devel



On 06/07/16 12:17, Stefano Stabellini wrote:
> On Wed, 6 Jul 2016, Julien Grall wrote:
>> (CC Tamas)
>>
>> On 06/07/16 11:43, Stefano Stabellini wrote:
>>> On Tue, 28 Jun 2016, Julien Grall wrote:
>>>> The parameter 'access' is used by memaccess to temporarily restrict the
>>>> permission. This parameter should not be used for other purposes (such
>>>> as permanently restricting the permission).
>>>>
>>>> The type p2m_mmio_direct will map the region Read-Write and
>>>> non-executable. Note that this is already the current behavior with the
>>>> combination of the type and the access. So there is no functional
>>>> change.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>
>>> I could be mistaken, but isn't default_access actually p2m_access_rwx?
>>
>> By default, the access is p2m_access_rwx. However this can be changed by
>> memaccess and the new default value is stored in arch->p2m.default_access. I
>> have CCed Tamas to confirm that.
>>
>> Note that this is how the other calls are done.
>
> This patch replaces p2m_access_rw with default_access, which is
> p2m_access_rwx by default. Is it actually indended?

Yes, I explained why in the commit message.

"The type p2m_mmio_direct will map the region Read-Write and
non-executable. Note that this is already the current behavior with the
combination of the type and the access. So there is no functional
change."

Regards,

-- 
Julien Grall


* Re: [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-07-06 11:22         ` Julien Grall
@ 2016-07-06 11:44           ` Julien Grall
  0 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2016-07-06 11:44 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Tamas K Lengyel, Shannon Zhao, xen-devel



On 06/07/16 12:22, Julien Grall wrote:
>
>
> On 06/07/16 12:17, Stefano Stabellini wrote:
>> On Wed, 6 Jul 2016, Julien Grall wrote:
>>> (CC Tamas)
>>>
>>> On 06/07/16 11:43, Stefano Stabellini wrote:
>>>> On Tue, 28 Jun 2016, Julien Grall wrote:
>>>>> The parameter 'access' is used by memaccess to temporarily restrict
>>>>> the permission. This parameter should not be used for other purposes
>>>>> (such as permanently restricting the permission).
>>>>>
>>>>> The type p2m_mmio_direct will map the region Read-Write and
>>>>> non-executable. Note that this is already the current behavior with
>>>>> the
>>>>> combination of the type and the access. So there is no functional
>>>>> change.
>>>>>
>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>
>>>> I could be mistaken, but isn't default_access actually p2m_access_rwx?
>>>
>>> By default, the access is p2m_access_rwx. However this can be changed by
>>> memaccess and the new default value is stored in
>>> arch->p2m.default_access. I
>>> have CCed Tamas to confirm that.
>>>
>>> Note that this is how the other calls are done.
>>
>> This patch replaces p2m_access_rw with default_access, which is
>> p2m_access_rwx by default. Is it actually intended?
>
> Yes, I explained why in the commit message.
>
> "The type p2m_mmio_direct will map the region Read-Write and
> non-executable. Note that this is already the current behavior with the
> combination of the type and the access. So there is no functional
> change."

Thinking about it a bit more, I will reword the commit message as:

"The parameter 'access' used by memaccess to restrict temporarily the 
permission. This parameter should not be for other purpose (such as 
restricting permanently the permission).

Instead, we should use the default_access provided by memaccess. When it 
is not enabled, the access will be p2m_access_rwx (i.e no restriction 
applied).

The type p2m_mmio_direct will map the region read-write and 
non-executable before any further restriction by memaccess. note that 
this is already the current behavior with the combination of the type 
and the access. So there is no functional change".

Regards,

-- 
Julien Grall


* Re: [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe
  2016-07-06 11:06   ` Stefano Stabellini
@ 2016-07-06 11:56     ` Julien Grall
  2016-07-06 12:45       ` Andrew Cooper
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2016-07-06 11:56 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel

Hi Stefano,

On 06/07/16 12:06, Stefano Stabellini wrote:
> On Tue, 28 Jun 2016, Julien Grall wrote:
>> @@ -1146,12 +1148,14 @@ out:
>>       if ( rc < 0 && ( op == INSERT ) &&
>>            addr != start_gpaddr )
>>       {
>> +        unsigned long gfn = paddr_to_pfn(addr);
>> +
>>           BUG_ON(addr == end_gpaddr);
>>           /*
>>            * addr keeps the address of the end of the last successfully-inserted
>>            * mapping.
>>            */
>> -        apply_p2m_changes(d, REMOVE, start_gpaddr, addr, orig_maddr,
>> +        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
>
> Worth considering a gfn_sub (we already have gfn_add)?

I thought about it. However, the prototypes of gfn_sub and gfn_add would 
be different.

The former would subtract two gfns whilst the latter adds a value to a 
gfn. So I think it would be confusing for the user. Although, I am open 
to any other suggestion.

Note that I am working on a series to rework the P2M code to support 
Break-Before-Make, and a lot of this code will be different.

>
> In any case
>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you!

Regards,

-- 
Julien Grall
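
Spelled out with the gfn_t helpers sketched earlier in this thread, the
asymmetry is in the prototypes (gfn_add follows the in-tree helper;
gfn_sub is only the hypothetical variant under discussion):

    /* gfn + plain offset -> gfn */
    static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
    {
        return _gfn(gfn_x(gfn) + i);
    }

    /* gfn - gfn -> plain number of frames (hypothetical) */
    static inline unsigned long gfn_sub(gfn_t x, gfn_t y)
    {
        return gfn_x(x) - gfn_x(y);
    }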


* Re: [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe
  2016-07-06 11:56     ` Julien Grall
@ 2016-07-06 12:45       ` Andrew Cooper
  0 siblings, 0 replies; 49+ messages in thread
From: Andrew Cooper @ 2016-07-06 12:45 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini; +Cc: xen-devel

On 06/07/16 12:56, Julien Grall wrote:
> Hi Stefano,
>
> On 06/07/16 12:06, Stefano Stabellini wrote:
>> On Tue, 28 Jun 2016, Julien Grall wrote:
>>> @@ -1146,12 +1148,14 @@ out:
>>>       if ( rc < 0 && ( op == INSERT ) &&
>>>            addr != start_gpaddr )
>>>       {
>>> +        unsigned long gfn = paddr_to_pfn(addr);
>>> +
>>>           BUG_ON(addr == end_gpaddr);
>>>           /*
>>>            * addr keeps the address of the end of the last
>>> successfully-inserted
>>>            * mapping.
>>>            */
>>> -        apply_p2m_changes(d, REMOVE, start_gpaddr, addr, orig_maddr,
>>> +        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
>>
>> Worth considering a gfn_sub (we already have gfn_add)?
>
> I thought about it. However, the prototypes of gfn_sub and gfn_add would
> be different.
>
> The former would subtract two gfns whilst the latter adds a value to a
> gfn. So I think it would be confusing for the user. Although, I am open
> to any other suggestion.

This is the one problem with these boxed types; we have to reimplement
all the base operations ourselves.

I would caution against introducing too many helpers.  The manual
unbox-and-manipulate approach does work, and only the very common
variations should be abstracted.

~Andrew
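
For context, the boxing in question is generated by a macro; a
simplified sketch of the pattern (modelled on xen/include/xen/typesafe.h,
not a verbatim copy):

    /* TYPE_SAFE(unsigned long, gfn) emits gfn_t plus the box/unbox
     * helpers _gfn() and gfn_x(). */
    #define TYPE_SAFE(_type, _name)                                  \
        typedef struct { _type _name; } _name##_t;                   \
        static inline _name##_t _##_name(_type n)                    \
            { return (_name##_t) { n }; }                            \
        static inline _type _name##_x(_name##_t n)                   \
            { return n._name; }

    TYPE_SAFE(unsigned long, gfn);
    TYPE_SAFE(unsigned long, mfn);

Arithmetic, comparison and the like are not inherited from the
underlying integer, which is the reimplementation cost being weighed
here.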


* Re: [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-07-06 11:10     ` Julien Grall
  2016-07-06 11:17       ` Stefano Stabellini
@ 2016-07-06 15:39       ` Tamas K Lengyel
  1 sibling, 0 replies; 49+ messages in thread
From: Tamas K Lengyel @ 2016-07-06 15:39 UTC (permalink / raw)
  To: Julien Grall; +Cc: Stefano Stabellini, Shannon Zhao, Xen-devel

On Wed, Jul 6, 2016 at 5:10 AM, Julien Grall <julien.grall@arm.com> wrote:
> (CC Tamas)
>
> On 06/07/16 11:43, Stefano Stabellini wrote:
>>
>> On Tue, 28 Jun 2016, Julien Grall wrote:
>>>
>>> The parameter 'access' is used by memaccess to temporarily restrict the
>>> permission. This parameter should not be used for other purposes (such
>>> as permanently restricting the permission).
>>>
>>> The type p2m_mmio_direct will map the region Read-Write and
>>> non-executable. Note that this is already the current behavior with the
>>> combination of the type and the access. So there is no functional
>>> change.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>
>>
>> I could be mistaken, but isn't default_access actually p2m_access_rwx?
>
>
> By default, the access is p2m_access_rwx. However this can be changed by
> memaccess and the new default value is stored in arch->p2m.default_access. I
> have CCed Tamas to confirm that.

Yes, that's correct.

Cheers,
Tamas



Thread overview: 49+ messages
2016-06-28 16:17 [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Julien Grall
2016-06-28 16:17 ` [PATCH v5 01/17] xen: Use typesafe gfn/mfn in guest_physmap_* helpers Julien Grall
2016-06-28 16:17 ` [PATCH v5 02/17] xen: Use typesafe gfn in xenmem_add_to_physmap_one Julien Grall
2016-06-28 16:17 ` [PATCH v5 03/17] xen/arm: Rename grant_table_gfpn into grant_table_gfn and use the typesafe gfn Julien Grall
2016-06-28 16:17 ` [PATCH v5 04/17] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
2016-07-06 10:23   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 05/17] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
2016-06-28 16:43   ` Andrew Cooper
2016-06-28 16:47     ` Juergen Gross
2016-06-28 17:04       ` Julien Grall
2016-06-29  9:39   ` Jan Beulich
2016-06-28 16:17 ` [PATCH v5 06/17] xen: Use a typesafe to define INVALID_MFN Julien Grall
2016-06-28 17:05   ` Andrew Cooper
2016-07-06 10:27   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 07/17] xen: Use a typesafe to define INVALID_GFN Julien Grall
2016-06-28 17:12   ` Andrew Cooper
2016-07-06 10:28   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 08/17] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn Julien Grall
2016-07-06 10:32   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 09/17] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn Julien Grall
2016-07-06 10:35   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 10/17] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access Julien Grall
2016-07-06 10:43   ` Stefano Stabellini
2016-07-06 11:10     ` Julien Grall
2016-07-06 11:17       ` Stefano Stabellini
2016-07-06 11:22         ` Julien Grall
2016-07-06 11:44           ` Julien Grall
2016-07-06 15:39       ` Tamas K Lengyel
2016-06-28 16:17 ` [PATCH v5 11/17] xen/arm: dom0_build: Remove dead code in allocate_memory Julien Grall
2016-07-06 10:46   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 12/17] xen/arm: p2m: Remove unused operation ALLOCATE Julien Grall
2016-07-06 10:49   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 13/17] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region Julien Grall
2016-06-28 17:21   ` Andrew Cooper
2016-06-29 12:20     ` Julien Grall
2016-06-28 16:17 ` [PATCH v5 14/17] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache Julien Grall
2016-07-06 10:54   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 15/17] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
2016-06-28 17:23   ` Andrew Cooper
2016-07-06 10:59   ` Stefano Stabellini
2016-07-06 11:20     ` Julien Grall
2016-06-28 16:17 ` [PATCH v5 16/17] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn Julien Grall
2016-07-06 11:07   ` Stefano Stabellini
2016-06-28 16:17 ` [PATCH v5 17/17] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe Julien Grall
2016-07-06 11:06   ` Stefano Stabellini
2016-07-06 11:56     ` Julien Grall
2016-07-06 12:45       ` Andrew Cooper
2016-06-28 18:32 ` [PATCH v5 00/17] xen/arm: Use the typesafes gfn and mfn Andrew Cooper
2016-06-29  9:24   ` Jan Beulich
