* [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn
@ 2016-07-06 13:00 Julien Grall
  2016-07-06 13:01 ` [PATCH v6 01/14] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
                   ` (13 more replies)
  0 siblings, 14 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:00 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Kevin Tian, sstabellini, Feng Wu, Jun Nakajima,
	George Dunlap, Andrew Cooper, Christoph Egger, Ian Jackson,
	Tim Deegan, Julien Grall, Paul Durrant, Shannon Zhao,
	Jan Beulich, Liu Jinsong, Suravee Suthikulpanit, Boris Ostrovsky,
	Mukesh Rathor

Hello all,

Some of the ARM functions mix gfn with mfn, and even physical addresses with frame numbers.

To avoid further confusion, this patch series adopts the terminology
described in xen/include/xen/mm.h and the associated typesafe types.
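
As a reminder, the TYPE_SAFE wrappers in mm.h boil down to roughly the
following (an illustrative sketch, not the exact macro expansion):

    /* Distinct struct wrappers make accidental gfn/mfn mixing a
     * compile-time error instead of a silent bug. */
    typedef struct { unsigned long mfn; } mfn_t;
    typedef struct { unsigned long gfn; } gfn_t;

    static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }
    static inline gfn_t _gfn(unsigned long g) { return (gfn_t){ g }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }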

This series requires the patch [1] to be applied beforehand. I pushed a
branch with this patch and this series applied on xenbits:

git://xenbits.xen.org/people/julieng/xen-unstable.git branch typesafe-v6

For the full list of changes, see each individual patch.

Yours sincerely,

[1] http://lists.xenproject.org/archives/html/xen-devel/2016-06/msg01744.html

Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Christoph Egger <chegger@amazon.de>
Cc: Feng Wu <feng.wu@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Liu Jinsong <jinsong.liu@alibaba-inc.com>
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Shannon Zhao <shannon.zhao@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

Julien Grall (14):
  xen: Use the typesafe mfn and gfn in map_mmio_regions...
  xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  xen: Use a typesafe to define INVALID_MFN
  xen: Use a typesafe to define INVALID_GFN
  xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and
    mfn
  xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn
  xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  xen/arm: dom0_build: Remove dead code in allocate_memory
  xen/arm: p2m: Remove unused operation ALLOCATE
  xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region...
  xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache ...
  xen/arm: p2m: Introduce helpers to insert and remove mapping
  xen/arm: p2m: Use typesafe gfn for {max,lowest}_mapped_gfn
  xen/arm: p2m: Rework the interface of apply_p2m_changes and use
    typesafe

 xen/arch/arm/domain_build.c             |  70 ++-------
 xen/arch/arm/domctl.c                   |   2 +-
 xen/arch/arm/gic-v2.c                   |   4 +-
 xen/arch/arm/mm.c                       |   4 +-
 xen/arch/arm/p2m.c                      | 263 ++++++++++++--------------------
 xen/arch/arm/platforms/exynos5.c        |   8 +-
 xen/arch/arm/platforms/omap5.c          |  16 +-
 xen/arch/arm/traps.c                    |  21 +--
 xen/arch/arm/vgic-v2.c                  |   4 +-
 xen/arch/x86/cpu/mcheck/mce.c           |   2 +-
 xen/arch/x86/debug.c                    |  72 ++++-----
 xen/arch/x86/domain.c                   |   2 +-
 xen/arch/x86/hvm/emulate.c              |   7 +-
 xen/arch/x86/hvm/hvm.c                  |  12 +-
 xen/arch/x86/hvm/ioreq.c                |   8 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   2 +-
 xen/arch/x86/hvm/viridian.c             |  12 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   8 +-
 xen/arch/x86/mm/altp2m.c                |   2 +-
 xen/arch/x86/mm/guest_walk.c            |   4 +-
 xen/arch/x86/mm/hap/guest_walk.c        |  10 +-
 xen/arch/x86/mm/hap/hap.c               |   4 +-
 xen/arch/x86/mm/hap/nested_ept.c        |   2 +-
 xen/arch/x86/mm/p2m-ept.c               |   6 +-
 xen/arch/x86/mm/p2m-pod.c               |  24 +--
 xen/arch/x86/mm/p2m-pt.c                |  18 +--
 xen/arch/x86/mm/p2m.c                   |  90 +++++------
 xen/arch/x86/mm/paging.c                |  12 +-
 xen/arch/x86/mm/shadow/common.c         |  45 +++---
 xen/arch/x86/mm/shadow/multi.c          |  38 ++---
 xen/arch/x86/mm/shadow/private.h        |   2 +-
 xen/common/domain.c                     |   6 +-
 xen/common/domctl.c                     |   4 +-
 xen/common/grant_table.c                |   6 +-
 xen/drivers/passthrough/amd/iommu_map.c |   2 +-
 xen/drivers/passthrough/vtd/iommu.c     |   4 +-
 xen/drivers/passthrough/x86/iommu.c     |   2 +-
 xen/include/asm-arm/p2m.h               |  32 ++--
 xen/include/asm-x86/guest_pt.h          |   4 +-
 xen/include/asm-x86/p2m.h               |   2 +-
 xen/include/xen/mm.h                    |   4 +-
 xen/include/xen/p2m-common.h            |   8 +-
 42 files changed, 371 insertions(+), 477 deletions(-)

-- 
1.9.1



* [PATCH v6 01/14] xen: Use the typesafe mfn and gfn in map_mmio_regions...
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 02/14] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel
  Cc: sstabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Julien Grall, Jan Beulich

to avoid mixing machine frames with guest frames.
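
As an illustration (hypothetical caller), a swapped pair of frame
numbers that previously compiled silently is now rejected by the
compiler:

    map_mmio_regions(d, _gfn(gfn), nr, _mfn(mfn)); /* OK */
    map_mmio_regions(d, _mfn(mfn), nr, _gfn(gfn)); /* compile error */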

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

    Changes in v6:
        - Add Stefano's acked-by

    Changes in v3:
        - Use mfn_add when it is possible
        - Add Jan's acked-by
---
 xen/arch/arm/domain_build.c      |  4 ++--
 xen/arch/arm/gic-v2.c            |  4 ++--
 xen/arch/arm/p2m.c               | 22 +++++++++++-----------
 xen/arch/arm/platforms/exynos5.c |  8 ++++----
 xen/arch/arm/platforms/omap5.c   | 16 ++++++++--------
 xen/arch/arm/vgic-v2.c           |  4 ++--
 xen/arch/x86/mm/p2m.c            | 18 ++++++++++--------
 xen/common/domctl.c              |  4 ++--
 xen/include/xen/p2m-common.h     |  8 ++++----
 9 files changed, 45 insertions(+), 43 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9035486..49185f0 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1036,9 +1036,9 @@ static int map_range_to_domain(const struct dt_device_node *dev,
     if ( need_mapping )
     {
         res = map_mmio_regions(d,
-                               paddr_to_pfn(addr),
+                               _gfn(paddr_to_pfn(addr)),
                                DIV_ROUND_UP(len, PAGE_SIZE),
-                               paddr_to_pfn(addr));
+                               _mfn(paddr_to_pfn(addr)));
         if ( res < 0 )
         {
             printk(XENLOG_ERR "Unable to map 0x%"PRIx64
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 4e2f4c7..3893ece 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -601,9 +601,9 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
                d->domain_id, v2m_data->addr, v2m_data->size,
                v2m_data->spi_start, v2m_data->nr_spis);
 
-        ret = map_mmio_regions(d, paddr_to_pfn(v2m_data->addr),
+        ret = map_mmio_regions(d, _gfn(paddr_to_pfn(v2m_data->addr)),
                             DIV_ROUND_UP(v2m_data->size, PAGE_SIZE),
-                            paddr_to_pfn(v2m_data->addr));
+                            _mfn(paddr_to_pfn(v2m_data->addr)));
         if ( ret )
         {
             printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0395a40..34563bb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1245,27 +1245,27 @@ int unmap_regions_rw_cache(struct domain *d,
 }
 
 int map_mmio_regions(struct domain *d,
-                     unsigned long start_gfn,
+                     gfn_t start_gfn,
                      unsigned long nr,
-                     unsigned long mfn)
+                     mfn_t mfn)
 {
     return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_DEV, 0, p2m_mmio_direct,
                              d->arch.p2m.default_access);
 }
 
 int unmap_mmio_regions(struct domain *d,
-                       unsigned long start_gfn,
+                       gfn_t start_gfn,
                        unsigned long nr,
-                       unsigned long mfn)
+                       mfn_t mfn)
 {
     return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_DEV, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
@@ -1280,7 +1280,7 @@ int map_dev_mmio_region(struct domain *d,
     if ( !(nr && iomem_access_permitted(d, mfn, mfn + nr - 1)) )
         return 0;
 
-    res = map_mmio_regions(d, start_gfn, nr, mfn);
+    res = map_mmio_regions(d, _gfn(start_gfn), nr, _mfn(mfn));
     if ( res < 0 )
     {
         printk(XENLOG_G_ERR "Unable to map [%#lx - %#lx] in Dom%d\n",
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index bf4964d..c43934f 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -83,12 +83,12 @@ static int exynos5_init_time(void)
 static int exynos5250_specific_mapping(struct domain *d)
 {
     /* Map the chip ID */
-    map_mmio_regions(d, paddr_to_pfn(EXYNOS5_PA_CHIPID), 1,
-                     paddr_to_pfn(EXYNOS5_PA_CHIPID));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(EXYNOS5_PA_CHIPID)), 1,
+                     _mfn(paddr_to_pfn(EXYNOS5_PA_CHIPID)));
 
     /* Map the PWM region */
-    map_mmio_regions(d, paddr_to_pfn(EXYNOS5_PA_TIMER), 2,
-                     paddr_to_pfn(EXYNOS5_PA_TIMER));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(EXYNOS5_PA_TIMER)), 2,
+                     _mfn(paddr_to_pfn(EXYNOS5_PA_TIMER)));
 
     return 0;
 }
diff --git a/xen/arch/arm/platforms/omap5.c b/xen/arch/arm/platforms/omap5.c
index a49ba62..539588e 100644
--- a/xen/arch/arm/platforms/omap5.c
+++ b/xen/arch/arm/platforms/omap5.c
@@ -102,20 +102,20 @@ static int omap5_init_time(void)
 static int omap5_specific_mapping(struct domain *d)
 {
     /* Map the PRM module */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_PRM_BASE), 2,
-                     paddr_to_pfn(OMAP5_PRM_BASE));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_PRM_BASE)), 2,
+                     _mfn(paddr_to_pfn(OMAP5_PRM_BASE)));
 
     /* Map the PRM_MPU */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_PRCM_MPU_BASE), 1,
-                     paddr_to_pfn(OMAP5_PRCM_MPU_BASE));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_PRCM_MPU_BASE)), 1,
+                     _mfn(paddr_to_pfn(OMAP5_PRCM_MPU_BASE)));
 
     /* Map the Wakeup Gen */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_WKUPGEN_BASE), 1,
-                     paddr_to_pfn(OMAP5_WKUPGEN_BASE));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_WKUPGEN_BASE)), 1,
+                     _mfn(paddr_to_pfn(OMAP5_WKUPGEN_BASE)));
 
     /* Map the on-chip SRAM */
-    map_mmio_regions(d, paddr_to_pfn(OMAP5_SRAM_PA), 32,
-                     paddr_to_pfn(OMAP5_SRAM_PA));
+    map_mmio_regions(d, _gfn(paddr_to_pfn(OMAP5_SRAM_PA)), 32,
+                     _mfn(paddr_to_pfn(OMAP5_SRAM_PA)));
 
     return 0;
 }
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 9adb4a9..cbe61cf 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -688,8 +688,8 @@ static int vgic_v2_domain_init(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, paddr_to_pfn(cbase), csize / PAGE_SIZE,
-                           paddr_to_pfn(vbase));
+    ret = map_mmio_regions(d, _gfn(paddr_to_pfn(cbase)), csize / PAGE_SIZE,
+                           _mfn(paddr_to_pfn(vbase)));
     if ( ret )
         return ret;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 16733a4..6258a5b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2214,9 +2214,9 @@ static unsigned int mmio_order(const struct domain *d,
 #define MAP_MMIO_MAX_ITER 64 /* pretty arbitrary */
 
 int map_mmio_regions(struct domain *d,
-                     unsigned long start_gfn,
+                     gfn_t start_gfn,
                      unsigned long nr,
-                     unsigned long mfn)
+                     mfn_t mfn)
 {
     int ret = 0;
     unsigned long i;
@@ -2229,10 +2229,11 @@ int map_mmio_regions(struct domain *d,
           i += 1UL << order, ++iter )
     {
         /* OR'ing gfn and mfn values will return an order suitable to both. */
-        for ( order = mmio_order(d, (start_gfn + i) | (mfn + i), nr - i); ;
+        for ( order = mmio_order(d, (gfn_x(start_gfn) + i) | (mfn_x(mfn) + i), nr - i); ;
               order = ret - 1 )
         {
-            ret = set_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i), order,
+            ret = set_mmio_p2m_entry(d, gfn_x(start_gfn) + i,
+                                     mfn_add(mfn, i), order,
                                      p2m_get_hostp2m(d)->default_access);
             if ( ret <= 0 )
                 break;
@@ -2246,9 +2247,9 @@ int map_mmio_regions(struct domain *d,
 }
 
 int unmap_mmio_regions(struct domain *d,
-                       unsigned long start_gfn,
+                       gfn_t start_gfn,
                        unsigned long nr,
-                       unsigned long mfn)
+                       mfn_t mfn)
 {
     int ret = 0;
     unsigned long i;
@@ -2261,10 +2262,11 @@ int unmap_mmio_regions(struct domain *d,
           i += 1UL << order, ++iter )
     {
         /* OR'ing gfn and mfn values will return an order suitable to both. */
-        for ( order = mmio_order(d, (start_gfn + i) | (mfn + i), nr - i); ;
+        for ( order = mmio_order(d, (gfn_x(start_gfn) + i) | (mfn_x(mfn) + i), nr - i); ;
               order = ret - 1 )
         {
-            ret = clear_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i), order);
+            ret = clear_mmio_p2m_entry(d, gfn_x(start_gfn) + i,
+                                       mfn_add(mfn, i), order);
             if ( ret <= 0 )
                 break;
             ASSERT(ret <= order);
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index e43904e..b784e6c 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1074,7 +1074,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                    "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
-            ret = map_mmio_regions(d, gfn, nr_mfns, mfn);
+            ret = map_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
             if ( ret < 0 )
                 printk(XENLOG_G_WARNING
                        "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx ret:%ld\n",
@@ -1086,7 +1086,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                    "memory_map:remove: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
-            ret = unmap_mmio_regions(d, gfn, nr_mfns, mfn);
+            ret = unmap_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
             if ( ret < 0 && is_hardware_domain(current->domain) )
                 printk(XENLOG_ERR
                        "memory_map: error %ld removing dom%d access to [%lx,%lx]\n",
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 6374a5b..b4f9077 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -37,13 +37,13 @@ typedef enum {
  * the guest physical address space to map, starting from the machine
  * frame number mfn. */
 int map_mmio_regions(struct domain *d,
-                     unsigned long start_gfn,
+                     gfn_t start_gfn,
                      unsigned long nr,
-                     unsigned long mfn);
+                     mfn_t mfn);
 int unmap_mmio_regions(struct domain *d,
-                       unsigned long start_gfn,
+                       gfn_t start_gfn,
                        unsigned long nr,
-                       unsigned long mfn);
+                       mfn_t mfn);
 
 /*
  * Set access type for a region of gfns.
-- 
1.9.1



* [PATCH v6 02/14] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
  2016-07-06 13:01 ` [PATCH v6 01/14] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN Julien Grall
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Julien Grall, sstabellini, Suravee Suthikulpanit,
	Jan Beulich

A variable containing a guest frame should be compared to INVALID_GFN
and not INVALID_MFN.
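
Both constants happen to share the same raw value (~0UL) at this
point, so the comparison works by accident today; using the correct
constant lets the typesafe conversion later in this series catch such
mismatches at build time. An illustrative sketch (process() is a
hypothetical helper):

    unsigned long gfn = mfn_to_gmfn(d, mfn);

    if ( gfn != INVALID_GFN )   /* not INVALID_MFN */
        process(gfn);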

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

    Changes in v6:
        - Fix typo in the commit message
        - Add Andrew's and Jan's reviewed-by

    Changes in v5:
        - Patch added
---
 xen/drivers/passthrough/amd/iommu_map.c | 2 +-
 xen/drivers/passthrough/x86/iommu.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 1b914ba..c758459 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct domain_iommu *hd = dom_iommu(d);
 
-    if ( gfn == INVALID_MFN )
+    if ( gfn == INVALID_GFN )
         return -EADDRNOTAVAIL;
     ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index a18a608..cd435d7 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
 
-            if ( gfn != INVALID_MFN )
+            if ( gfn != INVALID_GFN )
             {
                 ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
                 BUG_ON(SHARED_M2P(gfn));
-- 
1.9.1



* [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
  2016-07-06 13:01 ` [PATCH v6 01/14] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
  2016-07-06 13:01 ` [PATCH v6 02/14] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:04   ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN Julien Grall
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, sstabellini, Jun Nakajima, George Dunlap,
	Andrew Cooper, Christoph Egger, Tim Deegan, Julien Grall,
	Paul Durrant, Jan Beulich, Liu Jinsong, Mukesh Rathor

Also take the opportunity to convert arch/x86/debug.c to the typesafe
mfn, and to use the proper printf formats for MFN/GFN wherever the
surrounding code is modified.
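
For reference, PRI_mfn/PRI_gfn are the printk() format fragments for
frame numbers and pair with the mfn_x()/gfn_x() accessors, while
mfn_eq() replaces open-coded comparisons of the raw values. An
illustrative sketch:

    if ( !mfn_eq(mfn, INVALID_MFN) )
        printk("gfn %#"PRI_gfn" -> mfn %#"PRI_mfn"\n",
               gfn_x(gfn), mfn_x(mfn));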

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
Cc: Christoph Egger <chegger@amazon.de>
Cc: Liu Jinsong <jinsong.liu@alibaba-inc.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>

    Changes in v6:
        - Add Stefano's acked-by for ARM bits
        - Use PRI_mfn and PRI_gfn
        - Remove brackets where they are not necessary
        - Use mfn_add when possible
        - Add Andrew's reviewed-by

    Changes in v5:
        - Patch added
---
 xen/arch/arm/p2m.c              |  4 +--
 xen/arch/x86/cpu/mcheck/mce.c   |  2 +-
 xen/arch/x86/debug.c            | 58 +++++++++++++++++++++--------------------
 xen/arch/x86/hvm/hvm.c          |  6 ++---
 xen/arch/x86/hvm/viridian.c     | 12 ++++-----
 xen/arch/x86/hvm/vmx/vmx.c      |  2 +-
 xen/arch/x86/mm/guest_walk.c    |  4 +--
 xen/arch/x86/mm/hap/hap.c       |  4 +--
 xen/arch/x86/mm/p2m-ept.c       |  6 ++---
 xen/arch/x86/mm/p2m-pod.c       | 18 ++++++-------
 xen/arch/x86/mm/p2m-pt.c        | 18 ++++++-------
 xen/arch/x86/mm/p2m.c           | 54 +++++++++++++++++++-------------------
 xen/arch/x86/mm/paging.c        | 12 ++++-----
 xen/arch/x86/mm/shadow/common.c | 43 +++++++++++++++---------------
 xen/arch/x86/mm/shadow/multi.c  | 36 ++++++++++++-------------
 xen/common/domain.c             |  6 ++---
 xen/common/grant_table.c        |  6 ++---
 xen/include/xen/mm.h            |  2 +-
 18 files changed, 147 insertions(+), 146 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 34563bb..d690602 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1461,7 +1461,7 @@ int relinquish_p2m_mapping(struct domain *d)
     return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
-                              pfn_to_paddr(INVALID_MFN),
+                              pfn_to_paddr(mfn_x(INVALID_MFN)),
                               MATTR_MEM, 0, p2m_invalid,
                               d->arch.p2m.default_access);
 }
@@ -1476,7 +1476,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
     return apply_p2m_changes(d, CACHEFLUSH,
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
-                             pfn_to_paddr(INVALID_MFN),
+                             pfn_to_paddr(mfn_x(INVALID_MFN)),
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index edcbe48..2695b0c 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1455,7 +1455,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
                 gfn = PFN_DOWN(gaddr);
                 mfn = mfn_x(get_gfn(d, gfn, &t));
 
-                if ( mfn == INVALID_MFN )
+                if ( mfn == mfn_x(INVALID_MFN) )
                 {
                     put_gfn(d, gfn);
                     put_domain(d);
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 58cae22..9213ea7 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -43,11 +43,11 @@ typedef unsigned long dbgva_t;
 typedef unsigned char dbgbyte_t;
 
 /* Returns: mfn for the given (hvm guest) vaddr */
-static unsigned long 
+static mfn_t
 dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
                 unsigned long *gfn)
 {
-    unsigned long mfn;
+    mfn_t mfn;
     uint32_t pfec = PFEC_page_present;
     p2m_type_t gfntype;
 
@@ -60,16 +60,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
         return INVALID_MFN;
     }
 
-    mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
+    mfn = get_gfn(dp, *gfn, &gfntype);
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
         mfn = INVALID_MFN;
     }
     else
-        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+        DBGP2("X: vaddr:%lx domid:%d mfn:%#"PRI_mfn"\n",
+              vaddr, dp->domain_id, mfn_x(mfn));
 
-    if ( mfn == INVALID_MFN )
+    if ( mfn_eq(mfn, INVALID_MFN) )
     {
         put_gfn(dp, *gfn);
         *gfn = INVALID_GFN;
@@ -91,7 +92,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
  *       mode.
  * Returns: mfn for the given (pv guest) vaddr 
  */
-static unsigned long 
+static mfn_t
 dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
 {
     l4_pgentry_t l4e, *l4t;
@@ -99,31 +100,31 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2_pgentry_t l2e, *l2t;
     l1_pgentry_t l1e, *l1t;
     unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
-    unsigned long mfn = cr3 >> PAGE_SHIFT;
+    mfn_t mfn = _mfn(cr3 >> PAGE_SHIFT);
 
     DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
           cr3, pgd3val);
 
     if ( pgd3val == 0 )
     {
-        l4t = map_domain_page(_mfn(mfn));
+        l4t = map_domain_page(mfn);
         l4e = l4t[l4_table_offset(vaddr)];
         unmap_domain_page(l4t);
-        mfn = l4e_get_pfn(l4e);
-        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
-              l4_table_offset(vaddr), l4e, mfn);
+        mfn = _mfn(l4e_get_pfn(l4e));
+        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%#"PRI_mfn"\n", l4t,
+              l4_table_offset(vaddr), l4e, mfn_x(mfn));
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
             DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
 
-        l3t = map_domain_page(_mfn(mfn));
+        l3t = map_domain_page(mfn);
         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
-        mfn = l3e_get_pfn(l3e);
-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+        mfn = _mfn(l3e_get_pfn(l3e));
+        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%#"PRI_mfn"\n", l3t,
+              l3_table_offset(vaddr), l3e, mfn_x(mfn));
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -132,26 +133,26 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         }
     }
 
-    l2t = map_domain_page(_mfn(mfn));
+    l2t = map_domain_page(mfn);
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
-    mfn = l2e_get_pfn(l2e);
-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+    mfn = _mfn(l2e_get_pfn(l2e));
+    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%#"PRI_mfn"\n",
+          l2t, l2_table_offset(vaddr), l2e, mfn_x(mfn));
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
         DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
         return INVALID_MFN;
     }
-    l1t = map_domain_page(_mfn(mfn));
+    l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
-    mfn = l1e_get_pfn(l1e);
-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+    mfn = _mfn(l1e_get_pfn(l1e));
+    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%#"PRI_mfn"\n", l1t, l1_table_offset(vaddr),
+          l1e, mfn_x(mfn));
 
-    return mfn_valid(mfn) ? mfn : INVALID_MFN;
+    return mfn_valid(mfn_x(mfn)) ? mfn : INVALID_MFN;
 }
 
 /* Returns: number of bytes remaining to be copied */
@@ -163,23 +164,24 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
     {
         char *va;
         unsigned long addr = (unsigned long)gaddr;
-        unsigned long mfn, gfn = INVALID_GFN, pagecnt;
+        mfn_t mfn;
+        unsigned long gfn = INVALID_GFN, pagecnt;
 
         pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
 
         mfn = (has_hvm_container_domain(dp)
                ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
                : dbg_pv_va2mfn(addr, dp, pgd3));
-        if ( mfn == INVALID_MFN ) 
+        if ( mfn_eq(mfn, INVALID_MFN) )
             break;
 
-        va = map_domain_page(_mfn(mfn));
+        va = map_domain_page(mfn);
         va = va + (addr & (PAGE_SIZE-1));
 
         if ( toaddr )
         {
             copy_from_user(va, buf, pagecnt);    /* va = buf */
-            paging_mark_dirty(dp, mfn);
+            paging_mark_dirty(dp, mfn_x(mfn));
         }
         else
         {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c89ab6e..f3faf2e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1796,7 +1796,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
         p2m = hostp2m;
 
     /* Check access permissions first, then handle faults */
-    if ( mfn_x(mfn) != INVALID_MFN )
+    if ( !mfn_eq(mfn, INVALID_MFN) )
     {
         bool_t violation;
 
@@ -5299,8 +5299,8 @@ static int do_altp2m_op(
             rc = -EINVAL;
 
         if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
-             (mfn_x(get_gfn_query_unlocked(curr->domain,
-                    a.u.enable_notify.gfn, &p2mt)) == INVALID_MFN) )
+             mfn_eq(get_gfn_query_unlocked(curr->domain,
+                    a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
             return -EINVAL;
 
         vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 8253fd0..1734b7e 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -195,8 +195,8 @@ static void enable_hypercall_page(struct domain *d)
     {
         if ( page )
             put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
-                 page ? page_to_mfn(page) : INVALID_MFN);
+        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
+                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
         return;
     }
 
@@ -268,8 +268,8 @@ static void initialize_apic_assist(struct vcpu *v)
     return;
 
  fail:
-    gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
-             page ? page_to_mfn(page) : INVALID_MFN);
+    gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", gmfn,
+             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
 }
 
 static void teardown_apic_assist(struct vcpu *v)
@@ -348,8 +348,8 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
     {
         if ( page )
             put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
-                 page ? page_to_mfn(page) : INVALID_MFN);
+        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
+                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
         return;
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index df19579..a061420 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2025,7 +2025,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
 
             mfn = get_gfn_query_unlocked(d, gfn_x(vcpu_altp2m(v).veinfo_gfn), &t);
 
-            if ( mfn_x(mfn) != INVALID_MFN )
+            if ( !mfn_eq(mfn, INVALID_MFN) )
                 __vmwrite(VIRT_EXCEPTION_INFO, mfn_x(mfn) << PAGE_SHIFT);
             else
                 v->arch.hvm_vmx.secondary_exec_control &=
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index e850502..868e909 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -281,7 +281,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
         start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
                      ((va >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
         gw->l1e = guest_l1e_from_gfn(start, flags);
-        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
+        gw->l2mfn = gw->l1mfn = INVALID_MFN;
         goto set_ad;
     }
 
@@ -356,7 +356,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
         start = _gfn((gfn_x(start) & ~GUEST_L2_GFN_MASK) +
                      guest_l1_table_offset(va));
         gw->l1e = guest_l1e_from_gfn(start, flags);
-        gw->l1mfn = _mfn(INVALID_MFN);
+        gw->l1mfn = INVALID_MFN;
     } 
     else 
     {
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9c2cd49..3218fa2 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -430,7 +430,7 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
  oom:
     HAP_ERROR("out of memory building monitor pagetable\n");
     domain_crash(d);
-    return _mfn(INVALID_MFN);
+    return INVALID_MFN;
 }
 
 static void hap_destroy_monitor_table(struct vcpu* v, mfn_t mmfn)
@@ -509,7 +509,7 @@ int hap_enable(struct domain *d, u32 mode)
         }
 
         for ( i = 0; i < MAX_EPTP; i++ )
-            d->arch.altp2m_eptp[i] = INVALID_MFN;
+            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 7166c71..6d03736 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -50,7 +50,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
 {
     int rc;
-    unsigned long oldmfn = INVALID_MFN;
+    unsigned long oldmfn = mfn_x(INVALID_MFN);
     bool_t check_foreign = (new.mfn != entryptr->mfn ||
                             new.sa_p2mt != entryptr->sa_p2mt);
 
@@ -91,7 +91,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
 
     write_atomic(&entryptr->epte, new.epte);
 
-    if ( unlikely(oldmfn != INVALID_MFN) )
+    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
         put_page(mfn_to_page(oldmfn));
 
     rc = 0;
@@ -887,7 +887,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
     int i;
     int ret = 0;
     bool_t recalc = 0;
-    mfn_t mfn = _mfn(INVALID_MFN);
+    mfn_t mfn = INVALID_MFN;
     struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index b7ab169..f384589 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -559,7 +559,7 @@ p2m_pod_decrease_reservation(struct domain *d,
     {
         /* All PoD: Mark the whole region invalid and tell caller
          * we're done. */
-        p2m_set_entry(p2m, gpfn, _mfn(INVALID_MFN), order, p2m_invalid,
+        p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
         p2m->pod.entry_count-=(1<<order);
         BUG_ON(p2m->pod.entry_count < 0);
@@ -602,7 +602,7 @@ p2m_pod_decrease_reservation(struct domain *d,
         n = 1UL << cur_order;
         if ( t == p2m_populate_on_demand )
         {
-            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
+            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m->pod.entry_count -= n;
             BUG_ON(p2m->pod.entry_count < 0);
@@ -624,7 +624,7 @@ p2m_pod_decrease_reservation(struct domain *d,
 
             page = mfn_to_page(mfn);
 
-            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
+            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m_tlb_flush_sync(p2m);
             for ( j = 0; j < n; ++j )
@@ -671,7 +671,7 @@ void p2m_pod_dump_data(struct domain *d)
 static int
 p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 {
-    mfn_t mfn, mfn0 = _mfn(INVALID_MFN);
+    mfn_t mfn, mfn0 = INVALID_MFN;
     p2m_type_t type, type0 = 0;
     unsigned long * map = NULL;
     int ret=0, reset = 0;
@@ -754,7 +754,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     }
 
     /* Try to remove the page, restoring old mapping if it fails. */
-    p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_2M,
+    p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_2M,
                   p2m_populate_on_demand, p2m->default_access);
     p2m_tlb_flush_sync(p2m);
 
@@ -871,7 +871,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         }
 
         /* Try to remove the page, restoring old mapping if it fails. */
-        p2m_set_entry(p2m, gfns[i], _mfn(INVALID_MFN), PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
 
         /* See if the page was successfully unmapped.  (Allow one refcount
@@ -1073,7 +1073,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
          * NOTE: In a fine-grained p2m locking scenario this operation
          * may need to promote its locking from gfn->1g superpage
          */
-        p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
+        p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
                       p2m_populate_on_demand, p2m->default_access);
         return 0;
     }
@@ -1157,7 +1157,7 @@ remap_and_retry:
      * need promoting the gfn lock from gfn->2M superpage */
     gfn_aligned = (gfn>>order)<<order;
     for(i=0; i<(1<<order); i++)
-        p2m_set_entry(p2m, gfn_aligned + i, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
     if ( tb_init_done )
     {
@@ -1215,7 +1215,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
     }
 
     /* Now, actually do the two-way mapping */
-    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order,
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
                        p2m_populate_on_demand, p2m->default_access);
     if ( rc == 0 )
     {
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 4980934..2b6e89e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -511,7 +511,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * the intermediate one might be).
      */
     unsigned int flags, iommu_old_flags = 0;
-    unsigned long old_mfn = INVALID_MFN;
+    unsigned long old_mfn = mfn_x(INVALID_MFN);
 
     ASSERT(sve != 0);
 
@@ -764,7 +764,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
                      p2m->max_mapped_pfn )
                     break;
         }
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
 
     mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
@@ -777,7 +777,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
         if ( (l4e_get_flags(*l4e) & _PAGE_PRESENT) == 0 )
         {
             unmap_domain_page(l4e);
-            return _mfn(INVALID_MFN);
+            return INVALID_MFN;
         }
         mfn = _mfn(l4e_get_pfn(*l4e));
         recalc = needs_recalc(l4, *l4e);
@@ -805,7 +805,7 @@ pod_retry_l3:
                     *t = p2m_populate_on_demand;
             }
             unmap_domain_page(l3e);
-            return _mfn(INVALID_MFN);
+            return INVALID_MFN;
         }
         if ( flags & _PAGE_PSE )
         {
@@ -817,7 +817,7 @@ pod_retry_l3:
             unmap_domain_page(l3e);
 
             ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
-            return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
+            return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
         }
 
         mfn = _mfn(l3e_get_pfn(*l3e));
@@ -846,7 +846,7 @@ pod_retry_l2:
         }
     
         unmap_domain_page(l2e);
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
     if ( flags & _PAGE_PSE )
     {
@@ -856,7 +856,7 @@ pod_retry_l2:
         unmap_domain_page(l2e);
         
         ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
-        return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
+        return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
     }
 
     mfn = _mfn(l2e_get_pfn(*l2e));
@@ -885,14 +885,14 @@ pod_retry_l1:
         }
     
         unmap_domain_page(l1e);
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
     mfn = _mfn(l1e_get_pfn(*l1e));
     *t = recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
     unmap_domain_page(l1e);
 
     ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
-    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : _mfn(INVALID_MFN);
+    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : INVALID_MFN;
 }
 
 static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6258a5b..b93c8a2 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -388,7 +388,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
     if (unlikely((p2m_is_broken(*t))))
     {
         /* Return invalid_mfn to avoid caller's access */
-        mfn = _mfn(INVALID_MFN);
+        mfn = INVALID_MFN;
         if ( q & P2M_ALLOC )
             domain_crash(p2m->domain);
     }
@@ -493,8 +493,8 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
             rc = set_rc;
 
         gfn += 1ul << order;
-        if ( mfn_x(mfn) != INVALID_MFN )
-            mfn = _mfn(mfn_x(mfn) + (1ul << order));
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+            mfn = mfn_add(mfn, 1ul << order);
         todo -= 1ul << order;
     }
 
@@ -580,7 +580,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     /* Initialise physmap tables for slot zero. Other code assumes this. */
     p2m->defer_nested_flush = 1;
-    rc = p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+    rc = p2m_set_entry(p2m, 0, INVALID_MFN, PAGE_ORDER_4K,
                        p2m_invalid, p2m->default_access);
     p2m->defer_nested_flush = 0;
     p2m_unlock(p2m);
@@ -670,7 +670,7 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
             ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
         }
     }
-    return p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid,
+    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
                          p2m->default_access);
 }
 
@@ -840,7 +840,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     {
         gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
                  gfn_x(gfn), mfn_x(mfn));
-        rc = p2m_set_entry(p2m, gfn_x(gfn), _mfn(INVALID_MFN), page_order,
+        rc = p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, page_order,
                            p2m_invalid, p2m->default_access);
         if ( rc == 0 )
         {
@@ -1107,7 +1107,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     }
 
     /* Do not use mfn_valid() here as it will usually fail for MMIO pages. */
-    if ( (INVALID_MFN == mfn_x(actual_mfn)) || (t != p2m_mmio_direct) )
+    if ( mfn_eq(actual_mfn, INVALID_MFN) || (t != p2m_mmio_direct) )
     {
         gdprintk(XENLOG_ERR,
                  "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t);
@@ -1117,7 +1117,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
         gdprintk(XENLOG_WARNING,
                  "no mapping between mfn %08lx and gfn %08lx\n",
                  mfn_x(mfn), gfn);
-    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order, p2m_invalid,
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid,
                        p2m->default_access);
 
  out:
@@ -1146,7 +1146,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
     mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
     if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
     {
-        ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+        ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
                             p2m_invalid, p2m->default_access);
         gfn_unlock(p2m, gfn, 0);
     }
@@ -1316,7 +1316,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
         put_page(page);
 
     /* Remove mapping from p2m table */
-    ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+    ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
                         p2m_ram_paged, a);
 
     /* Clear content before returning the page to Xen */
@@ -1844,7 +1844,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= MAX_ALTP2M ||
-             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
+             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
             return -EINVAL;
 
         ap2m = d->arch.altp2m_p2m[altp2m_idx];
@@ -1942,7 +1942,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
     mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
     gfn_unlock(p2m, gfn, 0);
 
-    if ( mfn_x(mfn) == INVALID_MFN )
+    if ( mfn_eq(mfn, INVALID_MFN) )
         return -ESRCH;
     
     if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
@@ -2288,7 +2288,7 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
         p2m = d->arch.altp2m_p2m[i];
@@ -2315,7 +2315,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
     {
         if ( idx != vcpu_altp2m(v).p2midx )
         {
@@ -2359,14 +2359,14 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
                               0, &page_order);
     __put_gfn(*ap2m, gfn_x(gfn));
 
-    if ( mfn_x(mfn) != INVALID_MFN )
+    if ( !mfn_eq(mfn, INVALID_MFN) )
         return 0;
 
     mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
                               P2M_ALLOC | P2M_UNSHARE, &page_order);
     __put_gfn(hp2m, gfn_x(gfn));
 
-    if ( mfn_x(mfn) == INVALID_MFN )
+    if ( mfn_eq(mfn, INVALID_MFN) )
         return 0;
 
     p2m_lock(*ap2m);
@@ -2404,7 +2404,7 @@ void p2m_flush_altp2m(struct domain *d)
         /* Uninit and reinit ept to force TLB shootdown */
         ept_p2m_uninit(d->arch.altp2m_p2m[i]);
         ept_p2m_init(d->arch.altp2m_p2m[i]);
-        d->arch.altp2m_eptp[i] = INVALID_MFN;
+        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
     }
 
     altp2m_list_unlock(d);
@@ -2431,7 +2431,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] == INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
     {
         p2m_init_altp2m_helper(d, idx);
         rc = 0;
@@ -2450,7 +2450,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] != INVALID_MFN )
+        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             continue;
 
         p2m_init_altp2m_helper(d, i);
@@ -2476,7 +2476,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
     {
         p2m = d->arch.altp2m_p2m[idx];
 
@@ -2486,7 +2486,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
             /* Uninit and reinit ept to force TLB shootdown */
             ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
             ept_p2m_init(d->arch.altp2m_p2m[idx]);
-            d->arch.altp2m_eptp[idx] = INVALID_MFN;
+            d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
             rc = 0;
         }
     }
@@ -2510,7 +2510,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
     {
         for_each_vcpu( d, v )
             if ( idx != vcpu_altp2m(v).p2midx )
@@ -2541,7 +2541,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     unsigned int page_order;
     int rc = -EINVAL;
 
-    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == INVALID_MFN )
+    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
         return rc;
 
     hp2m = p2m_get_hostp2m(d);
@@ -2636,14 +2636,14 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
         p2m = d->arch.altp2m_p2m[i];
         m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
 
         /* Check for a dropped page that may impact this altp2m */
-        if ( mfn_x(mfn) == INVALID_MFN &&
+        if ( mfn_eq(mfn, INVALID_MFN) &&
              gfn_x(gfn) >= p2m->min_remapped_gfn &&
              gfn_x(gfn) <= p2m->max_remapped_gfn )
         {
@@ -2660,7 +2660,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
-                         d->arch.altp2m_eptp[i] == INVALID_MFN )
+                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
                         continue;
 
                     p2m = d->arch.altp2m_p2m[i];
@@ -2672,7 +2672,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 goto out;
             }
         }
-        else if ( mfn_x(m) != INVALID_MFN )
+        else if ( !mfn_eq(m, INVALID_MFN) )
             p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, p2mt, p2ma);
 
         __put_gfn(p2m, gfn_x(gfn));
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 8219bb6..107fc8c 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -67,7 +67,7 @@ static mfn_t paging_new_log_dirty_page(struct domain *d)
     if ( unlikely(page == NULL) )
     {
         d->arch.paging.log_dirty.failed_allocs++;
-        return _mfn(INVALID_MFN);
+        return INVALID_MFN;
     }
 
     d->arch.paging.log_dirty.allocs++;
@@ -95,7 +95,7 @@ static mfn_t paging_new_log_dirty_node(struct domain *d)
         int i;
         mfn_t *node = map_domain_page(mfn);
         for ( i = 0; i < LOGDIRTY_NODE_ENTRIES; i++ )
-            node[i] = _mfn(INVALID_MFN);
+            node[i] = INVALID_MFN;
         unmap_domain_page(node);
     }
     return mfn;
@@ -167,7 +167,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
 
             unmap_domain_page(l2);
             paging_free_log_dirty_page(d, l3[i3]);
-            l3[i3] = _mfn(INVALID_MFN);
+            l3[i3] = INVALID_MFN;
 
             if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
             {
@@ -182,7 +182,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
         if ( rc )
             break;
         paging_free_log_dirty_page(d, l4[i4]);
-        l4[i4] = _mfn(INVALID_MFN);
+        l4[i4] = INVALID_MFN;
 
         if ( i4 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
         {
@@ -198,7 +198,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
     if ( !rc )
     {
         paging_free_log_dirty_page(d, d->arch.paging.log_dirty.top);
-        d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
+        d->arch.paging.log_dirty.top = INVALID_MFN;
 
         ASSERT(d->arch.paging.log_dirty.allocs == 0);
         d->arch.paging.log_dirty.failed_allocs = 0;
@@ -660,7 +660,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags)
     /* This must be initialized separately from the rest of the
      * log-dirty init code as that can be called more than once and we
      * don't want to leak any active log-dirty bitmaps */
-    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
+    d->arch.paging.log_dirty.top = INVALID_MFN;
 
     /*
      * Shadow pagetables are the default, but we will use
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 226e32d..1c0b6cd 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -88,10 +88,10 @@ void shadow_vcpu_init(struct vcpu *v)
 
     for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
     {
-        v->arch.paging.shadow.oos[i] = _mfn(INVALID_MFN);
-        v->arch.paging.shadow.oos_snapshot[i] = _mfn(INVALID_MFN);
+        v->arch.paging.shadow.oos[i] = INVALID_MFN;
+        v->arch.paging.shadow.oos_snapshot[i] = INVALID_MFN;
         for ( j = 0; j < SHADOW_OOS_FIXUPS; j++ )
-            v->arch.paging.shadow.oos_fixup[i].smfn[j] = _mfn(INVALID_MFN);
+            v->arch.paging.shadow.oos_fixup[i].smfn[j] = INVALID_MFN;
     }
 #endif
 
@@ -593,12 +593,12 @@ static inline int oos_fixup_flush_gmfn(struct vcpu *v, mfn_t gmfn,
     int i;
     for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
     {
-        if ( mfn_x(fixup->smfn[i]) != INVALID_MFN )
+        if ( !mfn_eq(fixup->smfn[i], INVALID_MFN) )
         {
             sh_remove_write_access_from_sl1p(d, gmfn,
                                              fixup->smfn[i],
                                              fixup->off[i]);
-            fixup->smfn[i] = _mfn(INVALID_MFN);
+            fixup->smfn[i] = INVALID_MFN;
         }
     }
 
@@ -636,7 +636,7 @@ void oos_fixup_add(struct domain *d, mfn_t gmfn,
 
             next = oos_fixup[idx].next;
 
-            if ( mfn_x(oos_fixup[idx].smfn[next]) != INVALID_MFN )
+            if ( !mfn_eq(oos_fixup[idx].smfn[next], INVALID_MFN) )
             {
                 TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_EVICT);
 
@@ -757,7 +757,7 @@ static void oos_hash_add(struct vcpu *v, mfn_t gmfn)
     struct oos_fixup fixup = { .next = 0 };
 
     for (i = 0; i < SHADOW_OOS_FIXUPS; i++ )
-        fixup.smfn[i] = _mfn(INVALID_MFN);
+        fixup.smfn[i] = INVALID_MFN;
 
     idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
     oidx = idx;
@@ -807,7 +807,7 @@ static void oos_hash_remove(struct domain *d, mfn_t gmfn)
             idx = (idx + 1) % SHADOW_OOS_PAGES;
         if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
         {
-            oos[idx] = _mfn(INVALID_MFN);
+            oos[idx] = INVALID_MFN;
             return;
         }
     }
@@ -838,7 +838,6 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
 
     SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
     BUG();
-    return _mfn(INVALID_MFN);
 }
 
 /* Pull a single guest page back into sync */
@@ -862,7 +861,7 @@ void sh_resync(struct domain *d, mfn_t gmfn)
         if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
         {
             _sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
-            oos[idx] = _mfn(INVALID_MFN);
+            oos[idx] = INVALID_MFN;
             return;
         }
     }
@@ -914,7 +913,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
         {
             /* Write-protect and sync contents */
             _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
-            oos[idx] = _mfn(INVALID_MFN);
+            oos[idx] = INVALID_MFN;
         }
 
  resync_others:
@@ -948,7 +947,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
             {
                 /* Write-protect and sync contents */
                 _sh_resync(other, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
-                oos[idx] = _mfn(INVALID_MFN);
+                oos[idx] = INVALID_MFN;
             }
         }
     }
@@ -1784,7 +1783,7 @@ void *sh_emulate_map_dest(struct vcpu *v, unsigned long vaddr,
     if ( likely(((vaddr + bytes - 1) & PAGE_MASK) == (vaddr & PAGE_MASK)) )
     {
         /* Whole write fits on a single page. */
-        sh_ctxt->mfn[1] = _mfn(INVALID_MFN);
+        sh_ctxt->mfn[1] = INVALID_MFN;
         map = map_domain_page(sh_ctxt->mfn[0]) + (vaddr & ~PAGE_MASK);
     }
     else if ( !is_hvm_domain(d) )
@@ -2086,7 +2085,7 @@ mfn_t shadow_hash_lookup(struct domain *d, unsigned long n, unsigned int t)
     }
 
     perfc_incr(shadow_hash_lookup_miss);
-    return _mfn(INVALID_MFN);
+    return INVALID_MFN;
 }
 
 void shadow_hash_insert(struct domain *d, unsigned long n, unsigned int t,
@@ -2910,7 +2909,7 @@ void sh_reset_l3_up_pointers(struct vcpu *v)
     };
     static const unsigned int callback_mask = SHF_L3_64;
 
-    hash_vcpu_foreach(v, callback_mask, callbacks, _mfn(INVALID_MFN));
+    hash_vcpu_foreach(v, callback_mask, callbacks, INVALID_MFN);
 }
 
 
@@ -2940,7 +2939,7 @@ static void sh_update_paging_modes(struct vcpu *v)
 #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB) */
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
-    if ( mfn_x(v->arch.paging.shadow.oos_snapshot[0]) == INVALID_MFN )
+    if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
@@ -3284,7 +3283,7 @@ void shadow_teardown(struct domain *d, int *preempted)
                 if ( mfn_valid(oos_snapshot[i]) )
                 {
                     shadow_free(d, oos_snapshot[i]);
-                    oos_snapshot[i] = _mfn(INVALID_MFN);
+                    oos_snapshot[i] = INVALID_MFN;
                 }
         }
 #endif /* OOS */
@@ -3449,7 +3448,7 @@ static int shadow_one_bit_disable(struct domain *d, u32 mode)
                     if ( mfn_valid(oos_snapshot[i]) )
                     {
                         shadow_free(d, oos_snapshot[i]);
-                        oos_snapshot[i] = _mfn(INVALID_MFN);
+                        oos_snapshot[i] = INVALID_MFN;
                     }
             }
 #endif /* OOS */
@@ -3744,7 +3743,7 @@ int shadow_track_dirty_vram(struct domain *d,
         memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
     else
     {
-        unsigned long map_mfn = INVALID_MFN;
+        unsigned long map_mfn = mfn_x(INVALID_MFN);
         void *map_sl1p = NULL;
 
         /* Iterate over VRAM to track dirty bits. */
@@ -3754,7 +3753,7 @@ int shadow_track_dirty_vram(struct domain *d,
             int dirty = 0;
             paddr_t sl1ma = dirty_vram->sl1ma[i];
 
-            if (mfn_x(mfn) == INVALID_MFN)
+            if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 dirty = 1;
             }
@@ -3830,7 +3829,7 @@ int shadow_track_dirty_vram(struct domain *d,
             for ( i = begin_pfn; i < end_pfn; i++ )
             {
                 mfn_t mfn = get_gfn_query_unlocked(d, i, &t);
-                if ( mfn_x(mfn) != INVALID_MFN )
+                if ( !mfn_eq(mfn, INVALID_MFN) )
                     flush_tlb |= sh_remove_write_access(d, mfn, 1, 0);
             }
             dirty_vram->last_dirty = -1;
@@ -3968,7 +3967,7 @@ void shadow_audit_tables(struct vcpu *v)
         }
     }
 
-    hash_vcpu_foreach(v, mask, callbacks, _mfn(INVALID_MFN));
+    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN);
 }
 
 #endif /* Shadow audit */
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index dfe59a2..f892e2f 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -177,7 +177,7 @@ sh_walk_guest_tables(struct vcpu *v, unsigned long va, walk_t *gw,
 {
     return guest_walk_tables(v, p2m_get_hostp2m(v->domain), va, gw, pfec,
 #if GUEST_PAGING_LEVELS == 3 /* PAE */
-                             _mfn(INVALID_MFN),
+                             INVALID_MFN,
                              v->arch.paging.shadow.gl3e
 #else /* 32 or 64 */
                              pagetable_get_mfn(v->arch.guest_table),
@@ -336,32 +336,32 @@ static void sh_audit_gw(struct vcpu *v, walk_t *gw)
     if ( mfn_valid(gw->l4mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l4mfn,
                                                 SH_type_l4_shadow))) )
-        (void) sh_audit_l4_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_l4_table(v, smfn, INVALID_MFN);
     if ( mfn_valid(gw->l3mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l3mfn,
                                                 SH_type_l3_shadow))) )
-        (void) sh_audit_l3_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_l3_table(v, smfn, INVALID_MFN);
 #endif /* PAE or 64... */
     if ( mfn_valid(gw->l2mfn) )
     {
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
+            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
 #if GUEST_PAGING_LEVELS == 3
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2h_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
+            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
 #endif
     }
     if ( mfn_valid(gw->l1mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l1mfn,
                                                 SH_type_l1_shadow))) )
-        (void) sh_audit_l1_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_l1_table(v, smfn, INVALID_MFN);
     else if ( (guest_l2e_get_flags(gw->l2e) & _PAGE_PRESENT)
               && (guest_l2e_get_flags(gw->l2e) & _PAGE_PSE)
               && mfn_valid(
               (smfn = get_fl1_shadow_status(d, guest_l2e_get_gfn(gw->l2e)))) )
-        (void) sh_audit_fl1_table(v, smfn, _mfn(INVALID_MFN));
+        (void) sh_audit_fl1_table(v, smfn, INVALID_MFN);
 }
 
 #else
@@ -1752,7 +1752,7 @@ static shadow_l2e_t * shadow_get_and_create_l2e(struct vcpu *v,
 {
 #if GUEST_PAGING_LEVELS >= 4 /* 64bit... */
     struct domain *d = v->domain;
-    mfn_t sl3mfn = _mfn(INVALID_MFN);
+    mfn_t sl3mfn = INVALID_MFN;
     shadow_l3e_t *sl3e;
     if ( !mfn_valid(gw->l2mfn) ) return NULL; /* No guest page. */
     /* Get the l3e */
@@ -2158,7 +2158,7 @@ static int validate_gl4e(struct vcpu *v, void *new_ge, mfn_t sl4mfn, void *se)
     shadow_l4e_t new_sl4e;
     guest_l4e_t new_gl4e = *(guest_l4e_t *)new_ge;
     shadow_l4e_t *sl4p = se;
-    mfn_t sl3mfn = _mfn(INVALID_MFN);
+    mfn_t sl3mfn = INVALID_MFN;
     struct domain *d = v->domain;
     p2m_type_t p2mt;
     int result = 0;
@@ -2217,7 +2217,7 @@ static int validate_gl3e(struct vcpu *v, void *new_ge, mfn_t sl3mfn, void *se)
     shadow_l3e_t new_sl3e;
     guest_l3e_t new_gl3e = *(guest_l3e_t *)new_ge;
     shadow_l3e_t *sl3p = se;
-    mfn_t sl2mfn = _mfn(INVALID_MFN);
+    mfn_t sl2mfn = INVALID_MFN;
     p2m_type_t p2mt;
     int result = 0;
 
@@ -2250,7 +2250,7 @@ static int validate_gl2e(struct vcpu *v, void *new_ge, mfn_t sl2mfn, void *se)
     shadow_l2e_t new_sl2e;
     guest_l2e_t new_gl2e = *(guest_l2e_t *)new_ge;
     shadow_l2e_t *sl2p = se;
-    mfn_t sl1mfn = _mfn(INVALID_MFN);
+    mfn_t sl1mfn = INVALID_MFN;
     p2m_type_t p2mt;
     int result = 0;
 
@@ -2608,7 +2608,7 @@ static inline void check_for_early_unshadow(struct vcpu *v, mfn_t gmfn)
 static inline void reset_early_unshadow(struct vcpu *v)
 {
 #if SHADOW_OPTIMIZATIONS & SHOPT_EARLY_UNSHADOW
-    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = INVALID_MFN;
+    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = mfn_x(INVALID_MFN);
 #endif
 }
 
@@ -4105,10 +4105,10 @@ sh_update_cr3(struct vcpu *v, int do_locking)
                                            ? SH_type_l2h_shadow
                                            : SH_type_l2_shadow);
                 else
-                    sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
+                    sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
             }
             else
-                sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
+                sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
         }
     }
 #elif GUEST_PAGING_LEVELS == 4
@@ -4531,7 +4531,7 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
 
         if ( fast_path ) {
             if ( pagetable_is_null(v->arch.shadow_table[i]) )
-                smfn = _mfn(INVALID_MFN);
+                smfn = INVALID_MFN;
             else
                 smfn = _mfn(pagetable_get_pfn(v->arch.shadow_table[i]));
         }
@@ -4540,8 +4540,8 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
             /* retrieving the l2s */
             gmfn = get_gfn_query_unlocked(d, gfn_x(guest_l3e_get_gfn(gl3e[i])),
                                           &p2mt);
-            smfn = unlikely(mfn_x(gmfn) == INVALID_MFN)
-                   ? _mfn(INVALID_MFN)
+            smfn = unlikely(mfn_eq(gmfn, INVALID_MFN))
+                   ? INVALID_MFN
                    : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
         }
 
@@ -4846,7 +4846,7 @@ int sh_audit_fl1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
 {
     guest_l1e_t *gl1e, e;
     shadow_l1e_t *sl1e;
-    mfn_t gl1mfn = _mfn(INVALID_MFN);
+    mfn_t gl1mfn = INVALID_MFN;
     int f;
     int done = 0;
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 45273d4..42c07ee 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -117,7 +117,7 @@ static void vcpu_info_reset(struct vcpu *v)
     v->vcpu_info = ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
                     ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
                     : &dummy_vcpu_info);
-    v->vcpu_info_mfn = INVALID_MFN;
+    v->vcpu_info_mfn = mfn_x(INVALID_MFN);
 }
 
 struct vcpu *alloc_vcpu(
@@ -1141,7 +1141,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
         return -EINVAL;
 
-    if ( v->vcpu_info_mfn != INVALID_MFN )
+    if ( v->vcpu_info_mfn != mfn_x(INVALID_MFN) )
         return -EINVAL;
 
     /* Run this command on yourself or on other offline VCPUS. */
@@ -1205,7 +1205,7 @@ void unmap_vcpu_info(struct vcpu *v)
 {
     unsigned long mfn;
 
-    if ( v->vcpu_info_mfn == INVALID_MFN )
+    if ( v->vcpu_info_mfn == mfn_x(INVALID_MFN) )
         return;
 
     mfn = v->vcpu_info_mfn;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 3f15543..ecace07 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -244,7 +244,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
                               (readonly) ? P2M_ALLOC : P2M_UNSHARE);
     if ( !(*page) )
     {
-        *frame = INVALID_MFN;
+        *frame = mfn_x(INVALID_MFN);
         if ( p2m_is_shared(p2mt) )
             return GNTST_eagain;
         if ( p2m_is_paging(p2mt) )
@@ -260,7 +260,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
     *page = mfn_valid(*frame) ? mfn_to_page(*frame) : NULL;
     if ( (!(*page)) || (!get_page(*page, rd)) )
     {
-        *frame = INVALID_MFN;
+        *frame = mfn_x(INVALID_MFN);
         *page = NULL;
         rc = GNTST_bad_page;
     }
@@ -1785,7 +1785,7 @@ gnttab_transfer(
             p2m_type_t __p2mt;
             mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &__p2mt));
             if ( p2m_is_shared(__p2mt) || !p2m_is_valid(__p2mt) )
-                mfn = INVALID_MFN;
+                mfn = mfn_x(INVALID_MFN);
         }
 #else
         mfn = mfn_x(gfn_to_mfn(d, _gfn(gop.mfn)));
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index afbb1a1..7f207ec 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -55,7 +55,7 @@
 
 TYPE_SAFE(unsigned long, mfn);
 #define PRI_mfn          "05lx"
-#define INVALID_MFN      (~0UL)
+#define INVALID_MFN      _mfn(~0UL)
 
 #ifndef mfn_t
 #define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
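
For reference, the typesafe machinery behind the new definition boils down
to roughly the sketch below (simplified: the real TYPE_SAFE macro lives in
xen/include/xen/typesafe.h, only wraps the value in a struct in debug
builds, and Xen's mfn_eq() returns bool_t rather than bool):

    #include <stdbool.h>

    /* Rough sketch of what TYPE_SAFE(unsigned long, mfn) generates in
     * debug builds; release builds reduce mfn_t to a plain integer. */
    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long n) { return (mfn_t){ n }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

    /* mfn_t being a struct, a plain == no longer compiles; comparisons
     * have to go through mfn_eq(), hence the conversions in this patch. */
    static inline bool mfn_eq(mfn_t x, mfn_t y)
    {
        return mfn_x(x) == mfn_x(y);
    }

    #define INVALID_MFN _mfn(~0UL)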
-- 
1.9.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (2 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:05   ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 05/14] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn Julien Grall
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, sstabellini, Feng Wu, Suravee Suthikulpanit,
	Jun Nakajima, Andrew Cooper, Tim Deegan, George Dunlap,
	Julien Grall, Paul Durrant, Jan Beulich, Mukesh Rathor,
	Boris Ostrovsky

Also take the opportunity to convert arch/x86/debug.c to the typesafe gfn.
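
The conversion follows two idioms, sketched below on hypothetical code:
places still holding a raw unsigned long compare against
gfn_x(INVALID_GFN), while fully converted places hold a gfn_t and use
gfn_eq():

    /* Hypothetical snippet, for illustration only. */

    /* Not yet converted: raw unsigned long, so unwrap the constant... */
    unsigned long raw_gfn = gfn_x(INVALID_GFN);

    if ( raw_gfn == gfn_x(INVALID_GFN) )
        raw_gfn = 0;

    /* ...fully converted: typesafe gfn_t, initialised and compared
     * without ever unwrapping the value. */
    gfn_t gfn = INVALID_GFN;

    if ( gfn_eq(gfn, INVALID_GFN) )
        gfn = _gfn(0);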

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Feng Wu <feng.wu@intel.com>

    Changes in v6:
        - Add Stefano's acked-by for ARM bits
        - Remove set of brackets when it is not necessary
        - Add Andrew's reviewed-by

    Changes in v5:
        - Patch added
---
 xen/arch/arm/p2m.c                      |  4 ++--
 xen/arch/x86/debug.c                    | 18 +++++++++---------
 xen/arch/x86/domain.c                   |  2 +-
 xen/arch/x86/hvm/emulate.c              |  7 ++++---
 xen/arch/x86/hvm/hvm.c                  |  6 +++---
 xen/arch/x86/hvm/ioreq.c                |  8 ++++----
 xen/arch/x86/hvm/svm/nestedsvm.c        |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c              |  6 +++---
 xen/arch/x86/mm/altp2m.c                |  2 +-
 xen/arch/x86/mm/hap/guest_walk.c        | 10 +++++-----
 xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
 xen/arch/x86/mm/p2m-pod.c               |  6 +++---
 xen/arch/x86/mm/p2m.c                   | 18 +++++++++---------
 xen/arch/x86/mm/shadow/common.c         |  2 +-
 xen/arch/x86/mm/shadow/multi.c          |  2 +-
 xen/arch/x86/mm/shadow/private.h        |  2 +-
 xen/drivers/passthrough/amd/iommu_map.c |  2 +-
 xen/drivers/passthrough/vtd/iommu.c     |  4 ++--
 xen/drivers/passthrough/x86/iommu.c     |  2 +-
 xen/include/asm-x86/guest_pt.h          |  4 ++--
 xen/include/asm-x86/p2m.h               |  2 +-
 xen/include/xen/mm.h                    |  2 +-
 22 files changed, 57 insertions(+), 56 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d690602..c938dde 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -479,7 +479,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
     }
 
     /* If request to get default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         *access = memaccess[p2m->default_access];
         return 0;
@@ -1879,7 +1879,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     p2m->mem_access_enabled = true;
 
     /* If request to set default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         p2m->default_access = a;
         return 0;
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 9213ea7..3030022 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -44,8 +44,7 @@ typedef unsigned char dbgbyte_t;
 
 /* Returns: mfn for the given (hvm guest) vaddr */
 static mfn_t
-dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
-                unsigned long *gfn)
+dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr, gfn_t *gfn)
 {
     mfn_t mfn;
     uint32_t pfec = PFEC_page_present;
@@ -53,14 +52,14 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
 
     DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
 
-    *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
-    if ( *gfn == INVALID_GFN )
+    *gfn = _gfn(paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec));
+    if ( gfn_eq(*gfn, INVALID_GFN) )
     {
         DBGP2("kdb:bad gfn from gva_to_gfn\n");
         return INVALID_MFN;
     }
 
-    mfn = get_gfn(dp, *gfn, &gfntype);
+    mfn = get_gfn(dp, gfn_x(*gfn), &gfntype);
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
@@ -72,7 +71,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
 
     if ( mfn_eq(mfn, INVALID_MFN) )
     {
-        put_gfn(dp, *gfn);
+        put_gfn(dp, gfn_x(*gfn));
         *gfn = INVALID_GFN;
     }
 
@@ -165,7 +164,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
         char *va;
         unsigned long addr = (unsigned long)gaddr;
         mfn_t mfn;
-        unsigned long gfn = INVALID_GFN, pagecnt;
+        gfn_t gfn = INVALID_GFN;
+        unsigned long pagecnt;
 
         pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
 
@@ -189,8 +189,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
         }
 
         unmap_domain_page(va);
-        if ( gfn != INVALID_GFN )
-            put_gfn(dp, gfn);
+        if ( !gfn_eq(gfn, INVALID_GFN) )
+            put_gfn(dp, gfn_x(gfn));
 
         addr += pagecnt;
         buf += pagecnt;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bb59247..c8c7e2d 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -783,7 +783,7 @@ int arch_domain_soft_reset(struct domain *d)
      * gfn == INVALID_GFN indicates that the shared_info page was never mapped
      * to the domain's address space and there is nothing to replace.
      */
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
         goto exit_put_page;
 
     if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 855af4d..c55ad7b 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -455,7 +455,7 @@ static int hvmemul_linear_to_phys(
             return rc;
         pfn = _paddr >> PAGE_SHIFT;
     }
-    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == INVALID_GFN )
+    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
     {
         if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
             return X86EMUL_RETRY;
@@ -472,7 +472,8 @@ static int hvmemul_linear_to_phys(
         npfn = paging_gva_to_gfn(curr, addr, &pfec);
 
         /* Is it contiguous with the preceding PFNs? If not then we're done. */
-        if ( (npfn == INVALID_GFN) || (npfn != (pfn + (reverse ? -i : i))) )
+        if ( (npfn == gfn_x(INVALID_GFN)) ||
+             (npfn != (pfn + (reverse ? -i : i))) )
         {
             if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
                 return X86EMUL_RETRY;
@@ -480,7 +481,7 @@ static int hvmemul_linear_to_phys(
             if ( done == 0 )
             {
                 ASSERT(!reverse);
-                if ( npfn != INVALID_GFN )
+                if ( npfn != gfn_x(INVALID_GFN) )
                     return X86EMUL_UNHANDLEABLE;
                 hvm_inject_page_fault(pfec, addr & PAGE_MASK);
                 return X86EMUL_EXCEPTION;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f3faf2e..bb39d5f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3039,7 +3039,7 @@ static enum hvm_copy_result __hvm_copy(
         if ( flags & HVMCOPY_virt )
         {
             gfn = paging_gva_to_gfn(curr, addr, &pfec);
-            if ( gfn == INVALID_GFN )
+            if ( gfn == gfn_x(INVALID_GFN) )
             {
                 if ( pfec & PFEC_page_paged )
                     return HVMCOPY_gfn_paged_out;
@@ -3154,7 +3154,7 @@ static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
         count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
 
         gfn = paging_gva_to_gfn(curr, addr, &pfec);
-        if ( gfn == INVALID_GFN )
+        if ( gfn == gfn_x(INVALID_GFN) )
         {
             if ( pfec & PFEC_page_paged )
                 return HVMCOPY_gfn_paged_out;
@@ -5298,7 +5298,7 @@ static int do_altp2m_op(
              a.u.enable_notify.vcpu_id != curr->vcpu_id )
             rc = -EINVAL;
 
-        if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
+        if ( !gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN) ||
              mfn_eq(get_gfn_query_unlocked(curr->domain,
                     a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
             return -EINVAL;
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 7148ac4..d2245e2 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -204,7 +204,7 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
 {
     unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
 
-    if ( gmfn != INVALID_GFN )
+    if ( gmfn != gfn_x(INVALID_GFN) )
         set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
 }
 
@@ -420,7 +420,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
     if ( rc )
         return rc;
 
-    if ( bufioreq_pfn != INVALID_GFN )
+    if ( bufioreq_pfn != gfn_x(INVALID_GFN) )
         rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
 
     if ( rc )
@@ -434,8 +434,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
                                         bool_t handle_bufioreq)
 {
     struct domain *d = s->domain;
-    unsigned long ioreq_pfn = INVALID_GFN;
-    unsigned long bufioreq_pfn = INVALID_GFN;
+    unsigned long ioreq_pfn = gfn_x(INVALID_GFN);
+    unsigned long bufioreq_pfn = gfn_x(INVALID_GFN);
     int rc;
 
     if ( is_default )
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 9d2ac09..f9b38ab 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1200,7 +1200,7 @@ nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     /* Walk the guest-supplied NPT table, just as if it were a pagetable */
     gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
 
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
         return NESTEDHVM_PAGEFAULT_INJECT;
 
     *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a061420..088f454 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2057,13 +2057,13 @@ static int vmx_vcpu_emulate_vmfunc(struct cpu_user_regs *regs)
 static bool_t vmx_vcpu_emulate_ve(struct vcpu *v)
 {
     bool_t rc = 0, writable;
-    unsigned long gfn = gfn_x(vcpu_altp2m(v).veinfo_gfn);
+    gfn_t gfn = vcpu_altp2m(v).veinfo_gfn;
     ve_info_t *veinfo;
 
-    if ( gfn == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
         return 0;
 
-    veinfo = hvm_map_guest_frame_rw(gfn, 0, &writable);
+    veinfo = hvm_map_guest_frame_rw(gfn_x(gfn), 0, &writable);
     if ( !veinfo )
         return 0;
     if ( !writable || veinfo->semaphore != 0 )
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 10605c8..930bdc2 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -26,7 +26,7 @@ altp2m_vcpu_reset(struct vcpu *v)
     struct altp2mvcpu *av = &vcpu_altp2m(v);
 
     av->p2midx = INVALID_ALTP2M;
-    av->veinfo_gfn = _gfn(INVALID_GFN);
+    av->veinfo_gfn = INVALID_GFN;
 }
 
 void
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index d2716f9..1b1a15d 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -70,14 +70,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
         if ( top_page )
             put_page(top_page);
         p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
-        return INVALID_GFN;
+        return gfn_x(INVALID_GFN);
     }
     if ( p2m_is_shared(p2mt) )
     {
         pfec[0] = PFEC_page_shared;
         if ( top_page )
             put_page(top_page);
-        return INVALID_GFN;
+        return gfn_x(INVALID_GFN);
     }
     if ( !top_page )
     {
@@ -110,12 +110,12 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
             ASSERT(p2m_is_hostp2m(p2m));
             pfec[0] = PFEC_page_paged;
             p2m_mem_paging_populate(p2m->domain, gfn_x(gfn));
-            return INVALID_GFN;
+            return gfn_x(INVALID_GFN);
         }
         if ( p2m_is_shared(p2mt) )
         {
             pfec[0] = PFEC_page_shared;
-            return INVALID_GFN;
+            return gfn_x(INVALID_GFN);
         }
 
         if ( page_order )
@@ -147,7 +147,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
     if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
         pfec[0] &= ~PFEC_insn_fetch;
 
-    return INVALID_GFN;
+    return gfn_x(INVALID_GFN);
 }
 
 
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 94cf832..02b27b1 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -236,7 +236,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
     ept_walk_t gw;
     rwx_acc &= EPTE_RWX_MASK;
 
-    *l1gfn = INVALID_GFN;
+    *l1gfn = gfn_x(INVALID_GFN);
 
     rc = nept_walk_tables(v, l2ga, &gw);
     switch ( rc )
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index f384589..149f529 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1003,7 +1003,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
         unsigned int idx = (mrp->idx + i++) % ARRAY_SIZE(mrp->list);
         unsigned long gfn = mrp->list[idx];
 
-        if ( gfn != INVALID_GFN )
+        if ( gfn != gfn_x(INVALID_GFN) )
         {
             if ( gfn & POD_LAST_SUPERPAGE )
             {
@@ -1020,7 +1020,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
             else
                 p2m_pod_zero_check(p2m, &gfn, 1);
 
-            mrp->list[idx] = INVALID_GFN;
+            mrp->list[idx] = gfn_x(INVALID_GFN);
         }
 
     } while ( (p2m->pod.count == 0) && (i < ARRAY_SIZE(mrp->list)) );
@@ -1031,7 +1031,7 @@ static void pod_eager_record(struct p2m_domain *p2m,
 {
     struct pod_mrp_list *mrp = &p2m->pod.mrp;
 
-    ASSERT(gfn != INVALID_GFN);
+    ASSERT(gfn != gfn_x(INVALID_GFN));
 
     mrp->list[mrp->idx++] =
         gfn | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b93c8a2..ff0cce8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -76,7 +76,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     for ( i = 0; i < ARRAY_SIZE(p2m->pod.mrp.list); ++i )
-        p2m->pod.mrp.list[i] = INVALID_GFN;
+        p2m->pod.mrp.list[i] = gfn_x(INVALID_GFN);
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ret = ept_p2m_init(p2m);
@@ -1863,7 +1863,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     }
 
     /* If request to set default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         p2m->default_access = a;
         return 0;
@@ -1932,7 +1932,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
     };
 
     /* If request to get default access. */
-    if ( gfn_x(gfn) == INVALID_GFN )
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         *access = memaccess[p2m->default_access];
         return 0;
@@ -2113,8 +2113,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         mode = paging_get_nestedmode(v);
         l2_gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
-        if ( l2_gfn == INVALID_GFN )
-            return INVALID_GFN;
+        if ( l2_gfn == gfn_x(INVALID_GFN) )
+            return gfn_x(INVALID_GFN);
 
         /* translate l2 guest gfn into l1 guest gfn */
         rv = nestedhap_walk_L1_p2m(v, l2_gfn, &l1_gfn, &l1_page_order, &l1_p2ma,
@@ -2123,7 +2123,7 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
                                    !!(*pfec & PFEC_insn_fetch));
 
         if ( rv != NESTEDHVM_PAGEFAULT_DONE )
-            return INVALID_GFN;
+            return gfn_x(INVALID_GFN);
 
         /*
          * Sanity check that l1_gfn can be used properly as a 4K mapping, even
@@ -2415,7 +2415,7 @@ static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
     struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
     struct ept_data *ept;
 
-    p2m->min_remapped_gfn = INVALID_GFN;
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
     p2m->max_remapped_gfn = 0;
     ept = &p2m->ept;
     ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
@@ -2551,7 +2551,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
 
     mfn = ap2m->get_entry(ap2m, gfn_x(old_gfn), &t, &a, 0, NULL, NULL);
 
-    if ( gfn_x(new_gfn) == INVALID_GFN )
+    if ( gfn_eq(new_gfn, INVALID_GFN) )
     {
         if ( mfn_valid(mfn) )
             p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
@@ -2613,7 +2613,7 @@ static void p2m_reset_altp2m(struct p2m_domain *p2m)
     /* Uninit and reinit ept to force TLB shootdown */
     ept_p2m_uninit(p2m);
     ept_p2m_init(p2m);
-    p2m->min_remapped_gfn = INVALID_GFN;
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
     p2m->max_remapped_gfn = 0;
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 1c0b6cd..61ccddf 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1707,7 +1707,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
 
     /* Translate the VA to a GFN. */
     gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
     {
         if ( is_hvm_vcpu(v) )
             hvm_inject_page_fault(pfec, vaddr);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index f892e2f..e54c8b7 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3660,7 +3660,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
          */
         if ( is_hvm_vcpu(v) && !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
             pfec[0] &= ~PFEC_insn_fetch;
-        return INVALID_GFN;
+        return gfn_x(INVALID_GFN);
     }
     gfn = guest_walk_to_gfn(&gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index c424ad6..824796f 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -796,7 +796,7 @@ static inline unsigned long vtlb_lookup(struct vcpu *v,
                                         unsigned long va, uint32_t pfec)
 {
     unsigned long page_number = va >> PAGE_SHIFT;
-    unsigned long frame_number = INVALID_GFN;
+    unsigned long frame_number = gfn_x(INVALID_GFN);
     int i = vtlb_hash(page_number);
 
     spin_lock(&v->arch.paging.vtlb_lock);
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index c758459..b8c0a48 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct domain_iommu *hd = dom_iommu(d);
 
-    if ( gfn == INVALID_GFN )
+    if ( gfn == gfn_x(INVALID_GFN) )
         return -EADDRNOTAVAIL;
     ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index f010612..c322b9f 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -611,7 +611,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
         if ( iommu_domid == -1 )
             continue;
 
-        if ( page_count != 1 || gfn == INVALID_GFN )
+        if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
             rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
                                        0, flush_dev_iotlb);
         else
@@ -640,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
 
 static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
+    return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
 }
 
 /* clear one page's page table */
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cd435d7..69cd6c5 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
 
-            if ( gfn != INVALID_GFN )
+            if ( gfn != gfn_x(INVALID_GFN) )
             {
                 ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
                 BUG_ON(SHARED_M2P(gfn));
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index a8d980c..79ed4ff 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -32,7 +32,7 @@
 #error GUEST_PAGING_LEVELS not defined
 #endif
 
-#define VALID_GFN(m) (m != INVALID_GFN)
+#define VALID_GFN(m) (m != gfn_x(INVALID_GFN))
 
 static inline int
 valid_gfn(gfn_t m)
@@ -251,7 +251,7 @@ static inline gfn_t
 guest_walk_to_gfn(walk_t *gw)
 {
     if ( !(guest_l1e_get_flags(gw->l1e) & _PAGE_PRESENT) )
-        return _gfn(INVALID_GFN);
+        return INVALID_GFN;
     return guest_l1e_get_gfn(gw->l1e);
 }
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 4ab3574..194020e 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -324,7 +324,7 @@ struct p2m_domain {
 #define NR_POD_MRP_ENTRIES 32
 
 /* Encode ORDER_2M superpage in top bit of GFN */
-#define POD_LAST_SUPERPAGE (INVALID_GFN & ~(INVALID_GFN >> 1))
+#define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))
 
             unsigned long list[NR_POD_MRP_ENTRIES];
             unsigned int idx;
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 7f207ec..58bc0b8 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -84,7 +84,7 @@ static inline bool_t mfn_eq(mfn_t x, mfn_t y)
 
 TYPE_SAFE(unsigned long, gfn);
 #define PRI_gfn          "05lx"
-#define INVALID_GFN      (~0UL)
+#define INVALID_GFN      _gfn(~0UL)
 
 #ifndef gfn_t
 #define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
-- 
1.9.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 05/14] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (3 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 06/14] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn Julien Grall
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The prototype and the declaration of p2m_lookup disagree on how the
function should be used: one expects a frame number whilst the other
expects an address.

Thankfully, every caller is using an address today. However, most of
the callers have to convert a guest frame to an address first. Modify
the interface to take a guest physical frame as parameter and return
a machine frame.

Whilst modifying the interface, use typesafe gfn and mfn for clarity
and to catch possible misuse.
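
A hypothetical caller, distilled from the hunks below, shows the
difference:

    /* Before: addresses in and out, INVALID_PADDR as the error value. */
    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &t);

    if ( maddr == INVALID_PADDR )
        return -ESRCH;
    unsigned long mfn = maddr >> PAGE_SHIFT;

    /* After: typesafe frames in and out, INVALID_MFN as the error value. */
    mfn_t mfn = p2m_lookup(d, _gfn(gfn), &t);

    if ( mfn_eq(mfn, INVALID_MFN) )
        return -ESRCH;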

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v6:
        - Add Stefano's acked-by

    Changes in v4:
        - Use INVALID_MFN_T when possible
---
 xen/arch/arm/p2m.c        | 43 +++++++++++++++++++++++--------------------
 xen/arch/arm/traps.c      | 21 +++++++++++----------
 xen/include/asm-arm/p2m.h |  7 +++----
 3 files changed, 37 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c938dde..54a363a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -140,14 +140,15 @@ void flush_tlb_domain(struct domain *d)
 }
 
 /*
- * Lookup the MFN corresponding to a domain's PFN.
+ * Lookup the MFN corresponding to a domain's GFN.
  *
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a software walk.
  */
-static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
+    const paddr_t paddr = pfn_to_paddr(gfn_x(gfn));
     const unsigned int offsets[4] = {
         zeroeth_table_offset(paddr),
         first_table_offset(paddr),
@@ -158,7 +159,7 @@ static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
         ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK
     };
     lpae_t pte, *map;
-    paddr_t maddr = INVALID_PADDR;
+    mfn_t mfn = INVALID_MFN;
     paddr_t mask = 0;
     p2m_type_t _t;
     unsigned int level, root_table;
@@ -216,21 +217,22 @@ static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
     {
         ASSERT(mask);
         ASSERT(pte.p2m.type != p2m_invalid);
-        maddr = (pte.bits & PADDR_MASK & mask) | (paddr & ~mask);
+        mfn = _mfn(paddr_to_pfn((pte.bits & PADDR_MASK & mask) |
+                                (paddr & ~mask)));
         *t = pte.p2m.type;
     }
 
 err:
-    return maddr;
+    return mfn;
 }
 
-paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
-    paddr_t ret;
+    mfn_t ret;
     struct p2m_domain *p2m = &d->arch.p2m;
 
     spin_lock(&p2m->lock);
-    ret = __p2m_lookup(d, paddr, t);
+    ret = __p2m_lookup(d, gfn, t);
     spin_unlock(&p2m->lock);
 
     return ret;
@@ -493,8 +495,9 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
          * No setting was found in the Radix tree. Check if the
          * entry exists in the page-tables.
          */
-        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
-        if ( INVALID_PADDR == maddr )
+        mfn_t mfn = __p2m_lookup(d, gfn, NULL);
+
+        if ( mfn_eq(mfn, INVALID_MFN) )
             return -ESRCH;
 
         /* If entry exists then it's rwx. */
@@ -1483,8 +1486,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
 {
-    paddr_t p = p2m_lookup(d, pfn_to_paddr(gfn_x(gfn)), NULL);
-    return _mfn(p >> PAGE_SHIFT);
+    return p2m_lookup(d, gfn, NULL);
 }
 
 /*
@@ -1498,8 +1500,8 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
 {
     long rc;
     paddr_t ipa;
-    unsigned long maddr;
-    unsigned long mfn;
+    gfn_t gfn;
+    mfn_t mfn;
     xenmem_access_t xma;
     p2m_type_t t;
     struct page_info *page = NULL;
@@ -1508,11 +1510,13 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
     if ( rc < 0 )
         goto err;
 
+    gfn = _gfn(paddr_to_pfn(ipa));
+
     /*
      * We do this first as this is faster in the default case when no
      * permission is set on the page.
      */
-    rc = __p2m_get_mem_access(current->domain, _gfn(paddr_to_pfn(ipa)), &xma);
+    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
     if ( rc < 0 )
         goto err;
 
@@ -1561,12 +1565,11 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We had a mem_access permission limiting the access, but the page type
      * could also be limiting, so we need to check that as well.
      */
-    maddr = __p2m_lookup(current->domain, ipa, &t);
-    if ( maddr == INVALID_PADDR )
+    mfn = __p2m_lookup(current->domain, gfn, &t);
+    if ( mfn_eq(mfn, INVALID_MFN) )
         goto err;
 
-    mfn = maddr >> PAGE_SHIFT;
-    if ( !mfn_valid(mfn) )
+    if ( !mfn_valid(mfn_x(mfn)) )
         goto err;
 
     /*
@@ -1575,7 +1578,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
     if ( t != p2m_ram_rw )
         goto err;
 
-    page = mfn_to_page(mfn);
+    page = mfn_to_page(mfn_x(mfn));
 
     if ( unlikely(!get_page(page, current->domain)) )
         page = NULL;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 44926ca..b653f61 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2319,14 +2319,16 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
 {
     register_t ttbcr = READ_SYSREG(TCR_EL1);
     uint64_t ttbr0 = READ_SYSREG64(TTBR0_EL1);
-    paddr_t paddr;
     uint32_t offset;
     uint32_t *first = NULL, *second = NULL;
+    mfn_t mfn;
+
+    mfn = p2m_lookup(d, _gfn(paddr_to_pfn(ttbr0)), NULL);
 
     printk("dom%d VA 0x%08"PRIvaddr"\n", d->domain_id, addr);
     printk("    TTBCR: 0x%08"PRIregister"\n", ttbcr);
     printk("    TTBR0: 0x%016"PRIx64" = 0x%"PRIpaddr"\n",
-           ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL));
+           ttbr0, pfn_to_paddr(mfn_x(mfn)));
 
     if ( ttbcr & TTBCR_EAE )
     {
@@ -2339,32 +2341,31 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
         return;
     }
 
-    paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL);
-    if ( paddr == INVALID_PADDR )
+    if ( mfn_eq(mfn, INVALID_MFN) )
     {
         printk("Failed TTBR0 maddr lookup\n");
         goto done;
     }
-    first = map_domain_page(_mfn(paddr_to_pfn(paddr)));
+    first = map_domain_page(mfn);
 
     offset = addr >> (12+10);
     printk("1ST[0x%"PRIx32"] (0x%"PRIpaddr") = 0x%08"PRIx32"\n",
-           offset, paddr, first[offset]);
+           offset, pfn_to_paddr(mfn_x(mfn)), first[offset]);
     if ( !(first[offset] & 0x1) ||
          !(first[offset] & 0x2) )
         goto done;
 
-    paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL);
+    mfn = p2m_lookup(d, _gfn(paddr_to_pfn(first[offset])), NULL);
 
-    if ( paddr == INVALID_PADDR )
+    if ( mfn_eq(mfn, INVALID_MFN) )
     {
         printk("Failed L1 entry maddr lookup\n");
         goto done;
     }
-    second = map_domain_page(_mfn(paddr_to_pfn(paddr)));
+    second = map_domain_page(mfn);
     offset = (addr >> 12) & 0x3FF;
     printk("2ND[0x%"PRIx32"] (0x%"PRIpaddr") = 0x%08"PRIx32"\n",
-           offset, paddr, second[offset]);
+           offset, pfn_to_paddr(mfn_x(mfn)), second[offset]);
 
 done:
     if (second) unmap_domain_page(second);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 0d1e61e..f204482 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -135,8 +135,8 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistical info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
-/* Look up the MFN corresponding to a domain's PFN. */
-paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
+/* Look up the MFN corresponding to a domain's GFN. */
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
@@ -201,8 +201,7 @@ static inline struct page_info *get_page_from_gfn(
 {
     struct page_info *page;
     p2m_type_t p2mt;
-    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
-    unsigned long mfn = maddr >> PAGE_SHIFT;
+    unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
 
     if (t)
         *t = p2mt;
-- 
1.9.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 06/14] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (4 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 05/14] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 07/14] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access Julien Grall
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

p2m_cache_flush is expecting GFNs as parameters, not MFNs. Rename
the variables to *gfn* and use the typesafe gfn_t to avoid possible
misuse.

Also, modify the prototype of the function to describe the range
using the start and the number of GFNs. This avoids having to wonder
whether the end is inclusive or exclusive.

Note that the type of the parameter 'start' is changed from xen_pfn_t
(aka uint64_t) to gfn_t (aka unsigned long). This means that a truncation
will occur for ARM32. It is fine because the GFN will always be encoded
on at most 28 bits (40-bit addresses).
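
For reference, the helpers used by the new code can be sketched as
follows (the real definitions are expected to come from
xen/include/xen/mm.h via the prerequisite patch mentioned in the cover
letter):

    static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
    {
        return _gfn(gfn_x(gfn) + i);
    }

    static inline gfn_t gfn_max(gfn_t x, gfn_t y)
    {
        return gfn_x(x) > gfn_x(y) ? x : y;
    }

    static inline gfn_t gfn_min(gfn_t x, gfn_t y)
    {
        return gfn_x(x) < gfn_x(y) ? x : y;
    }

    /* With a (start, nr) pair the end is unambiguously exclusive:
     * gfn_add(start, nr) is the first GFN *not* covered by the flush. */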

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v6:
        - Add Stefano's acked-by

    Changes in v4:
        - This patch was originally called "xen/arm: p2m_cache_flush:
        Use the correct terminology and typesafe gfn"
        - Describe the range using the start and the number of GFNs.

    Changes in v3:
        - Add a word in the commit message about the truncation.

    Changes in v2:
        - Drop _gfn suffix
---
 xen/arch/arm/domctl.c     |  2 +-
 xen/arch/arm/p2m.c        | 11 ++++++-----
 xen/include/asm-arm/p2m.h |  2 +-
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 30453d8..f61f98a 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -30,7 +30,7 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( e < s )
             return -EINVAL;
 
-        return p2m_cache_flush(d, s, e);
+        return p2m_cache_flush(d, _gfn(s), domctl->u.cacheflush.nr_pfns);
     }
     case XEN_DOMCTL_bind_pt_irq:
     {
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 54a363a..1cfb62b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1469,16 +1469,17 @@ int relinquish_p2m_mapping(struct domain *d)
                               d->arch.p2m.default_access);
 }
 
-int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
+    gfn_t end = gfn_add(start, nr);
 
-    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
-    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+    start = gfn_max(start, _gfn(p2m->lowest_mapped_gfn));
+    end = gfn_min(end, _gfn(p2m->max_mapped_gfn));
 
     return apply_p2m_changes(d, CACHEFLUSH,
-                             pfn_to_paddr(start_mfn),
-                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(gfn_x(start)),
+                             pfn_to_paddr(gfn_x(end)),
                              pfn_to_paddr(mfn_x(INVALID_MFN)),
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index f204482..8a96e68 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -139,7 +139,7 @@ void p2m_dump_info(struct domain *d);
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
 /* Clean & invalidate caches corresponding to a region of guest address space */
-int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
-- 
1.9.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 07/14] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (5 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 06/14] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 08/14] xen/arm: dom0_build: Remove dead code in allocate_memory Julien Grall
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini, Shannon Zhao

The parameter 'access' is used by memaccess to temporarily restrict the
permissions. This parameter should not be used for any other purpose
(such as permanently restricting the permissions).

Instead, we should use the default access requested by memaccess. When
memaccess is not enabled, the access will be p2m_access_rwx (i.e. no
restriction applied).

The type p2m_mmio_direct will map the region read-write and
non-executable before any further restriction by memaccess. Note that
this is already the resulting permission with the current combination
of the type and the access, so there is no functional change.
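
Conceptually, the resulting permission can be pictured as the
type-derived permission intersected with the requested access (a
simplified sketch, not the actual Xen code; PERM_* and access_to_perm()
are made up for illustration):

    /* Illustration only: PERM_* and access_to_perm() are hypothetical. */
    static unsigned int resulting_perm(p2m_type_t t, p2m_access_t a)
    {
        unsigned int perm;

        switch ( t )
        {
        case p2m_mmio_direct:
            perm = PERM_R | PERM_W;            /* RW, never executable */
            break;
        case p2m_ram_rw:
            perm = PERM_R | PERM_W | PERM_X;
            break;
        default:
            perm = 0;
            break;
        }

        /* memaccess can only tighten the permission; p2m_access_rwx
         * grants everything and therefore changes nothing. */
        return perm & access_to_perm(a);
    }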

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
Cc: Shannon Zhao <shannon.zhao@linaro.org>

    This patch is a candidate for Xen 4.7. Currently this function is
    only used to map ACPI regions.

    I am wondering if we should introduce a new p2m type for it and map
    this region RO (I am not sure why a guest would want to modify this
    region).

    Changes in v2:
        - Reword the commit message

    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1cfb62b..fcc4513 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1231,7 +1231,7 @@ int map_regions_rw_cache(struct domain *d,
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
                              MATTR_MEM, 0, p2m_mmio_direct,
-                             p2m_access_rw);
+                             d->arch.p2m.default_access);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1244,7 +1244,7 @@ int unmap_regions_rw_cache(struct domain *d,
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
                              MATTR_MEM, 0, p2m_invalid,
-                             p2m_access_rw);
+                             d->arch.p2m.default_access);
 }
 
 int map_mmio_regions(struct domain *d,
-- 
1.9.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 08/14] xen/arm: dom0_build: Remove dead code in allocate_memory
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (6 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 07/14] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 09/14] xen/arm: p2m: Remove unused operation ALLOCATE Julien Grall
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The code to allocate memory when dom0 does not use direct mapping
relies on the presence of memory nodes in the DT.

However, these nodes are not present when booting using UEFI or when
using ACPI.

Rather than fixing the code, remove it: dom0 is always directly
memory mapped, so this code path is never exercised. Also add a check
to catch the case where direct memory mapping is disabled while the
associated RAM bank allocation is not implemented.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v6:
        - Add Stefano's acked-by

    Changes in v4:
        - Patch added
---
 xen/arch/arm/domain_build.c | 58 ++++++---------------------------------------
 1 file changed, 7 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 49185f0..923f48a 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -235,7 +235,7 @@ fail:
  * (as described above) we allow higher allocations and continue until
  * that runs out (or we have allocated sufficient dom0 memory).
  */
-static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
+static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
 {
     const unsigned int min_low_order =
         get_order_from_bytes(min_t(paddr_t, dom0_mem, MB(128)));
@@ -247,6 +247,12 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
     bool_t lowmem = is_32bit_domain(d);
     unsigned int bits;
 
+    /*
+     * TODO: Implement memory bank allocation when DOM0 is not direct
+     * mapped
+     */
+    BUG_ON(!dom0_11_mapping);
+
     printk("Allocating 1:1 mappings totalling %ldMB for dom0:\n",
            /* Don't want format this as PRIpaddr (16 digit hex) */
            (unsigned long)(kinfo->unassigned_mem >> 20));
@@ -343,56 +349,6 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
     }
 }
 
-static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
-{
-
-    struct dt_device_node *memory = NULL;
-    const void *reg;
-    u32 reg_len, reg_size;
-    unsigned int bank = 0;
-
-    if ( dom0_11_mapping )
-        return allocate_memory_11(d, kinfo);
-
-    while ( (memory = dt_find_node_by_type(memory, "memory")) )
-    {
-        int l;
-
-        dt_dprintk("memory node\n");
-
-        reg_size = dt_cells_to_size(dt_n_addr_cells(memory) + dt_n_size_cells(memory));
-
-        reg = dt_get_property(memory, "reg", &reg_len);
-        if ( reg == NULL )
-            panic("Memory node has no reg property");
-
-        for ( l = 0;
-              kinfo->unassigned_mem > 0 && l + reg_size <= reg_len
-                  && kinfo->mem.nr_banks < NR_MEM_BANKS;
-              l += reg_size )
-        {
-            paddr_t start, size;
-
-            if ( dt_device_get_address(memory, bank, &start, &size) )
-                panic("Unable to retrieve the bank %u for %s",
-                      bank, dt_node_full_name(memory));
-
-            if ( size > kinfo->unassigned_mem )
-                size = kinfo->unassigned_mem;
-
-            printk("Populate P2M %#"PRIx64"->%#"PRIx64"\n",
-                   start, start + size);
-            if ( p2m_populate_ram(d, start, start + size) < 0 )
-                panic("Failed to populate P2M");
-            kinfo->mem.bank[kinfo->mem.nr_banks].start = start;
-            kinfo->mem.bank[kinfo->mem.nr_banks].size = size;
-            kinfo->mem.nr_banks++;
-
-            kinfo->unassigned_mem -= size;
-        }
-    }
-}
-
 static int write_properties(struct domain *d, struct kernel_info *kinfo,
                             const struct dt_device_node *node)
 {
-- 
1.9.1



* [PATCH v6 09/14] xen/arm: p2m: Remove unused operation ALLOCATE
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (7 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 08/14] xen/arm: dom0_build: Remove dead code in allocate_memory Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 10/14] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region Julien Grall
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

The operation ALLOCATE is unused. If we ever need it, it could be
reimplemented with INSERT.
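
A hedged sketch of what such a reimplementation could look like,
reusing guest_physmap_add_entry from this file (the error handling is
illustrative only):

    /* Allocate the backing pages up front, then take the INSERT path
     * to map them, instead of allocating inside the table walker. */
    struct page_info *page = alloc_domheap_pages(d, order, 0);

    if ( page == NULL )
        return -ENOMEM;

    return guest_physmap_add_entry(d, gfn, _mfn(page_to_mfn(page)),
                                   order, p2m_ram_rw);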

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v6:
        - Add Stefano's acked-by

    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c        | 67 ++---------------------------------------------
 xen/include/asm-arm/p2m.h |  3 ---
 2 files changed, 2 insertions(+), 68 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index fcc4513..f11094e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -547,7 +547,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
 
 enum p2m_operation {
     INSERT,
-    ALLOCATE,
     REMOVE,
     RELINQUISH,
     CACHEFLUSH,
@@ -667,7 +666,6 @@ static int apply_one_level(struct domain *d,
 {
     const paddr_t level_size = level_sizes[level];
     const paddr_t level_mask = level_masks[level];
-    const paddr_t level_shift = level_shifts[level];
 
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte;
@@ -678,58 +676,6 @@ static int apply_one_level(struct domain *d,
 
     switch ( op )
     {
-    case ALLOCATE:
-        ASSERT(level < 3 || !p2m_valid(orig_pte));
-        ASSERT(*maddr == 0);
-
-        if ( p2m_valid(orig_pte) )
-            return P2M_ONE_DESCEND;
-
-        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
-           /* We only create superpages when mem_access is not in use. */
-             (level == 3 || (level < 3 && !p2m->mem_access_enabled)) )
-        {
-            struct page_info *page;
-
-            page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
-            if ( page )
-            {
-                rc = p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), a);
-                if ( rc < 0 )
-                {
-                    free_domheap_page(page);
-                    return rc;
-                }
-
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
-                if ( level < 3 )
-                    pte.p2m.table = 0;
-                p2m_write_pte(entry, pte, flush_cache);
-                p2m->stats.mappings[level]++;
-
-                *addr += level_size;
-
-                return P2M_ONE_PROGRESS;
-            }
-            else if ( level == 3 )
-                return -ENOMEM;
-        }
-
-        /* L3 is always suitably aligned for mapping (handled, above) */
-        BUG_ON(level == 3);
-
-        /*
-         * If we get here then we failed to allocate a sufficiently
-         * large contiguous region for this level (which can't be
-         * L3) or mem_access is in use. Create a page table and
-         * continue to descend so we try smaller allocations.
-         */
-        rc = p2m_create_table(d, entry, 0, flush_cache);
-        if ( rc < 0 )
-            return rc;
-
-        return P2M_ONE_DESCEND;
-
     case INSERT:
         if ( is_mapping_aligned(*addr, end_gpaddr, *maddr, level_size) &&
            /*
@@ -1169,7 +1115,7 @@ static int apply_p2m_changes(struct domain *d,
         }
     }
 
-    if ( op == ALLOCATE || op == INSERT )
+    if ( op == INSERT )
     {
         p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
         p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
@@ -1197,7 +1143,7 @@ out:
 
     spin_unlock(&p2m->lock);
 
-    if ( rc < 0 && ( op == INSERT || op == ALLOCATE ) &&
+    if ( rc < 0 && ( op == INSERT ) &&
          addr != start_gpaddr )
     {
         BUG_ON(addr == end_gpaddr);
@@ -1212,15 +1158,6 @@ out:
     return rc;
 }
 
-int p2m_populate_ram(struct domain *d,
-                     paddr_t start,
-                     paddr_t end)
-{
-    return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, 0, p2m_ram_rw,
-                             d->arch.p2m.default_access);
-}
-
 int map_regions_rw_cache(struct domain *d,
                          unsigned long start_gfn,
                          unsigned long nr,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8a96e68..4752161 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -141,9 +141,6 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
-/* Setup p2m RAM mapping for domain d from start-end. */
-int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
-
 int map_regions_rw_cache(struct domain *d,
                          unsigned long start_gfn,
                          unsigned long nr_mfns,
-- 
1.9.1



* [PATCH v6 10/14] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region...
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (8 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 09/14] xen/arm: p2m: Remove unused operation ALLOCATE Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 11/14] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache Julien Grall
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

to avoid mixing machine frames with guest frames. Also drop the start_ prefix.
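
For background, the typesafes turn such mix-ups into compile-time
errors; roughly (simplified from the TYPE_SAFE machinery used by
xen/include/xen/mm.h):

    typedef struct { unsigned long mfn; } mfn_t;
    typedef struct { unsigned long gfn; } gfn_t;
    #define _mfn(n)  ((mfn_t) { n })  /* wrap a raw frame number */
    #define mfn_x(m) ((m).mfn)        /* unwrap it again */

    /* Passing a gfn_t where an mfn_t is expected no longer builds. */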

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v6:
        - Qualify what is being mapped
        - Use PRI_mfn

    Changes in v4:
        - Patch added
---
 xen/arch/arm/mm.c         |  2 +-
 xen/arch/arm/p2m.c        | 12 ++++++------
 xen/include/asm-arm/p2m.h |  4 ++--
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0e408f8..b5fc034 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1145,7 +1145,7 @@ int xenmem_add_to_physmap_one(
         if ( extra.res0 )
             return -EOPNOTSUPP;
 
-        rc = map_dev_mmio_region(d, gfn_x(gfn), 1, idx);
+        rc = map_dev_mmio_region(d, gfn, 1, _mfn(idx));
         return rc;
 
     default:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f11094e..5fe1b91 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1211,20 +1211,20 @@ int unmap_mmio_regions(struct domain *d,
 }
 
 int map_dev_mmio_region(struct domain *d,
-                        unsigned long start_gfn,
+                        gfn_t gfn,
                         unsigned long nr,
-                        unsigned long mfn)
+                        mfn_t mfn)
 {
     int res;
 
-    if ( !(nr && iomem_access_permitted(d, mfn, mfn + nr - 1)) )
+    if ( !(nr && iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + nr - 1)) )
         return 0;
 
-    res = map_mmio_regions(d, _gfn(start_gfn), nr, _mfn(mfn));
+    res = map_mmio_regions(d, gfn, nr, mfn);
     if ( res < 0 )
     {
-        printk(XENLOG_G_ERR "Unable to map [%#lx - %#lx] in Dom%d\n",
-               mfn, mfn + nr - 1, d->domain_id);
+        printk(XENLOG_G_ERR "Unable to map MFNs [%#"PRI_mfn" - %#"PRI_mfn"] in Dom%d\n",
+               mfn_x(mfn), mfn_x(mfn) + nr - 1, d->domain_id);
         return res;
     }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 4752161..8d29eda 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -152,9 +152,9 @@ int unmap_regions_rw_cache(struct domain *d,
                            unsigned long mfn);
 
 int map_dev_mmio_region(struct domain *d,
-                        unsigned long start_gfn,
+                        gfn_t gfn,
                         unsigned long nr,
-                        unsigned long mfn);
+                        mfn_t mfn);
 
 int guest_physmap_add_entry(struct domain *d,
                             gfn_t gfn,
-- 
1.9.1



* [PATCH v6 11/14] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache ...
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (9 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 10/14] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 12/14] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

to avoid mixing machine frames with guest frames. Also rename the
function parameters and drop the pointless PAGE_MASK in the caller.
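
The PAGE_MASK is pointless because paddr_to_pfn() discards the
in-page bits anyway:

    /* paddr_to_pfn(addr) == addr >> PAGE_SHIFT, so clearing the low
     * bits first is a no-op: */
    paddr_to_pfn(addr & PAGE_MASK) == paddr_to_pfn(addr)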

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v6:
        - Add Stefano's acked-by

    Changes in v4:
        - Patch added
---
 xen/arch/arm/domain_build.c |  8 ++++----
 xen/arch/arm/p2m.c          | 20 ++++++++++----------
 xen/include/asm-arm/p2m.h   | 12 ++++++------
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 923f48a..60db9e4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1522,9 +1522,9 @@ static void acpi_map_other_tables(struct domain *d)
         addr = acpi_gbl_root_table_list.tables[i].address;
         size = acpi_gbl_root_table_list.tables[i].length;
         res = map_regions_rw_cache(d,
-                                   paddr_to_pfn(addr & PAGE_MASK),
+                                   _gfn(paddr_to_pfn(addr)),
                                    DIV_ROUND_UP(size, PAGE_SIZE),
-                                   paddr_to_pfn(addr & PAGE_MASK));
+                                   _mfn(paddr_to_pfn(addr)));
         if ( res )
         {
              panic(XENLOG_ERR "Unable to map ACPI region 0x%"PRIx64
@@ -1878,9 +1878,9 @@ static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
 
     /* Map the EFI and ACPI tables to Dom0 */
     rc = map_regions_rw_cache(d,
-                              paddr_to_pfn(d->arch.efi_acpi_gpa),
+                              _gfn(paddr_to_pfn(d->arch.efi_acpi_gpa)),
                               PFN_UP(d->arch.efi_acpi_len),
-                              paddr_to_pfn(virt_to_maddr(d->arch.efi_acpi_table)));
+                              _mfn(paddr_to_pfn(virt_to_maddr(d->arch.efi_acpi_table))));
     if ( rc != 0 )
     {
         printk(XENLOG_ERR "Unable to map EFI/ACPI table 0x%"PRIx64
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5fe1b91..2ba9477 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1159,27 +1159,27 @@ out:
 }
 
 int map_regions_rw_cache(struct domain *d,
-                         unsigned long start_gfn,
+                         gfn_t gfn,
                          unsigned long nr,
-                         unsigned long mfn)
+                         mfn_t mfn)
 {
     return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(gfn)),
+                             pfn_to_paddr(gfn_x(gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_MEM, 0, p2m_mmio_direct,
                              d->arch.p2m.default_access);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
-                           unsigned long start_gfn,
+                           gfn_t gfn,
                            unsigned long nr,
-                           unsigned long mfn)
+                           mfn_t mfn)
 {
     return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(start_gfn),
-                             pfn_to_paddr(start_gfn + nr),
-                             pfn_to_paddr(mfn),
+                             pfn_to_paddr(gfn_x(gfn)),
+                             pfn_to_paddr(gfn_x(gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8d29eda..6e258b9 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -142,14 +142,14 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
 int map_regions_rw_cache(struct domain *d,
-                         unsigned long start_gfn,
-                         unsigned long nr_mfns,
-                         unsigned long mfn);
+                         gfn_t gfn,
+                         unsigned long nr,
+                         mfn_t mfn);
 
 int unmap_regions_rw_cache(struct domain *d,
-                           unsigned long start_gfn,
-                           unsigned long nr_mfns,
-                           unsigned long mfn);
+                           gfn_t gfn,
+                           unsigned long nr,
+                           mfn_t mfn);
 
 int map_dev_mmio_region(struct domain *d,
                         gfn_t gfn,
-- 
1.9.1



* [PATCH v6 12/14] xen/arm: p2m: Introduce helpers to insert and remove mapping
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (10 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 11/14] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-11 16:16   ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 13/14] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn Julien Grall
  2016-07-06 13:01 ` [PATCH v6 14/14] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe Julien Grall
  13 siblings, 1 reply; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

More than half of the arguments to INSERT and REMOVE are the same for
every caller. Simplify the callers of apply_p2m_changes by adding new
helpers which fill the common arguments with default values.
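
As a usage sketch (a hypothetical caller, not part of this patch),
only the parameters that actually vary need to be spelled out:

    /* Map nr frames of ordinary RAM at gfn, with the domain's
     * default access rights filled in by the helper. */
    rc = p2m_insert_mapping(d, gfn, nr, mfn, MATTR_MEM, p2m_ram_rw);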

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v5:
        - Add missing Signed-off-by

    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c | 70 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2ba9477..b98eff4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1158,17 +1158,40 @@ out:
     return rc;
 }
 
+static inline int p2m_insert_mapping(struct domain *d,
+                                     gfn_t start_gfn,
+                                     unsigned long nr,
+                                     mfn_t mfn,
+                                     int mattr, p2m_type_t t)
+{
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
+                             mattr, 0, t, d->arch.p2m.default_access);
+}
+
+static inline int p2m_remove_mapping(struct domain *d,
+                                     gfn_t start_gfn,
+                                     unsigned long nr,
+                                     mfn_t mfn)
+{
+    return apply_p2m_changes(d, REMOVE,
+                             pfn_to_paddr(gfn_x(start_gfn)),
+                             pfn_to_paddr(gfn_x(start_gfn) + nr),
+                             pfn_to_paddr(mfn_x(mfn)),
+                             /* arguments below not used when removing mapping */
+                             MATTR_MEM, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
+}
+
 int map_regions_rw_cache(struct domain *d,
                          gfn_t gfn,
                          unsigned long nr,
                          mfn_t mfn)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(gfn)),
-                             pfn_to_paddr(gfn_x(gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_MEM, 0, p2m_mmio_direct,
-                             d->arch.p2m.default_access);
+    return p2m_insert_mapping(d, gfn, nr, mfn,
+                              MATTR_MEM, p2m_mmio_direct);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1176,12 +1199,7 @@ int unmap_regions_rw_cache(struct domain *d,
                            unsigned long nr,
                            mfn_t mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(gfn_x(gfn)),
-                             pfn_to_paddr(gfn_x(gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_MEM, 0, p2m_invalid,
-                             d->arch.p2m.default_access);
+    return p2m_remove_mapping(d, gfn, nr, mfn);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -1189,12 +1207,8 @@ int map_mmio_regions(struct domain *d,
                      unsigned long nr,
                      mfn_t mfn)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_DEV, 0, p2m_mmio_direct,
-                             d->arch.p2m.default_access);
+    return p2m_insert_mapping(d, start_gfn, nr, mfn,
+                              MATTR_DEV, p2m_mmio_direct);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -1202,12 +1216,7 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        mfn_t mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
-                             MATTR_DEV, 0, p2m_invalid,
-                             d->arch.p2m.default_access);
+    return p2m_remove_mapping(d, start_gfn, nr, mfn);
 }
 
 int map_dev_mmio_region(struct domain *d,
@@ -1237,22 +1246,15 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(gfn)),
-                             pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
-                             pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, t,
-                             d->arch.p2m.default_access);
+    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
+                              MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                gfn_t gfn,
                                mfn_t mfn, unsigned int page_order)
 {
-    apply_p2m_changes(d, REMOVE,
-                      pfn_to_paddr(gfn_x(gfn)),
-                      pfn_to_paddr(gfn_x(gfn) + (1<<page_order)),
-                      pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
-                      d->arch.p2m.default_access);
+    p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
 int p2m_alloc_table(struct domain *d)
-- 
1.9.1



* [PATCH v6 13/14] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (11 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 12/14] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  2016-07-06 13:01 ` [PATCH v6 14/14] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe Julien Grall
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v6:
        - Add Stefano's acked-by

    Changes in v4:
        - Patch added
---
 xen/arch/arm/mm.c         |  2 +-
 xen/arch/arm/p2m.c        | 18 +++++++++---------
 xen/include/asm-arm/p2m.h |  4 ++--
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index b5fc034..4e256c2 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1004,7 +1004,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return d->arch.p2m.max_mapped_gfn;
+    return gfn_x(d->arch.p2m.max_mapped_gfn);
 }
 
 void share_xen_page_with_guest(struct page_info *page,
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b98eff4..c7f6766 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -976,7 +976,7 @@ static int apply_p2m_changes(struct domain *d,
                  * This is set in preempt_count_limit.
                  *
                  */
-                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = _gfn(addr >> PAGE_SHIFT);
                 rc = -ERESTART;
                 goto out;
 
@@ -1117,8 +1117,8 @@ static int apply_p2m_changes(struct domain *d,
 
     if ( op == INSERT )
     {
-        p2m->max_mapped_gfn = max(p2m->max_mapped_gfn, egfn);
-        p2m->lowest_mapped_gfn = min(p2m->lowest_mapped_gfn, sgfn);
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn, _gfn(egfn));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, _gfn(sgfn));
     }
 
     rc = 0;
@@ -1383,8 +1383,8 @@ int p2m_init(struct domain *d)
 
     p2m->root = NULL;
 
-    p2m->max_mapped_gfn = 0;
-    p2m->lowest_mapped_gfn = ULONG_MAX;
+    p2m->max_mapped_gfn = _gfn(0);
+    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
     p2m->default_access = p2m_access_rwx;
     p2m->mem_access_enabled = false;
@@ -1401,8 +1401,8 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->lowest_mapped_gfn),
-                              pfn_to_paddr(p2m->max_mapped_gfn),
+                              pfn_to_paddr(gfn_x(p2m->lowest_mapped_gfn)),
+                              pfn_to_paddr(gfn_x(p2m->max_mapped_gfn)),
                               pfn_to_paddr(mfn_x(INVALID_MFN)),
                               MATTR_MEM, 0, p2m_invalid,
                               d->arch.p2m.default_access);
@@ -1413,8 +1413,8 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
     struct p2m_domain *p2m = &d->arch.p2m;
     gfn_t end = gfn_add(start, nr);
 
-    start = gfn_max(start, _gfn(p2m->lowest_mapped_gfn));
-    end = gfn_min(end, _gfn(p2m->max_mapped_gfn));
+    start = gfn_max(start, p2m->lowest_mapped_gfn);
+    end = gfn_min(end, p2m->max_mapped_gfn);
 
     return apply_p2m_changes(d, CACHEFLUSH,
                              pfn_to_paddr(gfn_x(start)),
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 6e258b9..34096bc 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -34,13 +34,13 @@ struct p2m_domain {
     /* Highest guest frame that's ever been mapped in the p2m
      * Only takes into account ram and foreign mapping
      */
-    unsigned long max_mapped_gfn;
+    gfn_t max_mapped_gfn;
 
     /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
      * preemptible manner this is update to track recall where to
      * resume the search. Apart from during teardown this can only
      * decrease. */
-    unsigned long lowest_mapped_gfn;
+    gfn_t lowest_mapped_gfn;
 
     /* Gather some statistics for information purposes only */
     struct {
-- 
1.9.1



* [PATCH v6 14/14] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe
  2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
                   ` (12 preceding siblings ...)
  2016-07-06 13:01 ` [PATCH v6 13/14] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn Julien Grall
@ 2016-07-06 13:01 ` Julien Grall
  13 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, sstabellini

Most of the callers of apply_p2m_changes have a GFN, an MFN and the
number of frames to change in hand.

Rather than asking each caller to convert the frames to addresses,
rework the interface to take the GFN, the MFN and the number of
frames.

Note that it would be possible to do more clean-up in apply_p2m_changes,
but this will be done in a follow-up series.
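
The gfn arithmetic below stays inside the typesafe world via small
helpers, whose usual shape is roughly (a sketch; see
xen/include/xen/mm.h for the real definitions):

    static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
    {
        return _gfn(gfn_x(gfn) + i);
    }

    #define gfn_max(x, y) _gfn(max(gfn_x(x), gfn_x(y)))
    #define gfn_min(x, y) _gfn(min(gfn_x(x), gfn_x(y)))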

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c | 62 ++++++++++++++++++++++++------------------------------
 1 file changed, 28 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c7f6766..ce1c1e0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -906,25 +906,26 @@ static void update_reference_mapping(struct page_info *page,
 
 static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
-                     paddr_t start_gpaddr,
-                     paddr_t end_gpaddr,
-                     paddr_t maddr,
+                     gfn_t sgfn,
+                     unsigned long nr,
+                     mfn_t smfn,
                      int mattr,
                      uint32_t mask,
                      p2m_type_t t,
                      p2m_access_t a)
 {
+    paddr_t start_gpaddr = pfn_to_paddr(gfn_x(sgfn));
+    paddr_t end_gpaddr = pfn_to_paddr(gfn_x(sgfn) + nr);
+    paddr_t maddr = pfn_to_paddr(mfn_x(smfn));
     int rc, ret;
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
     struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
-    paddr_t addr, orig_maddr = maddr;
+    paddr_t addr;
     unsigned int level = 0;
     unsigned int cur_root_table = ~0;
     unsigned int cur_offset[4] = { ~0, ~0, ~0, ~0 };
     unsigned int count = 0;
-    const unsigned long sgfn = paddr_to_pfn(start_gpaddr),
-                        egfn = paddr_to_pfn(end_gpaddr);
     const unsigned int preempt_count_limit = (op == MEMACCESS) ? 1 : 0x2000;
     const bool_t preempt = !is_idle_vcpu(current);
     bool_t flush = false;
@@ -986,9 +987,9 @@ static int apply_p2m_changes(struct domain *d,
                  * Preempt setting mem_access permissions as required by XSA-89,
                  * if it's not the last iteration.
                  */
-                uint32_t progress = paddr_to_pfn(addr) - sgfn + 1;
+                uint32_t progress = paddr_to_pfn(addr) - gfn_x(sgfn) + 1;
 
-                if ( (egfn - sgfn) > progress && !(progress & mask) )
+                if ( nr > progress && !(progress & mask) )
                 {
                     rc = progress;
                     goto out;
@@ -1117,8 +1118,9 @@ static int apply_p2m_changes(struct domain *d,
 
     if ( op == INSERT )
     {
-        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn, _gfn(egfn));
-        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, _gfn(sgfn));
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
+                                      gfn_add(sgfn, nr));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -1127,7 +1129,7 @@ out:
     if ( flush )
     {
         flush_tlb_domain(d);
-        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
         if ( !rc )
             rc = ret;
     }
@@ -1146,12 +1148,14 @@ out:
     if ( rc < 0 && ( op == INSERT ) &&
          addr != start_gpaddr )
     {
+        unsigned long gfn = paddr_to_pfn(addr);
+
         BUG_ON(addr == end_gpaddr);
         /*
          * addr keeps the address of the end of the last successfully-inserted
          * mapping.
          */
-        apply_p2m_changes(d, REMOVE, start_gpaddr, addr, orig_maddr,
+        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
                           mattr, 0, p2m_invalid, d->arch.p2m.default_access);
     }
 
@@ -1164,10 +1168,7 @@ static inline int p2m_insert_mapping(struct domain *d,
                                      mfn_t mfn,
                                      int mattr, p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
+    return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
                              mattr, 0, t, d->arch.p2m.default_access);
 }
 
@@ -1176,10 +1177,7 @@ static inline int p2m_remove_mapping(struct domain *d,
                                      unsigned long nr,
                                      mfn_t mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
-                             pfn_to_paddr(gfn_x(start_gfn)),
-                             pfn_to_paddr(gfn_x(start_gfn) + nr),
-                             pfn_to_paddr(mfn_x(mfn)),
+    return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
                              /* arguments below not used when removing mapping */
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
@@ -1399,13 +1397,13 @@ err:
 int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
+    unsigned long nr;
 
-    return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(gfn_x(p2m->lowest_mapped_gfn)),
-                              pfn_to_paddr(gfn_x(p2m->max_mapped_gfn)),
-                              pfn_to_paddr(mfn_x(INVALID_MFN)),
-                              MATTR_MEM, 0, p2m_invalid,
-                              d->arch.p2m.default_access);
+    nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
+
+    return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
+                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
+                             d->arch.p2m.default_access);
 }
 
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
@@ -1416,10 +1414,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
     start = gfn_max(start, p2m->lowest_mapped_gfn);
     end = gfn_min(end, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH,
-                             pfn_to_paddr(gfn_x(start)),
-                             pfn_to_paddr(gfn_x(end)),
-                             pfn_to_paddr(mfn_x(INVALID_MFN)),
+    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
                              MATTR_MEM, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
@@ -1828,10 +1823,9 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
         return 0;
     }
 
-    rc = apply_p2m_changes(d, MEMACCESS,
-                           pfn_to_paddr(gfn_x(gfn) + start),
-                           pfn_to_paddr(gfn_x(gfn) + nr),
-                           0, MATTR_MEM, mask, 0, a);
+    rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
+                           (nr - start), INVALID_MFN,
+                           MATTR_MEM, mask, 0, a);
     if ( rc < 0 )
         return rc;
     else if ( rc > 0 )
-- 
1.9.1



* Re: [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN
  2016-07-06 13:01 ` [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN Julien Grall
@ 2016-07-06 13:04   ` Julien Grall
  2016-07-08 22:01     ` Elena Ufimtseva
  0 siblings, 1 reply; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:04 UTC (permalink / raw)
  To: xen-devel
  Cc: elena.ufimtseva, Kevin Tian, sstabellini, Jun Nakajima,
	George Dunlap, Andrew Cooper, Christoph Egger, Tim Deegan,
	Paul Durrant, Jan Beulich, Liu Jinsong

(CC Elena).

On 06/07/16 14:01, Julien Grall wrote:
> Also take the opportunity to convert arch/x86/debug.c to the typesafe
> mfn and use proper printf formats for MFN/GFN where the surrounding
> code is modified.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>
> ---
> Cc: Christoph Egger <chegger@amazon.de>
> Cc: Liu Jinsong <jinsong.liu@alibaba-inc.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>

I forgot to update the CC list since GDSX maintainership was taken
over by Elena. Sorry for that.

> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Tim Deegan <tim@xen.org>
>
>      Changes in v6:
>          - Add Stefano's acked-by for ARM bits
>          - Use PRI_mfn and PRI_gfn
>          - Remove set of brackets when it is not necessary
>          - Use mfn_add when possible
>          - Add Andrew's reviewed-by
>
>      Changes in v5:
>          - Patch added
> ---
>   xen/arch/arm/p2m.c              |  4 +--
>   xen/arch/x86/cpu/mcheck/mce.c   |  2 +-
>   xen/arch/x86/debug.c            | 58 +++++++++++++++++++++--------------------
>   xen/arch/x86/hvm/hvm.c          |  6 ++---
>   xen/arch/x86/hvm/viridian.c     | 12 ++++-----
>   xen/arch/x86/hvm/vmx/vmx.c      |  2 +-
>   xen/arch/x86/mm/guest_walk.c    |  4 +--
>   xen/arch/x86/mm/hap/hap.c       |  4 +--
>   xen/arch/x86/mm/p2m-ept.c       |  6 ++---
>   xen/arch/x86/mm/p2m-pod.c       | 18 ++++++-------
>   xen/arch/x86/mm/p2m-pt.c        | 18 ++++++-------
>   xen/arch/x86/mm/p2m.c           | 54 +++++++++++++++++++-------------------
>   xen/arch/x86/mm/paging.c        | 12 ++++-----
>   xen/arch/x86/mm/shadow/common.c | 43 +++++++++++++++---------------
>   xen/arch/x86/mm/shadow/multi.c  | 36 ++++++++++++-------------
>   xen/common/domain.c             |  6 ++---
>   xen/common/grant_table.c        |  6 ++---
>   xen/include/xen/mm.h            |  2 +-
>   18 files changed, 147 insertions(+), 146 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 34563bb..d690602 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1461,7 +1461,7 @@ int relinquish_p2m_mapping(struct domain *d)
>       return apply_p2m_changes(d, RELINQUISH,
>                                 pfn_to_paddr(p2m->lowest_mapped_gfn),
>                                 pfn_to_paddr(p2m->max_mapped_gfn),
> -                              pfn_to_paddr(INVALID_MFN),
> +                              pfn_to_paddr(mfn_x(INVALID_MFN)),
>                                 MATTR_MEM, 0, p2m_invalid,
>                                 d->arch.p2m.default_access);
>   }
> @@ -1476,7 +1476,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
>       return apply_p2m_changes(d, CACHEFLUSH,
>                                pfn_to_paddr(start_mfn),
>                                pfn_to_paddr(end_mfn),
> -                             pfn_to_paddr(INVALID_MFN),
> +                             pfn_to_paddr(mfn_x(INVALID_MFN)),
>                                MATTR_MEM, 0, p2m_invalid,
>                                d->arch.p2m.default_access);
>   }
> diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
> index edcbe48..2695b0c 100644
> --- a/xen/arch/x86/cpu/mcheck/mce.c
> +++ b/xen/arch/x86/cpu/mcheck/mce.c
> @@ -1455,7 +1455,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
>                   gfn = PFN_DOWN(gaddr);
>                   mfn = mfn_x(get_gfn(d, gfn, &t));
>
> -                if ( mfn == INVALID_MFN )
> +                if ( mfn == mfn_x(INVALID_MFN) )
>                   {
>                       put_gfn(d, gfn);
>                       put_domain(d);
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 58cae22..9213ea7 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -43,11 +43,11 @@ typedef unsigned long dbgva_t;
>   typedef unsigned char dbgbyte_t;
>
>   /* Returns: mfn for the given (hvm guest) vaddr */
> -static unsigned long
> +static mfn_t
>   dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>                   unsigned long *gfn)
>   {
> -    unsigned long mfn;
> +    mfn_t mfn;
>       uint32_t pfec = PFEC_page_present;
>       p2m_type_t gfntype;
>
> @@ -60,16 +60,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>           return INVALID_MFN;
>       }
>
> -    mfn = mfn_x(get_gfn(dp, *gfn, &gfntype));
> +    mfn = get_gfn(dp, *gfn, &gfntype);
>       if ( p2m_is_readonly(gfntype) && toaddr )
>       {
>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>           mfn = INVALID_MFN;
>       }
>       else
> -        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +        DBGP2("X: vaddr:%lx domid:%d mfn:%#"PRI_mfn"\n",
> +              vaddr, dp->domain_id, mfn_x(mfn));
>
> -    if ( mfn == INVALID_MFN )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>       {
>           put_gfn(dp, *gfn);
>           *gfn = INVALID_GFN;
> @@ -91,7 +92,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>    *       mode.
>    * Returns: mfn for the given (pv guest) vaddr
>    */
> -static unsigned long
> +static mfn_t
>   dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>   {
>       l4_pgentry_t l4e, *l4t;
> @@ -99,31 +100,31 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>       l2_pgentry_t l2e, *l2t;
>       l1_pgentry_t l1e, *l1t;
>       unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
> -    unsigned long mfn = cr3 >> PAGE_SHIFT;
> +    mfn_t mfn = _mfn(cr3 >> PAGE_SHIFT);
>
>       DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id,
>             cr3, pgd3val);
>
>       if ( pgd3val == 0 )
>       {
> -        l4t = map_domain_page(_mfn(mfn));
> +        l4t = map_domain_page(mfn);
>           l4e = l4t[l4_table_offset(vaddr)];
>           unmap_domain_page(l4t);
> -        mfn = l4e_get_pfn(l4e);
> -        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t,
> -              l4_table_offset(vaddr), l4e, mfn);
> +        mfn = _mfn(l4e_get_pfn(l4e));
> +        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%#"PRI_mfn"\n", l4t,
> +              l4_table_offset(vaddr), l4e, mfn_x(mfn));
>           if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
>           {
>               DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>               return INVALID_MFN;
>           }
>
> -        l3t = map_domain_page(_mfn(mfn));
> +        l3t = map_domain_page(mfn);
>           l3e = l3t[l3_table_offset(vaddr)];
>           unmap_domain_page(l3t);
> -        mfn = l3e_get_pfn(l3e);
> -        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t,
> -              l3_table_offset(vaddr), l3e, mfn);
> +        mfn = _mfn(l3e_get_pfn(l3e));
> +        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%#"PRI_mfn"\n", l3t,
> +              l3_table_offset(vaddr), l3e, mfn_x(mfn));
>           if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
>                (l3e_get_flags(l3e) & _PAGE_PSE) )
>           {
> @@ -132,26 +133,26 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>           }
>       }
>
> -    l2t = map_domain_page(_mfn(mfn));
> +    l2t = map_domain_page(mfn);
>       l2e = l2t[l2_table_offset(vaddr)];
>       unmap_domain_page(l2t);
> -    mfn = l2e_get_pfn(l2e);
> -    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
> -          l2e, mfn);
> +    mfn = _mfn(l2e_get_pfn(l2e));
> +    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%#"PRI_mfn"\n",
> +          l2t, l2_table_offset(vaddr), l2e, mfn_x(mfn));
>       if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
>            (l2e_get_flags(l2e) & _PAGE_PSE) )
>       {
>           DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>           return INVALID_MFN;
>       }
> -    l1t = map_domain_page(_mfn(mfn));
> +    l1t = map_domain_page(mfn);
>       l1e = l1t[l1_table_offset(vaddr)];
>       unmap_domain_page(l1t);
> -    mfn = l1e_get_pfn(l1e);
> -    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
> -          l1e, mfn);
> +    mfn = _mfn(l1e_get_pfn(l1e));
> +    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%#"PRI_mfn"\n", l1t, l1_table_offset(vaddr),
> +          l1e, mfn_x(mfn));
>
> -    return mfn_valid(mfn) ? mfn : INVALID_MFN;
> +    return mfn_valid(mfn_x(mfn)) ? mfn : INVALID_MFN;
>   }
>
>   /* Returns: number of bytes remaining to be copied */
> @@ -163,23 +164,24 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
>       {
>           char *va;
>           unsigned long addr = (unsigned long)gaddr;
> -        unsigned long mfn, gfn = INVALID_GFN, pagecnt;
> +        mfn_t mfn;
> +        unsigned long gfn = INVALID_GFN, pagecnt;
>
>           pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
>
>           mfn = (has_hvm_container_domain(dp)
>                  ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
>                  : dbg_pv_va2mfn(addr, dp, pgd3));
> -        if ( mfn == INVALID_MFN )
> +        if ( mfn_eq(mfn, INVALID_MFN) )
>               break;
>
> -        va = map_domain_page(_mfn(mfn));
> +        va = map_domain_page(mfn);
>           va = va + (addr & (PAGE_SIZE-1));
>
>           if ( toaddr )
>           {
>               copy_from_user(va, buf, pagecnt);    /* va = buf */
> -            paging_mark_dirty(dp, mfn);
> +            paging_mark_dirty(dp, mfn_x(mfn));
>           }
>           else
>           {
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index c89ab6e..f3faf2e 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1796,7 +1796,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>           p2m = hostp2m;
>
>       /* Check access permissions first, then handle faults */
> -    if ( mfn_x(mfn) != INVALID_MFN )
> +    if ( !mfn_eq(mfn, INVALID_MFN) )
>       {
>           bool_t violation;
>
> @@ -5299,8 +5299,8 @@ static int do_altp2m_op(
>               rc = -EINVAL;
>
>           if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> -             (mfn_x(get_gfn_query_unlocked(curr->domain,
> -                    a.u.enable_notify.gfn, &p2mt)) == INVALID_MFN) )
> +             mfn_eq(get_gfn_query_unlocked(curr->domain,
> +                    a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
>               return -EINVAL;
>
>           vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 8253fd0..1734b7e 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -195,8 +195,8 @@ static void enable_hypercall_page(struct domain *d)
>       {
>           if ( page )
>               put_page(page);
> -        gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> -                 page ? page_to_mfn(page) : INVALID_MFN);
> +        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> +                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
>           return;
>       }
>
> @@ -268,8 +268,8 @@ static void initialize_apic_assist(struct vcpu *v)
>       return;
>
>    fail:
> -    gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> -             page ? page_to_mfn(page) : INVALID_MFN);
> +    gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", gmfn,
> +             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
>   }
>
>   static void teardown_apic_assist(struct vcpu *v)
> @@ -348,8 +348,8 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
>       {
>           if ( page )
>               put_page(page);
> -        gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> -                 page ? page_to_mfn(page) : INVALID_MFN);
> +        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> +                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
>           return;
>       }
>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index df19579..a061420 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2025,7 +2025,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
>
>               mfn = get_gfn_query_unlocked(d, gfn_x(vcpu_altp2m(v).veinfo_gfn), &t);
>
> -            if ( mfn_x(mfn) != INVALID_MFN )
> +            if ( !mfn_eq(mfn, INVALID_MFN) )
>                   __vmwrite(VIRT_EXCEPTION_INFO, mfn_x(mfn) << PAGE_SHIFT);
>               else
>                   v->arch.hvm_vmx.secondary_exec_control &=
> diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
> index e850502..868e909 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -281,7 +281,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>           start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
>                        ((va >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
>           gw->l1e = guest_l1e_from_gfn(start, flags);
> -        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
> +        gw->l2mfn = gw->l1mfn = INVALID_MFN;
>           goto set_ad;
>       }
>
> @@ -356,7 +356,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>           start = _gfn((gfn_x(start) & ~GUEST_L2_GFN_MASK) +
>                        guest_l1_table_offset(va));
>           gw->l1e = guest_l1e_from_gfn(start, flags);
> -        gw->l1mfn = _mfn(INVALID_MFN);
> +        gw->l1mfn = INVALID_MFN;
>       }
>       else
>       {
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index 9c2cd49..3218fa2 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -430,7 +430,7 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
>    oom:
>       HAP_ERROR("out of memory building monitor pagetable\n");
>       domain_crash(d);
> -    return _mfn(INVALID_MFN);
> +    return INVALID_MFN;
>   }
>
>   static void hap_destroy_monitor_table(struct vcpu* v, mfn_t mmfn)
> @@ -509,7 +509,7 @@ int hap_enable(struct domain *d, u32 mode)
>           }
>
>           for ( i = 0; i < MAX_EPTP; i++ )
> -            d->arch.altp2m_eptp[i] = INVALID_MFN;
> +            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
>
>           for ( i = 0; i < MAX_ALTP2M; i++ )
>           {
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 7166c71..6d03736 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -50,7 +50,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>                                     int level)
>   {
>       int rc;
> -    unsigned long oldmfn = INVALID_MFN;
> +    unsigned long oldmfn = mfn_x(INVALID_MFN);
>       bool_t check_foreign = (new.mfn != entryptr->mfn ||
>                               new.sa_p2mt != entryptr->sa_p2mt);
>
> @@ -91,7 +91,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
>
>       write_atomic(&entryptr->epte, new.epte);
>
> -    if ( unlikely(oldmfn != INVALID_MFN) )
> +    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
>           put_page(mfn_to_page(oldmfn));
>
>       rc = 0;
> @@ -887,7 +887,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>       int i;
>       int ret = 0;
>       bool_t recalc = 0;
> -    mfn_t mfn = _mfn(INVALID_MFN);
> +    mfn_t mfn = INVALID_MFN;
>       struct ept_data *ept = &p2m->ept;
>
>       *t = p2m_mmio_dm;
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index b7ab169..f384589 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -559,7 +559,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>       {
>           /* All PoD: Mark the whole region invalid and tell caller
>            * we're done. */
> -        p2m_set_entry(p2m, gpfn, _mfn(INVALID_MFN), order, p2m_invalid,
> +        p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
>                         p2m->default_access);
>           p2m->pod.entry_count-=(1<<order);
>           BUG_ON(p2m->pod.entry_count < 0);
> @@ -602,7 +602,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>           n = 1UL << cur_order;
>           if ( t == p2m_populate_on_demand )
>           {
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
>                             p2m_invalid, p2m->default_access);
>               p2m->pod.entry_count -= n;
>               BUG_ON(p2m->pod.entry_count < 0);
> @@ -624,7 +624,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>
>               page = mfn_to_page(mfn);
>
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
>                             p2m_invalid, p2m->default_access);
>               p2m_tlb_flush_sync(p2m);
>               for ( j = 0; j < n; ++j )
> @@ -671,7 +671,7 @@ void p2m_pod_dump_data(struct domain *d)
>   static int
>   p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>   {
> -    mfn_t mfn, mfn0 = _mfn(INVALID_MFN);
> +    mfn_t mfn, mfn0 = INVALID_MFN;
>       p2m_type_t type, type0 = 0;
>       unsigned long * map = NULL;
>       int ret=0, reset = 0;
> @@ -754,7 +754,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>       }
>
>       /* Try to remove the page, restoring old mapping if it fails. */
> -    p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_2M,
> +    p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_2M,
>                     p2m_populate_on_demand, p2m->default_access);
>       p2m_tlb_flush_sync(p2m);
>
> @@ -871,7 +871,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>           }
>
>           /* Try to remove the page, restoring old mapping if it fails. */
> -        p2m_set_entry(p2m, gfns[i], _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +        p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
>                         p2m_populate_on_demand, p2m->default_access);
>
>           /* See if the page was successfully unmapped.  (Allow one refcount
> @@ -1073,7 +1073,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>            * NOTE: In a fine-grained p2m locking scenario this operation
>            * may need to promote its locking from gfn->1g superpage
>            */
> -        p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
> +        p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
>                         p2m_populate_on_demand, p2m->default_access);
>           return 0;
>       }
> @@ -1157,7 +1157,7 @@ remap_and_retry:
>        * need promoting the gfn lock from gfn->2M superpage */
>       gfn_aligned = (gfn>>order)<<order;
>       for(i=0; i<(1<<order); i++)
> -        p2m_set_entry(p2m, gfn_aligned + i, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +        p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
>                         p2m_populate_on_demand, p2m->default_access);
>       if ( tb_init_done )
>       {
> @@ -1215,7 +1215,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
>       }
>
>       /* Now, actually do the two-way mapping */
> -    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order,
> +    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
>                          p2m_populate_on_demand, p2m->default_access);
>       if ( rc == 0 )
>       {
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 4980934..2b6e89e 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -511,7 +511,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>        * the intermediate one might be).
>        */
>       unsigned int flags, iommu_old_flags = 0;
> -    unsigned long old_mfn = INVALID_MFN;
> +    unsigned long old_mfn = mfn_x(INVALID_MFN);
>
>       ASSERT(sve != 0);
>
> @@ -764,7 +764,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
>                        p2m->max_mapped_pfn )
>                       break;
>           }
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>       }
>
>       mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
> @@ -777,7 +777,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
>           if ( (l4e_get_flags(*l4e) & _PAGE_PRESENT) == 0 )
>           {
>               unmap_domain_page(l4e);
> -            return _mfn(INVALID_MFN);
> +            return INVALID_MFN;
>           }
>           mfn = _mfn(l4e_get_pfn(*l4e));
>           recalc = needs_recalc(l4, *l4e);
> @@ -805,7 +805,7 @@ pod_retry_l3:
>                       *t = p2m_populate_on_demand;
>               }
>               unmap_domain_page(l3e);
> -            return _mfn(INVALID_MFN);
> +            return INVALID_MFN;
>           }
>           if ( flags & _PAGE_PSE )
>           {
> @@ -817,7 +817,7 @@ pod_retry_l3:
>               unmap_domain_page(l3e);
>
>               ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
> -            return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
> +            return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
>           }
>
>           mfn = _mfn(l3e_get_pfn(*l3e));
> @@ -846,7 +846,7 @@ pod_retry_l2:
>           }
>
>           unmap_domain_page(l2e);
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>       }
>       if ( flags & _PAGE_PSE )
>       {
> @@ -856,7 +856,7 @@ pod_retry_l2:
>           unmap_domain_page(l2e);
>
>           ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
> -        return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
> +        return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
>       }
>
>       mfn = _mfn(l2e_get_pfn(*l2e));
> @@ -885,14 +885,14 @@ pod_retry_l1:
>           }
>
>           unmap_domain_page(l1e);
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>       }
>       mfn = _mfn(l1e_get_pfn(*l1e));
>       *t = recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
>       unmap_domain_page(l1e);
>
>       ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
> -    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : _mfn(INVALID_MFN);
> +    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : INVALID_MFN;
>   }
>
>   static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 6258a5b..b93c8a2 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -388,7 +388,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
>       if (unlikely((p2m_is_broken(*t))))
>       {
>           /* Return invalid_mfn to avoid caller's access */
> -        mfn = _mfn(INVALID_MFN);
> +        mfn = INVALID_MFN;
>           if ( q & P2M_ALLOC )
>               domain_crash(p2m->domain);
>       }
> @@ -493,8 +493,8 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>               rc = set_rc;
>
>           gfn += 1ul << order;
> -        if ( mfn_x(mfn) != INVALID_MFN )
> -            mfn = _mfn(mfn_x(mfn) + (1ul << order));
> +        if ( !mfn_eq(mfn, INVALID_MFN) )
> +            mfn = mfn_add(mfn, 1ul << order);
>           todo -= 1ul << order;
>       }
>
> @@ -580,7 +580,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
>
>       /* Initialise physmap tables for slot zero. Other code assumes this. */
>       p2m->defer_nested_flush = 1;
> -    rc = p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +    rc = p2m_set_entry(p2m, 0, INVALID_MFN, PAGE_ORDER_4K,
>                          p2m_invalid, p2m->default_access);
>       p2m->defer_nested_flush = 0;
>       p2m_unlock(p2m);
> @@ -670,7 +670,7 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
>               ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
>           }
>       }
> -    return p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid,
> +    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
>                            p2m->default_access);
>   }
>
> @@ -840,7 +840,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
>       {
>           gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
>                    gfn_x(gfn), mfn_x(mfn));
> -        rc = p2m_set_entry(p2m, gfn_x(gfn), _mfn(INVALID_MFN), page_order,
> +        rc = p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, page_order,
>                              p2m_invalid, p2m->default_access);
>           if ( rc == 0 )
>           {
> @@ -1107,7 +1107,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
>       }
>
>       /* Do not use mfn_valid() here as it will usually fail for MMIO pages. */
> -    if ( (INVALID_MFN == mfn_x(actual_mfn)) || (t != p2m_mmio_direct) )
> +    if ( mfn_eq(actual_mfn, INVALID_MFN) || (t != p2m_mmio_direct) )
>       {
>           gdprintk(XENLOG_ERR,
>                    "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t);
> @@ -1117,7 +1117,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
>           gdprintk(XENLOG_WARNING,
>                    "no mapping between mfn %08lx and gfn %08lx\n",
>                    mfn_x(mfn), gfn);
> -    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order, p2m_invalid,
> +    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid,
>                          p2m->default_access);
>
>    out:
> @@ -1146,7 +1146,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
>       mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
>       if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
>       {
> -        ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +        ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
>                               p2m_invalid, p2m->default_access);
>           gfn_unlock(p2m, gfn, 0);
>       }
> @@ -1316,7 +1316,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
>           put_page(page);
>
>       /* Remove mapping from p2m table */
> -    ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> +    ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
>                           p2m_ram_paged, a);
>
>       /* Clear content before returning the page to Xen */
> @@ -1844,7 +1844,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>       if ( altp2m_idx )
>       {
>           if ( altp2m_idx >= MAX_ALTP2M ||
> -             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
> +             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
>               return -EINVAL;
>
>           ap2m = d->arch.altp2m_p2m[altp2m_idx];
> @@ -1942,7 +1942,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
>       mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
>       gfn_unlock(p2m, gfn, 0);
>
> -    if ( mfn_x(mfn) == INVALID_MFN )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>           return -ESRCH;
>
>       if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
> @@ -2288,7 +2288,7 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
>
>       for ( i = 0; i < MAX_ALTP2M; i++ )
>       {
> -        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
> +        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
>               continue;
>
>           p2m = d->arch.altp2m_p2m[i];
> @@ -2315,7 +2315,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
>
>       altp2m_list_lock(d);
>
> -    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
>       {
>           if ( idx != vcpu_altp2m(v).p2midx )
>           {
> @@ -2359,14 +2359,14 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
>                                 0, &page_order);
>       __put_gfn(*ap2m, gfn_x(gfn));
>
> -    if ( mfn_x(mfn) != INVALID_MFN )
> +    if ( !mfn_eq(mfn, INVALID_MFN) )
>           return 0;
>
>       mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
>                                 P2M_ALLOC | P2M_UNSHARE, &page_order);
>       __put_gfn(hp2m, gfn_x(gfn));
>
> -    if ( mfn_x(mfn) == INVALID_MFN )
> +    if ( mfn_eq(mfn, INVALID_MFN) )
>           return 0;
>
>       p2m_lock(*ap2m);
> @@ -2404,7 +2404,7 @@ void p2m_flush_altp2m(struct domain *d)
>           /* Uninit and reinit ept to force TLB shootdown */
>           ept_p2m_uninit(d->arch.altp2m_p2m[i]);
>           ept_p2m_init(d->arch.altp2m_p2m[i]);
> -        d->arch.altp2m_eptp[i] = INVALID_MFN;
> +        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
>       }
>
>       altp2m_list_unlock(d);
> @@ -2431,7 +2431,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>
>       altp2m_list_lock(d);
>
> -    if ( d->arch.altp2m_eptp[idx] == INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
>       {
>           p2m_init_altp2m_helper(d, idx);
>           rc = 0;
> @@ -2450,7 +2450,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
>
>       for ( i = 0; i < MAX_ALTP2M; i++ )
>       {
> -        if ( d->arch.altp2m_eptp[i] != INVALID_MFN )
> +        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
>               continue;
>
>           p2m_init_altp2m_helper(d, i);
> @@ -2476,7 +2476,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>
>       altp2m_list_lock(d);
>
> -    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
>       {
>           p2m = d->arch.altp2m_p2m[idx];
>
> @@ -2486,7 +2486,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>               /* Uninit and reinit ept to force TLB shootdown */
>               ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
>               ept_p2m_init(d->arch.altp2m_p2m[idx]);
> -            d->arch.altp2m_eptp[idx] = INVALID_MFN;
> +            d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
>               rc = 0;
>           }
>       }
> @@ -2510,7 +2510,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
>
>       altp2m_list_lock(d);
>
> -    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> +    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
>       {
>           for_each_vcpu( d, v )
>               if ( idx != vcpu_altp2m(v).p2midx )
> @@ -2541,7 +2541,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>       unsigned int page_order;
>       int rc = -EINVAL;
>
> -    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == INVALID_MFN )
> +    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
>           return rc;
>
>       hp2m = p2m_get_hostp2m(d);
> @@ -2636,14 +2636,14 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
>
>       for ( i = 0; i < MAX_ALTP2M; i++ )
>       {
> -        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
> +        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
>               continue;
>
>           p2m = d->arch.altp2m_p2m[i];
>           m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
>
>           /* Check for a dropped page that may impact this altp2m */
> -        if ( mfn_x(mfn) == INVALID_MFN &&
> +        if ( mfn_eq(mfn, INVALID_MFN) &&
>                gfn_x(gfn) >= p2m->min_remapped_gfn &&
>                gfn_x(gfn) <= p2m->max_remapped_gfn )
>           {
> @@ -2660,7 +2660,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
>                   for ( i = 0; i < MAX_ALTP2M; i++ )
>                   {
>                       if ( i == last_reset_idx ||
> -                         d->arch.altp2m_eptp[i] == INVALID_MFN )
> +                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
>                           continue;
>
>                       p2m = d->arch.altp2m_p2m[i];
> @@ -2672,7 +2672,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
>                   goto out;
>               }
>           }
> -        else if ( mfn_x(m) != INVALID_MFN )
> +        else if ( !mfn_eq(m, INVALID_MFN) )
>               p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, p2mt, p2ma);
>
>           __put_gfn(p2m, gfn_x(gfn));
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index 8219bb6..107fc8c 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -67,7 +67,7 @@ static mfn_t paging_new_log_dirty_page(struct domain *d)
>       if ( unlikely(page == NULL) )
>       {
>           d->arch.paging.log_dirty.failed_allocs++;
> -        return _mfn(INVALID_MFN);
> +        return INVALID_MFN;
>       }
>
>       d->arch.paging.log_dirty.allocs++;
> @@ -95,7 +95,7 @@ static mfn_t paging_new_log_dirty_node(struct domain *d)
>           int i;
>           mfn_t *node = map_domain_page(mfn);
>           for ( i = 0; i < LOGDIRTY_NODE_ENTRIES; i++ )
> -            node[i] = _mfn(INVALID_MFN);
> +            node[i] = INVALID_MFN;
>           unmap_domain_page(node);
>       }
>       return mfn;
> @@ -167,7 +167,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
>
>               unmap_domain_page(l2);
>               paging_free_log_dirty_page(d, l3[i3]);
> -            l3[i3] = _mfn(INVALID_MFN);
> +            l3[i3] = INVALID_MFN;
>
>               if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
>               {
> @@ -182,7 +182,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
>           if ( rc )
>               break;
>           paging_free_log_dirty_page(d, l4[i4]);
> -        l4[i4] = _mfn(INVALID_MFN);
> +        l4[i4] = INVALID_MFN;
>
>           if ( i4 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
>           {
> @@ -198,7 +198,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
>       if ( !rc )
>       {
>           paging_free_log_dirty_page(d, d->arch.paging.log_dirty.top);
> -        d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> +        d->arch.paging.log_dirty.top = INVALID_MFN;
>
>           ASSERT(d->arch.paging.log_dirty.allocs == 0);
>           d->arch.paging.log_dirty.failed_allocs = 0;
> @@ -660,7 +660,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags)
>       /* This must be initialized separately from the rest of the
>        * log-dirty init code as that can be called more than once and we
>        * don't want to leak any active log-dirty bitmaps */
> -    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> +    d->arch.paging.log_dirty.top = INVALID_MFN;
>
>       /*
>        * Shadow pagetables are the default, but we will use
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 226e32d..1c0b6cd 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -88,10 +88,10 @@ void shadow_vcpu_init(struct vcpu *v)
>
>       for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
>       {
> -        v->arch.paging.shadow.oos[i] = _mfn(INVALID_MFN);
> -        v->arch.paging.shadow.oos_snapshot[i] = _mfn(INVALID_MFN);
> +        v->arch.paging.shadow.oos[i] = INVALID_MFN;
> +        v->arch.paging.shadow.oos_snapshot[i] = INVALID_MFN;
>           for ( j = 0; j < SHADOW_OOS_FIXUPS; j++ )
> -            v->arch.paging.shadow.oos_fixup[i].smfn[j] = _mfn(INVALID_MFN);
> +            v->arch.paging.shadow.oos_fixup[i].smfn[j] = INVALID_MFN;
>       }
>   #endif
>
> @@ -593,12 +593,12 @@ static inline int oos_fixup_flush_gmfn(struct vcpu *v, mfn_t gmfn,
>       int i;
>       for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
>       {
> -        if ( mfn_x(fixup->smfn[i]) != INVALID_MFN )
> +        if ( !mfn_eq(fixup->smfn[i], INVALID_MFN) )
>           {
>               sh_remove_write_access_from_sl1p(d, gmfn,
>                                                fixup->smfn[i],
>                                                fixup->off[i]);
> -            fixup->smfn[i] = _mfn(INVALID_MFN);
> +            fixup->smfn[i] = INVALID_MFN;
>           }
>       }
>
> @@ -636,7 +636,7 @@ void oos_fixup_add(struct domain *d, mfn_t gmfn,
>
>               next = oos_fixup[idx].next;
>
> -            if ( mfn_x(oos_fixup[idx].smfn[next]) != INVALID_MFN )
> +            if ( !mfn_eq(oos_fixup[idx].smfn[next], INVALID_MFN) )
>               {
>                   TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_EVICT);
>
> @@ -757,7 +757,7 @@ static void oos_hash_add(struct vcpu *v, mfn_t gmfn)
>       struct oos_fixup fixup = { .next = 0 };
>
>       for (i = 0; i < SHADOW_OOS_FIXUPS; i++ )
> -        fixup.smfn[i] = _mfn(INVALID_MFN);
> +        fixup.smfn[i] = INVALID_MFN;
>
>       idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
>       oidx = idx;
> @@ -807,7 +807,7 @@ static void oos_hash_remove(struct domain *d, mfn_t gmfn)
>               idx = (idx + 1) % SHADOW_OOS_PAGES;
>           if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
>           {
> -            oos[idx] = _mfn(INVALID_MFN);
> +            oos[idx] = INVALID_MFN;
>               return;
>           }
>       }
> @@ -838,7 +838,6 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
>
>       SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
>       BUG();
> -    return _mfn(INVALID_MFN);
>   }
>
>   /* Pull a single guest page back into sync */
> @@ -862,7 +861,7 @@ void sh_resync(struct domain *d, mfn_t gmfn)
>           if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
>           {
>               _sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
> -            oos[idx] = _mfn(INVALID_MFN);
> +            oos[idx] = INVALID_MFN;
>               return;
>           }
>       }
> @@ -914,7 +913,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
>           {
>               /* Write-protect and sync contents */
>               _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
> -            oos[idx] = _mfn(INVALID_MFN);
> +            oos[idx] = INVALID_MFN;
>           }
>
>    resync_others:
> @@ -948,7 +947,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
>               {
>                   /* Write-protect and sync contents */
>                   _sh_resync(other, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
> -                oos[idx] = _mfn(INVALID_MFN);
> +                oos[idx] = INVALID_MFN;
>               }
>           }
>       }
> @@ -1784,7 +1783,7 @@ void *sh_emulate_map_dest(struct vcpu *v, unsigned long vaddr,
>       if ( likely(((vaddr + bytes - 1) & PAGE_MASK) == (vaddr & PAGE_MASK)) )
>       {
>           /* Whole write fits on a single page. */
> -        sh_ctxt->mfn[1] = _mfn(INVALID_MFN);
> +        sh_ctxt->mfn[1] = INVALID_MFN;
>           map = map_domain_page(sh_ctxt->mfn[0]) + (vaddr & ~PAGE_MASK);
>       }
>       else if ( !is_hvm_domain(d) )
> @@ -2086,7 +2085,7 @@ mfn_t shadow_hash_lookup(struct domain *d, unsigned long n, unsigned int t)
>       }
>
>       perfc_incr(shadow_hash_lookup_miss);
> -    return _mfn(INVALID_MFN);
> +    return INVALID_MFN;
>   }
>
>   void shadow_hash_insert(struct domain *d, unsigned long n, unsigned int t,
> @@ -2910,7 +2909,7 @@ void sh_reset_l3_up_pointers(struct vcpu *v)
>       };
>       static const unsigned int callback_mask = SHF_L3_64;
>
> -    hash_vcpu_foreach(v, callback_mask, callbacks, _mfn(INVALID_MFN));
> +    hash_vcpu_foreach(v, callback_mask, callbacks, INVALID_MFN);
>   }
>
>
> @@ -2940,7 +2939,7 @@ static void sh_update_paging_modes(struct vcpu *v)
>   #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB) */
>
>   #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
> -    if ( mfn_x(v->arch.paging.shadow.oos_snapshot[0]) == INVALID_MFN )
> +    if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
>       {
>           int i;
>           for(i = 0; i < SHADOW_OOS_PAGES; i++)
> @@ -3284,7 +3283,7 @@ void shadow_teardown(struct domain *d, int *preempted)
>                   if ( mfn_valid(oos_snapshot[i]) )
>                   {
>                       shadow_free(d, oos_snapshot[i]);
> -                    oos_snapshot[i] = _mfn(INVALID_MFN);
> +                    oos_snapshot[i] = INVALID_MFN;
>                   }
>           }
>   #endif /* OOS */
> @@ -3449,7 +3448,7 @@ static int shadow_one_bit_disable(struct domain *d, u32 mode)
>                       if ( mfn_valid(oos_snapshot[i]) )
>                       {
>                           shadow_free(d, oos_snapshot[i]);
> -                        oos_snapshot[i] = _mfn(INVALID_MFN);
> +                        oos_snapshot[i] = INVALID_MFN;
>                       }
>               }
>   #endif /* OOS */
> @@ -3744,7 +3743,7 @@ int shadow_track_dirty_vram(struct domain *d,
>           memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
>       else
>       {
> -        unsigned long map_mfn = INVALID_MFN;
> +        unsigned long map_mfn = mfn_x(INVALID_MFN);
>           void *map_sl1p = NULL;
>
>           /* Iterate over VRAM to track dirty bits. */
> @@ -3754,7 +3753,7 @@ int shadow_track_dirty_vram(struct domain *d,
>               int dirty = 0;
>               paddr_t sl1ma = dirty_vram->sl1ma[i];
>
> -            if (mfn_x(mfn) == INVALID_MFN)
> +            if ( mfn_eq(mfn, INVALID_MFN) )
>               {
>                   dirty = 1;
>               }
> @@ -3830,7 +3829,7 @@ int shadow_track_dirty_vram(struct domain *d,
>               for ( i = begin_pfn; i < end_pfn; i++ )
>               {
>                   mfn_t mfn = get_gfn_query_unlocked(d, i, &t);
> -                if ( mfn_x(mfn) != INVALID_MFN )
> +                if ( !mfn_eq(mfn, INVALID_MFN) )
>                       flush_tlb |= sh_remove_write_access(d, mfn, 1, 0);
>               }
>               dirty_vram->last_dirty = -1;
> @@ -3968,7 +3967,7 @@ void shadow_audit_tables(struct vcpu *v)
>           }
>       }
>
> -    hash_vcpu_foreach(v, mask, callbacks, _mfn(INVALID_MFN));
> +    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN_T);
>   }
>
>   #endif /* Shadow audit */
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index dfe59a2..f892e2f 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -177,7 +177,7 @@ sh_walk_guest_tables(struct vcpu *v, unsigned long va, walk_t *gw,
>   {
>       return guest_walk_tables(v, p2m_get_hostp2m(v->domain), va, gw, pfec,
>   #if GUEST_PAGING_LEVELS == 3 /* PAE */
> -                             _mfn(INVALID_MFN),
> +                             INVALID_MFN,
>                                v->arch.paging.shadow.gl3e
>   #else /* 32 or 64 */
>                                pagetable_get_mfn(v->arch.guest_table),
> @@ -336,32 +336,32 @@ static void sh_audit_gw(struct vcpu *v, walk_t *gw)
>       if ( mfn_valid(gw->l4mfn)
>            && mfn_valid((smfn = get_shadow_status(d, gw->l4mfn,
>                                                   SH_type_l4_shadow))) )
> -        (void) sh_audit_l4_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_l4_table(v, smfn, INVALID_MFN);
>       if ( mfn_valid(gw->l3mfn)
>            && mfn_valid((smfn = get_shadow_status(d, gw->l3mfn,
>                                                   SH_type_l3_shadow))) )
> -        (void) sh_audit_l3_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_l3_table(v, smfn, INVALID_MFN);
>   #endif /* PAE or 64... */
>       if ( mfn_valid(gw->l2mfn) )
>       {
>           if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
>                                                    SH_type_l2_shadow))) )
> -            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
> +            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
>   #if GUEST_PAGING_LEVELS == 3
>           if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
>                                                    SH_type_l2h_shadow))) )
> -            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
> +            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
>   #endif
>       }
>       if ( mfn_valid(gw->l1mfn)
>            && mfn_valid((smfn = get_shadow_status(d, gw->l1mfn,
>                                                   SH_type_l1_shadow))) )
> -        (void) sh_audit_l1_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_l1_table(v, smfn, INVALID_MFN);
>       else if ( (guest_l2e_get_flags(gw->l2e) & _PAGE_PRESENT)
>                 && (guest_l2e_get_flags(gw->l2e) & _PAGE_PSE)
>                 && mfn_valid(
>                 (smfn = get_fl1_shadow_status(d, guest_l2e_get_gfn(gw->l2e)))) )
> -        (void) sh_audit_fl1_table(v, smfn, _mfn(INVALID_MFN));
> +        (void) sh_audit_fl1_table(v, smfn, INVALID_MFN);
>   }
>
>   #else
> @@ -1752,7 +1752,7 @@ static shadow_l2e_t * shadow_get_and_create_l2e(struct vcpu *v,
>   {
>   #if GUEST_PAGING_LEVELS >= 4 /* 64bit... */
>       struct domain *d = v->domain;
> -    mfn_t sl3mfn = _mfn(INVALID_MFN);
> +    mfn_t sl3mfn = INVALID_MFN;
>       shadow_l3e_t *sl3e;
>       if ( !mfn_valid(gw->l2mfn) ) return NULL; /* No guest page. */
>       /* Get the l3e */
> @@ -2158,7 +2158,7 @@ static int validate_gl4e(struct vcpu *v, void *new_ge, mfn_t sl4mfn, void *se)
>       shadow_l4e_t new_sl4e;
>       guest_l4e_t new_gl4e = *(guest_l4e_t *)new_ge;
>       shadow_l4e_t *sl4p = se;
> -    mfn_t sl3mfn = _mfn(INVALID_MFN);
> +    mfn_t sl3mfn = INVALID_MFN;
>       struct domain *d = v->domain;
>       p2m_type_t p2mt;
>       int result = 0;
> @@ -2217,7 +2217,7 @@ static int validate_gl3e(struct vcpu *v, void *new_ge, mfn_t sl3mfn, void *se)
>       shadow_l3e_t new_sl3e;
>       guest_l3e_t new_gl3e = *(guest_l3e_t *)new_ge;
>       shadow_l3e_t *sl3p = se;
> -    mfn_t sl2mfn = _mfn(INVALID_MFN);
> +    mfn_t sl2mfn = INVALID_MFN;
>       p2m_type_t p2mt;
>       int result = 0;
>
> @@ -2250,7 +2250,7 @@ static int validate_gl2e(struct vcpu *v, void *new_ge, mfn_t sl2mfn, void *se)
>       shadow_l2e_t new_sl2e;
>       guest_l2e_t new_gl2e = *(guest_l2e_t *)new_ge;
>       shadow_l2e_t *sl2p = se;
> -    mfn_t sl1mfn = _mfn(INVALID_MFN);
> +    mfn_t sl1mfn = INVALID_MFN;
>       p2m_type_t p2mt;
>       int result = 0;
>
> @@ -2608,7 +2608,7 @@ static inline void check_for_early_unshadow(struct vcpu *v, mfn_t gmfn)
>   static inline void reset_early_unshadow(struct vcpu *v)
>   {
>   #if SHADOW_OPTIMIZATIONS & SHOPT_EARLY_UNSHADOW
> -    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = INVALID_MFN;
> +    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = mfn_x(INVALID_MFN);
>   #endif
>   }
>
> @@ -4105,10 +4105,10 @@ sh_update_cr3(struct vcpu *v, int do_locking)
>                                              ? SH_type_l2h_shadow
>                                              : SH_type_l2_shadow);
>                   else
> -                    sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
> +                    sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
>               }
>               else
> -                sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
> +                sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
>           }
>       }
>   #elif GUEST_PAGING_LEVELS == 4
> @@ -4531,7 +4531,7 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
>
>           if ( fast_path ) {
>               if ( pagetable_is_null(v->arch.shadow_table[i]) )
> -                smfn = _mfn(INVALID_MFN);
> +                smfn = INVALID_MFN;
>               else
>                   smfn = _mfn(pagetable_get_pfn(v->arch.shadow_table[i]));
>           }
> @@ -4540,8 +4540,8 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
>               /* retrieving the l2s */
>               gmfn = get_gfn_query_unlocked(d, gfn_x(guest_l3e_get_gfn(gl3e[i])),
>                                             &p2mt);
> -            smfn = unlikely(mfn_x(gmfn) == INVALID_MFN)
> -                   ? _mfn(INVALID_MFN)
> +            smfn = unlikely(mfn_eq(gmfn, INVALID_MFN))
> +                   ? INVALID_MFN
>                      : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
>           }
>
> @@ -4846,7 +4846,7 @@ int sh_audit_fl1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
>   {
>       guest_l1e_t *gl1e, e;
>       shadow_l1e_t *sl1e;
> -    mfn_t gl1mfn = _mfn(INVALID_MFN);
> +    mfn_t gl1mfn = INVALID_MFN;
>       int f;
>       int done = 0;
>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 45273d4..42c07ee 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -117,7 +117,7 @@ static void vcpu_info_reset(struct vcpu *v)
>       v->vcpu_info = ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
>                       ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
>                       : &dummy_vcpu_info);
> -    v->vcpu_info_mfn = INVALID_MFN;
> +    v->vcpu_info_mfn = mfn_x(INVALID_MFN);
>   }
>
>   struct vcpu *alloc_vcpu(
> @@ -1141,7 +1141,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
>       if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
>           return -EINVAL;
>
> -    if ( v->vcpu_info_mfn != INVALID_MFN )
> +    if ( v->vcpu_info_mfn != mfn_x(INVALID_MFN) )
>           return -EINVAL;
>
>       /* Run this command on yourself or on other offline VCPUS. */
> @@ -1205,7 +1205,7 @@ void unmap_vcpu_info(struct vcpu *v)
>   {
>       unsigned long mfn;
>
> -    if ( v->vcpu_info_mfn == INVALID_MFN )
> +    if ( v->vcpu_info_mfn == mfn_x(INVALID_MFN) )
>           return;
>
>       mfn = v->vcpu_info_mfn;
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 3f15543..ecace07 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -244,7 +244,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
>                                 (readonly) ? P2M_ALLOC : P2M_UNSHARE);
>       if ( !(*page) )
>       {
> -        *frame = INVALID_MFN;
> +        *frame = mfn_x(INVALID_MFN);
>           if ( p2m_is_shared(p2mt) )
>               return GNTST_eagain;
>           if ( p2m_is_paging(p2mt) )
> @@ -260,7 +260,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
>       *page = mfn_valid(*frame) ? mfn_to_page(*frame) : NULL;
>       if ( (!(*page)) || (!get_page(*page, rd)) )
>       {
> -        *frame = INVALID_MFN;
> +        *frame = mfn_x(INVALID_MFN);
>           *page = NULL;
>           rc = GNTST_bad_page;
>       }
> @@ -1785,7 +1785,7 @@ gnttab_transfer(
>               p2m_type_t __p2mt;
>               mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &__p2mt));
>               if ( p2m_is_shared(__p2mt) || !p2m_is_valid(__p2mt) )
> -                mfn = INVALID_MFN;
> +                mfn = mfn_x(INVALID_MFN);
>           }
>   #else
>           mfn = mfn_x(gfn_to_mfn(d, _gfn(gop.mfn)));
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index afbb1a1..7f207ec 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -55,7 +55,7 @@
>
>   TYPE_SAFE(unsigned long, mfn);
>   #define PRI_mfn          "05lx"
> -#define INVALID_MFN      (~0UL)
> +#define INVALID_MFN      _mfn(~0UL)
>
>   #ifndef mfn_t
>   #define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
>

-- 
Julien Grall
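
For readers following the conversions above: they all lean on Xen's TYPE_SAFE wrapper, visible in the final mm.h hunk. Below is a hand-expanded, illustrative version of that pattern (the real macro lives in the Xen headers and differs in detail), with mfn_add() and mfn_eq() written out to show why expressions such as mfn_add(mfn, 1ul << order) replace open-coded arithmetic on raw unsigned longs:

    #include <stdbool.h>

    /*
     * Illustrative expansion of the typesafe pattern (debug build): a
     * one-member struct makes mfn_t a distinct type, so accidentally
     * passing a gfn, or a raw unsigned long, becomes a compile error.
     */
    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

    static inline mfn_t mfn_add(mfn_t m, unsigned long i)
    {
        return _mfn(mfn_x(m) + i);
    }

    static inline bool mfn_eq(mfn_t x, mfn_t y)
    {
        return mfn_x(x) == mfn_x(y);
    }

    /* With the wrappers in place, the sentinel itself is boxed too. */
    #define INVALID_MFN _mfn(~0UL)

In release builds the wrapper typically degrades to a plain typedef, so the type safety costs nothing at run time.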


* Re: [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN
  2016-07-06 13:01 ` [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN Julien Grall
@ 2016-07-06 13:05   ` Julien Grall
  2016-07-08 22:05     ` Elena Ufimtseva
  0 siblings, 1 reply; 23+ messages in thread
From: Julien Grall @ 2016-07-06 13:05 UTC (permalink / raw)
  To: xen-devel
  Cc: elena.ufimtseva, Kevin Tian, sstabellini, Feng Wu, Jan Beulich,
	Jun Nakajima, Andrew Cooper, Tim Deegan, George Dunlap,
	Paul Durrant, Suravee Suthikulpanit, Boris Ostrovsky

(CC Elena)

On 06/07/16 14:01, Julien Grall wrote:
> Also take the opportunity to convert arch/x86/debug.c to the typesafe gfn.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>
> ---
> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>

I forgot to update the CC list since GDBSX maintainership was taken over 
by Elena. Sorry for that.

Regards,

> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Feng Wu <feng.wu@intel.com>
>
>      Changes in v6:
>          - Add Stefano's acked-by for ARM bits
>          - Remove set of brackets when it is not necessary
>          - Add Andrew's reviewed-by
>
>      Changes in v5:
>          - Patch added
> ---
>   xen/arch/arm/p2m.c                      |  4 ++--
>   xen/arch/x86/debug.c                    | 18 +++++++++---------
>   xen/arch/x86/domain.c                   |  2 +-
>   xen/arch/x86/hvm/emulate.c              |  7 ++++---
>   xen/arch/x86/hvm/hvm.c                  |  6 +++---
>   xen/arch/x86/hvm/ioreq.c                |  8 ++++----
>   xen/arch/x86/hvm/svm/nestedsvm.c        |  2 +-
>   xen/arch/x86/hvm/vmx/vmx.c              |  6 +++---
>   xen/arch/x86/mm/altp2m.c                |  2 +-
>   xen/arch/x86/mm/hap/guest_walk.c        | 10 +++++-----
>   xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
>   xen/arch/x86/mm/p2m-pod.c               |  6 +++---
>   xen/arch/x86/mm/p2m.c                   | 18 +++++++++---------
>   xen/arch/x86/mm/shadow/common.c         |  2 +-
>   xen/arch/x86/mm/shadow/multi.c          |  2 +-
>   xen/arch/x86/mm/shadow/private.h        |  2 +-
>   xen/drivers/passthrough/amd/iommu_map.c |  2 +-
>   xen/drivers/passthrough/vtd/iommu.c     |  4 ++--
>   xen/drivers/passthrough/x86/iommu.c     |  2 +-
>   xen/include/asm-x86/guest_pt.h          |  4 ++--
>   xen/include/asm-x86/p2m.h               |  2 +-
>   xen/include/xen/mm.h                    |  2 +-
>   22 files changed, 57 insertions(+), 56 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d690602..c938dde 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -479,7 +479,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>       }
>
>       /* If request to get default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>       {
>           *access = memaccess[p2m->default_access];
>           return 0;
> @@ -1879,7 +1879,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>       p2m->mem_access_enabled = true;
>
>       /* If request to set default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>       {
>           p2m->default_access = a;
>           return 0;
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 9213ea7..3030022 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -44,8 +44,7 @@ typedef unsigned char dbgbyte_t;
>
>   /* Returns: mfn for the given (hvm guest) vaddr */
>   static mfn_t
> -dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> -                unsigned long *gfn)
> +dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr, gfn_t *gfn)
>   {
>       mfn_t mfn;
>       uint32_t pfec = PFEC_page_present;
> @@ -53,14 +52,14 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>
>       DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
>
> -    *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
> -    if ( *gfn == INVALID_GFN )
> +    *gfn = _gfn(paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec));
> +    if ( gfn_eq(*gfn, INVALID_GFN) )
>       {
>           DBGP2("kdb:bad gfn from gva_to_gfn\n");
>           return INVALID_MFN;
>       }
>
> -    mfn = get_gfn(dp, *gfn, &gfntype);
> +    mfn = get_gfn(dp, gfn_x(*gfn), &gfntype);
>       if ( p2m_is_readonly(gfntype) && toaddr )
>       {
>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> @@ -72,7 +71,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>
>       if ( mfn_eq(mfn, INVALID_MFN) )
>       {
> -        put_gfn(dp, *gfn);
> +        put_gfn(dp, gfn_x(*gfn));
>           *gfn = INVALID_GFN;
>       }
>
> @@ -165,7 +164,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
>           char *va;
>           unsigned long addr = (unsigned long)gaddr;
>           mfn_t mfn;
> -        unsigned long gfn = INVALID_GFN, pagecnt;
> +        gfn_t gfn = INVALID_GFN;
> +        unsigned long pagecnt;
>
>           pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
>
> @@ -189,8 +189,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
>           }
>
>           unmap_domain_page(va);
> -        if ( gfn != INVALID_GFN )
> -            put_gfn(dp, gfn);
> +        if ( !gfn_eq(gfn, INVALID_GFN) )
> +            put_gfn(dp, gfn_x(gfn));
>
>           addr += pagecnt;
>           buf += pagecnt;
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index bb59247..c8c7e2d 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -783,7 +783,7 @@ int arch_domain_soft_reset(struct domain *d)
>        * gfn == INVALID_GFN indicates that the shared_info page was never mapped
>        * to the domain's address space and there is nothing to replace.
>        */
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>           goto exit_put_page;
>
>       if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 855af4d..c55ad7b 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -455,7 +455,7 @@ static int hvmemul_linear_to_phys(
>               return rc;
>           pfn = _paddr >> PAGE_SHIFT;
>       }
> -    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == INVALID_GFN )
> +    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
>       {
>           if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
>               return X86EMUL_RETRY;
> @@ -472,7 +472,8 @@ static int hvmemul_linear_to_phys(
>           npfn = paging_gva_to_gfn(curr, addr, &pfec);
>
>           /* Is it contiguous with the preceding PFNs? If not then we're done. */
> -        if ( (npfn == INVALID_GFN) || (npfn != (pfn + (reverse ? -i : i))) )
> +        if ( (npfn == gfn_x(INVALID_GFN)) ||
> +             (npfn != (pfn + (reverse ? -i : i))) )
>           {
>               if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
>                   return X86EMUL_RETRY;
> @@ -480,7 +481,7 @@ static int hvmemul_linear_to_phys(
>               if ( done == 0 )
>               {
>                   ASSERT(!reverse);
> -                if ( npfn != INVALID_GFN )
> +                if ( npfn != gfn_x(INVALID_GFN) )
>                       return X86EMUL_UNHANDLEABLE;
>                   hvm_inject_page_fault(pfec, addr & PAGE_MASK);
>                   return X86EMUL_EXCEPTION;
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index f3faf2e..bb39d5f 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3039,7 +3039,7 @@ static enum hvm_copy_result __hvm_copy(
>           if ( flags & HVMCOPY_virt )
>           {
>               gfn = paging_gva_to_gfn(curr, addr, &pfec);
> -            if ( gfn == INVALID_GFN )
> +            if ( gfn == gfn_x(INVALID_GFN) )
>               {
>                   if ( pfec & PFEC_page_paged )
>                       return HVMCOPY_gfn_paged_out;
> @@ -3154,7 +3154,7 @@ static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
>           count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
>
>           gfn = paging_gva_to_gfn(curr, addr, &pfec);
> -        if ( gfn == INVALID_GFN )
> +        if ( gfn == gfn_x(INVALID_GFN) )
>           {
>               if ( pfec & PFEC_page_paged )
>                   return HVMCOPY_gfn_paged_out;
> @@ -5298,7 +5298,7 @@ static int do_altp2m_op(
>                a.u.enable_notify.vcpu_id != curr->vcpu_id )
>               rc = -EINVAL;
>
> -        if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> +        if ( !gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN) ||
>                mfn_eq(get_gfn_query_unlocked(curr->domain,
>                       a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
>               return -EINVAL;
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 7148ac4..d2245e2 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -204,7 +204,7 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
>   {
>       unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>
> -    if ( gmfn != INVALID_GFN )
> +    if ( gmfn != gfn_x(INVALID_GFN) )
>           set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
>   }
>
> @@ -420,7 +420,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
>       if ( rc )
>           return rc;
>
> -    if ( bufioreq_pfn != INVALID_GFN )
> +    if ( bufioreq_pfn != gfn_x(INVALID_GFN) )
>           rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
>
>       if ( rc )
> @@ -434,8 +434,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
>                                           bool_t handle_bufioreq)
>   {
>       struct domain *d = s->domain;
> -    unsigned long ioreq_pfn = INVALID_GFN;
> -    unsigned long bufioreq_pfn = INVALID_GFN;
> +    unsigned long ioreq_pfn = gfn_x(INVALID_GFN);
> +    unsigned long bufioreq_pfn = gfn_x(INVALID_GFN);
>       int rc;
>
>       if ( is_default )
> diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
> index 9d2ac09..f9b38ab 100644
> --- a/xen/arch/x86/hvm/svm/nestedsvm.c
> +++ b/xen/arch/x86/hvm/svm/nestedsvm.c
> @@ -1200,7 +1200,7 @@ nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
>       /* Walk the guest-supplied NPT table, just as if it were a pagetable */
>       gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
>
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>           return NESTEDHVM_PAGEFAULT_INJECT;
>
>       *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index a061420..088f454 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2057,13 +2057,13 @@ static int vmx_vcpu_emulate_vmfunc(struct cpu_user_regs *regs)
>   static bool_t vmx_vcpu_emulate_ve(struct vcpu *v)
>   {
>       bool_t rc = 0, writable;
> -    unsigned long gfn = gfn_x(vcpu_altp2m(v).veinfo_gfn);
> +    gfn_t gfn = vcpu_altp2m(v).veinfo_gfn;
>       ve_info_t *veinfo;
>
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>           return 0;
>
> -    veinfo = hvm_map_guest_frame_rw(gfn, 0, &writable);
> +    veinfo = hvm_map_guest_frame_rw(gfn_x(gfn), 0, &writable);
>       if ( !veinfo )
>           return 0;
>       if ( !writable || veinfo->semaphore != 0 )
> diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
> index 10605c8..930bdc2 100644
> --- a/xen/arch/x86/mm/altp2m.c
> +++ b/xen/arch/x86/mm/altp2m.c
> @@ -26,7 +26,7 @@ altp2m_vcpu_reset(struct vcpu *v)
>       struct altp2mvcpu *av = &vcpu_altp2m(v);
>
>       av->p2midx = INVALID_ALTP2M;
> -    av->veinfo_gfn = _gfn(INVALID_GFN);
> +    av->veinfo_gfn = INVALID_GFN;
>   }
>
>   void
> diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
> index d2716f9..1b1a15d 100644
> --- a/xen/arch/x86/mm/hap/guest_walk.c
> +++ b/xen/arch/x86/mm/hap/guest_walk.c
> @@ -70,14 +70,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
>           if ( top_page )
>               put_page(top_page);
>           p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
> -        return INVALID_GFN;
> +        return gfn_x(INVALID_GFN);
>       }
>       if ( p2m_is_shared(p2mt) )
>       {
>           pfec[0] = PFEC_page_shared;
>           if ( top_page )
>               put_page(top_page);
> -        return INVALID_GFN;
> +        return gfn_x(INVALID_GFN);
>       }
>       if ( !top_page )
>       {
> @@ -110,12 +110,12 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
>               ASSERT(p2m_is_hostp2m(p2m));
>               pfec[0] = PFEC_page_paged;
>               p2m_mem_paging_populate(p2m->domain, gfn_x(gfn));
> -            return INVALID_GFN;
> +            return gfn_x(INVALID_GFN);
>           }
>           if ( p2m_is_shared(p2mt) )
>           {
>               pfec[0] = PFEC_page_shared;
> -            return INVALID_GFN;
> +            return gfn_x(INVALID_GFN);
>           }
>
>           if ( page_order )
> @@ -147,7 +147,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
>       if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
>           pfec[0] &= ~PFEC_insn_fetch;
>
> -    return INVALID_GFN;
> +    return gfn_x(INVALID_GFN);
>   }
>
>
> diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
> index 94cf832..02b27b1 100644
> --- a/xen/arch/x86/mm/hap/nested_ept.c
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> @@ -236,7 +236,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
>       ept_walk_t gw;
>       rwx_acc &= EPTE_RWX_MASK;
>
> -    *l1gfn = INVALID_GFN;
> +    *l1gfn = gfn_x(INVALID_GFN);
>
>       rc = nept_walk_tables(v, l2ga, &gw);
>       switch ( rc )
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index f384589..149f529 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1003,7 +1003,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
>           unsigned int idx = (mrp->idx + i++) % ARRAY_SIZE(mrp->list);
>           unsigned long gfn = mrp->list[idx];
>
> -        if ( gfn != INVALID_GFN )
> +        if ( gfn != gfn_x(INVALID_GFN) )
>           {
>               if ( gfn & POD_LAST_SUPERPAGE )
>               {
> @@ -1020,7 +1020,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
>               else
>                   p2m_pod_zero_check(p2m, &gfn, 1);
>
> -            mrp->list[idx] = INVALID_GFN;
> +            mrp->list[idx] = gfn_x(INVALID_GFN);
>           }
>
>       } while ( (p2m->pod.count == 0) && (i < ARRAY_SIZE(mrp->list)) );
> @@ -1031,7 +1031,7 @@ static void pod_eager_record(struct p2m_domain *p2m,
>   {
>       struct pod_mrp_list *mrp = &p2m->pod.mrp;
>
> -    ASSERT(gfn != INVALID_GFN);
> +    ASSERT(gfn != gfn_x(INVALID_GFN));
>
>       mrp->list[mrp->idx++] =
>           gfn | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index b93c8a2..ff0cce8 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -76,7 +76,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
>       p2m->np2m_base = P2M_BASE_EADDR;
>
>       for ( i = 0; i < ARRAY_SIZE(p2m->pod.mrp.list); ++i )
> -        p2m->pod.mrp.list[i] = INVALID_GFN;
> +        p2m->pod.mrp.list[i] = gfn_x(INVALID_GFN);
>
>       if ( hap_enabled(d) && cpu_has_vmx )
>           ret = ept_p2m_init(p2m);
> @@ -1863,7 +1863,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>       }
>
>       /* If request to set default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>       {
>           p2m->default_access = a;
>           return 0;
> @@ -1932,7 +1932,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
>       };
>
>       /* If request to get default access. */
> -    if ( gfn_x(gfn) == INVALID_GFN )
> +    if ( gfn_eq(gfn, INVALID_GFN) )
>       {
>           *access = memaccess[p2m->default_access];
>           return 0;
> @@ -2113,8 +2113,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
>           mode = paging_get_nestedmode(v);
>           l2_gfn = mode->gva_to_gfn(v, p2m, va, pfec);
>
> -        if ( l2_gfn == INVALID_GFN )
> -            return INVALID_GFN;
> +        if ( l2_gfn == gfn_x(INVALID_GFN) )
> +            return gfn_x(INVALID_GFN);
>
>           /* translate l2 guest gfn into l1 guest gfn */
>           rv = nestedhap_walk_L1_p2m(v, l2_gfn, &l1_gfn, &l1_page_order, &l1_p2ma,
> @@ -2123,7 +2123,7 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
>                                      !!(*pfec & PFEC_insn_fetch));
>
>           if ( rv != NESTEDHVM_PAGEFAULT_DONE )
> -            return INVALID_GFN;
> +            return gfn_x(INVALID_GFN);
>
>           /*
>            * Sanity check that l1_gfn can be used properly as a 4K mapping, even
> @@ -2415,7 +2415,7 @@ static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>       struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>       struct ept_data *ept;
>
> -    p2m->min_remapped_gfn = INVALID_GFN;
> +    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
>       p2m->max_remapped_gfn = 0;
>       ept = &p2m->ept;
>       ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> @@ -2551,7 +2551,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>
>       mfn = ap2m->get_entry(ap2m, gfn_x(old_gfn), &t, &a, 0, NULL, NULL);
>
> -    if ( gfn_x(new_gfn) == INVALID_GFN )
> +    if ( gfn_eq(new_gfn, INVALID_GFN) )
>       {
>           if ( mfn_valid(mfn) )
>               p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
> @@ -2613,7 +2613,7 @@ static void p2m_reset_altp2m(struct p2m_domain *p2m)
>       /* Uninit and reinit ept to force TLB shootdown */
>       ept_p2m_uninit(p2m);
>       ept_p2m_init(p2m);
> -    p2m->min_remapped_gfn = INVALID_GFN;
> +    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
>       p2m->max_remapped_gfn = 0;
>   }
>
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 1c0b6cd..61ccddf 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -1707,7 +1707,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
>
>       /* Translate the VA to a GFN. */
>       gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>       {
>           if ( is_hvm_vcpu(v) )
>               hvm_inject_page_fault(pfec, vaddr);
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index f892e2f..e54c8b7 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -3660,7 +3660,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
>            */
>           if ( is_hvm_vcpu(v) && !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
>               pfec[0] &= ~PFEC_insn_fetch;
> -        return INVALID_GFN;
> +        return gfn_x(INVALID_GFN);
>       }
>       gfn = guest_walk_to_gfn(&gw);
>
> diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
> index c424ad6..824796f 100644
> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -796,7 +796,7 @@ static inline unsigned long vtlb_lookup(struct vcpu *v,
>                                           unsigned long va, uint32_t pfec)
>   {
>       unsigned long page_number = va >> PAGE_SHIFT;
> -    unsigned long frame_number = INVALID_GFN;
> +    unsigned long frame_number = gfn_x(INVALID_GFN);
>       int i = vtlb_hash(page_number);
>
>       spin_lock(&v->arch.paging.vtlb_lock);
> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> index c758459..b8c0a48 100644
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
>       unsigned long old_root_mfn;
>       struct domain_iommu *hd = dom_iommu(d);
>
> -    if ( gfn == INVALID_GFN )
> +    if ( gfn == gfn_x(INVALID_GFN) )
>           return -EADDRNOTAVAIL;
>       ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
>
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index f010612..c322b9f 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -611,7 +611,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
>           if ( iommu_domid == -1 )
>               continue;
>
> -        if ( page_count != 1 || gfn == INVALID_GFN )
> +        if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
>               rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
>                                          0, flush_dev_iotlb);
>           else
> @@ -640,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
>
>   static int __must_check iommu_flush_iotlb_all(struct domain *d)
>   {
> -    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
> +    return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
>   }
>
>   /* clear one page's page table */
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index cd435d7..69cd6c5 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
>               unsigned long mfn = page_to_mfn(page);
>               unsigned long gfn = mfn_to_gmfn(d, mfn);
>
> -            if ( gfn != INVALID_GFN )
> +            if ( gfn != gfn_x(INVALID_GFN) )
>               {
>                   ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
>                   BUG_ON(SHARED_M2P(gfn));
> diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
> index a8d980c..79ed4ff 100644
> --- a/xen/include/asm-x86/guest_pt.h
> +++ b/xen/include/asm-x86/guest_pt.h
> @@ -32,7 +32,7 @@
>   #error GUEST_PAGING_LEVELS not defined
>   #endif
>
> -#define VALID_GFN(m) (m != INVALID_GFN)
> +#define VALID_GFN(m) (m != gfn_x(INVALID_GFN))
>
>   static inline int
>   valid_gfn(gfn_t m)
> @@ -251,7 +251,7 @@ static inline gfn_t
>   guest_walk_to_gfn(walk_t *gw)
>   {
>       if ( !(guest_l1e_get_flags(gw->l1e) & _PAGE_PRESENT) )
> -        return _gfn(INVALID_GFN);
> +        return INVALID_GFN;
>       return guest_l1e_get_gfn(gw->l1e);
>   }
>
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 4ab3574..194020e 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -324,7 +324,7 @@ struct p2m_domain {
>   #define NR_POD_MRP_ENTRIES 32
>
>   /* Encode ORDER_2M superpage in top bit of GFN */
> -#define POD_LAST_SUPERPAGE (INVALID_GFN & ~(INVALID_GFN >> 1))
> +#define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))
>
>               unsigned long list[NR_POD_MRP_ENTRIES];
>               unsigned int idx;
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 7f207ec..58bc0b8 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -84,7 +84,7 @@ static inline bool_t mfn_eq(mfn_t x, mfn_t y)
>
>   TYPE_SAFE(unsigned long, gfn);
>   #define PRI_gfn          "05lx"
> -#define INVALID_GFN      (~0UL)
> +#define INVALID_GFN      _gfn(~0UL)
>
>   #ifndef gfn_t
>   #define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
>

-- 
Julien Grall
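
The gfn side mirrors the mfn helpers exactly. An illustrative expansion of what the new INVALID_GFN definition combines with (a sketch, not the literal header contents), including the top-bit trick the PoD hunk relies on:

    #include <stdbool.h>

    /* Sketch: gfn_t follows the same typesafe pattern as mfn_t. */
    typedef struct { unsigned long gfn; } gfn_t;

    static inline gfn_t _gfn(unsigned long g) { return (gfn_t){ g }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }

    static inline bool gfn_eq(gfn_t x, gfn_t y)
    {
        return gfn_x(x) == gfn_x(y);
    }

    #define INVALID_GFN _gfn(~0UL)

    /*
     * PoD keeps raw unsigned longs in its MRP list, hence the gfn_x()
     * unboxing in those hunks.  The top-bit flag still works out:
     * ~0UL & ~(~0UL >> 1) leaves only the most significant bit set.
     */
    #define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))

Code that holds a gfn_t compares with gfn_eq(gfn, INVALID_GFN); code still on raw unsigned longs compares against gfn_x(INVALID_GFN), which is exactly the split visible in the hunks above.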


* Re: [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN
  2016-07-08 22:01     ` Elena Ufimtseva
@ 2016-07-08 19:20       ` Andrew Cooper
  2016-07-09  0:21         ` Elena Ufimtseva
  2016-07-08 19:39       ` Julien Grall
  1 sibling, 1 reply; 23+ messages in thread
From: Andrew Cooper @ 2016-07-08 19:20 UTC (permalink / raw)
  To: Elena Ufimtseva, Julien Grall
  Cc: Kevin Tian, sstabellini, Jan Beulich, George Dunlap, Liu Jinsong,
	Christoph Egger, Tim Deegan, xen-devel, Paul Durrant,
	Jun Nakajima

On 08/07/2016 23:01, Elena Ufimtseva wrote:
>
>>> @@ -838,7 +838,6 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
>>>
>>>      SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
>>>      BUG();
>>> -    return _mfn(INVALID_MFN);
> Can the compiler be unhappy about this?

This was my suggestion, from a previous round of review.

A while ago, I annotated BUG() with unreachable(), as execution will
not continue from a bugframe, but the shadow code is definitely older
than my change.

As such, compilers will have been dropping this return statement as part
of dead-code elimination anyway.

This option is better than just replacing one bit of dead code with a
different bit of dead code.

~Andrew
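
A hand-rolled approximation of the point (not Xen's actual bugframe machinery, whose BUG() embeds the frame via inline assembly): once the macro ends in an unreachable annotation, the compiler treats everything after it as dead, so the trailing return can simply be dropped.

    void do_bug_frame(void) __attribute__((noreturn)); /* hypothetical trap routine */

    /* Approximation of a BUG() the optimiser knows cannot return. */
    #define BUG()                           \
        do {                                \
            do_bug_frame();                 \
            __builtin_unreachable();        \
        } while ( 0 )

    unsigned long lookup_or_bug(int found, unsigned long val)
    {
        if ( found )
            return val;

        BUG();
        /*
         * No trailing return statement: control provably cannot reach
         * this point, so the compiler neither warns (-Wreturn-type)
         * nor emits dead code, which is why the hunk above can delete
         * the line.
         */
    }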


* Re: [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN
  2016-07-08 22:01     ` Elena Ufimtseva
  2016-07-08 19:20       ` Andrew Cooper
@ 2016-07-08 19:39       ` Julien Grall
  1 sibling, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-08 19:39 UTC (permalink / raw)
  To: Elena Ufimtseva
  Cc: Kevin Tian, sstabellini, Jan Beulich, George Dunlap, Liu Jinsong,
	Christoph Egger, Tim Deegan, xen-devel, Paul Durrant,
	Jun Nakajima, Andrew Cooper

Hi Elena,

On 08/07/2016 23:01, Elena Ufimtseva wrote:
> On Wed, Jul 06, 2016 at 02:04:17PM +0100, Julien Grall wrote:
>>> @@ -3968,7 +3967,7 @@ void shadow_audit_tables(struct vcpu *v)
>>>          }
>>>      }
>>>
>>> -    hash_vcpu_foreach(v, mask, callbacks, _mfn(INVALID_MFN));
>>> +    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN_T);
>
> What is INVALID_MFN_T?

This was the name of the unboxed type in a previous version of the
series. I forgot to rename this one, and my build scripts did not catch
it.
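
For the record, the call should presumably read:

    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN);

matching the other conversions in this patch.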

Thank you for spotting it!

Regards,

-- 
Julien Grall


* Re: [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN
  2016-07-06 13:04   ` Julien Grall
@ 2016-07-08 22:01     ` Elena Ufimtseva
  2016-07-08 19:20       ` Andrew Cooper
  2016-07-08 19:39       ` Julien Grall
  0 siblings, 2 replies; 23+ messages in thread
From: Elena Ufimtseva @ 2016-07-08 22:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, sstabellini, Jun Nakajima, George Dunlap,
	Liu Jinsong, Christoph Egger, Tim Deegan, xen-devel,
	Paul Durrant, Jan Beulich, Andrew Cooper

On Wed, Jul 06, 2016 at 02:04:17PM +0100, Julien Grall wrote:
> (CC Elena).
> 
> On 06/07/16 14:01, Julien Grall wrote:
> >Also take the opportunity to convert arch/x86/debug.c to the typesafe
> >mfn, and to use the proper printf format for MFN/GFN wherever the
> >surrounding code is modified.
> >
> >Signed-off-by: Julien Grall <julien.grall@arm.com>
> >Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> >
> >---
> >Cc: Christoph Egger <chegger@amazon.de>
> >Cc: Liu Jinsong <jinsong.liu@alibaba-inc.com>
> >Cc: Jan Beulich <jbeulich@suse.com>
> >Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> I forgot to update the CC list since GDSX maintainership was taken over
> by Elena. Sorry for that.

No problem!
> 
> >Cc: Paul Durrant <paul.durrant@citrix.com>
> >Cc: Jun Nakajima <jun.nakajima@intel.com>
> >Cc: Kevin Tian <kevin.tian@intel.com>
> >Cc: George Dunlap <george.dunlap@eu.citrix.com>
> >Cc: Tim Deegan <tim@xen.org>
> >
> >     Changes in v6:
> >         - Add Stefano's acked-by for ARM bits
> >         - Use PRI_mfn and PRI_gfn
> >         - Remove set of brackets when it is not necessary
> >         - Use mfn_add when possible
> >         - Add Andrew's reviewed-by
> >
> >     Changes in v5:
> >         - Patch added
> >---
> >  xen/arch/arm/p2m.c              |  4 +--
> >  xen/arch/x86/cpu/mcheck/mce.c   |  2 +-
> >  xen/arch/x86/debug.c            | 58 +++++++++++++++++++++--------------------
> >  xen/arch/x86/hvm/hvm.c          |  6 ++---
> >  xen/arch/x86/hvm/viridian.c     | 12 ++++-----
> >  xen/arch/x86/hvm/vmx/vmx.c      |  2 +-
> >  xen/arch/x86/mm/guest_walk.c    |  4 +--
> >  xen/arch/x86/mm/hap/hap.c       |  4 +--
> >  xen/arch/x86/mm/p2m-ept.c       |  6 ++---
> >  xen/arch/x86/mm/p2m-pod.c       | 18 ++++++-------
> >  xen/arch/x86/mm/p2m-pt.c        | 18 ++++++-------
> >  xen/arch/x86/mm/p2m.c           | 54 +++++++++++++++++++-------------------
> >  xen/arch/x86/mm/paging.c        | 12 ++++-----
> >  xen/arch/x86/mm/shadow/common.c | 43 +++++++++++++++---------------
> >  xen/arch/x86/mm/shadow/multi.c  | 36 ++++++++++++-------------
> >  xen/common/domain.c             |  6 ++---
> >  xen/common/grant_table.c        |  6 ++---
> >  xen/include/xen/mm.h            |  2 +-
> >  18 files changed, 147 insertions(+), 146 deletions(-)
> >
> >diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> >index 34563bb..d690602 100644
> >--- a/xen/arch/arm/p2m.c
> >+++ b/xen/arch/arm/p2m.c
> >@@ -1461,7 +1461,7 @@ int relinquish_p2m_mapping(struct domain *d)
> >      return apply_p2m_changes(d, RELINQUISH,
> >                                pfn_to_paddr(p2m->lowest_mapped_gfn),
> >                                pfn_to_paddr(p2m->max_mapped_gfn),
> >-                              pfn_to_paddr(INVALID_MFN),
> >+                              pfn_to_paddr(mfn_x(INVALID_MFN)),
> >                                MATTR_MEM, 0, p2m_invalid,
> >                                d->arch.p2m.default_access);
> >  }
> >@@ -1476,7 +1476,7 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> >      return apply_p2m_changes(d, CACHEFLUSH,
> >                               pfn_to_paddr(start_mfn),
> >                               pfn_to_paddr(end_mfn),
> >-                             pfn_to_paddr(INVALID_MFN),
> >+                             pfn_to_paddr(mfn_x(INVALID_MFN)),
> >                               MATTR_MEM, 0, p2m_invalid,
> >                               d->arch.p2m.default_access);
> >  }
> >diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
> >index edcbe48..2695b0c 100644
> >--- a/xen/arch/x86/cpu/mcheck/mce.c
> >+++ b/xen/arch/x86/cpu/mcheck/mce.c
> >@@ -1455,7 +1455,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
> >                  gfn = PFN_DOWN(gaddr);
> >                  mfn = mfn_x(get_gfn(d, gfn, &t));
> >
> >-                if ( mfn == INVALID_MFN )
> >+                if ( mfn == mfn_x(INVALID_MFN) )
> >                  {
> >                      put_gfn(d, gfn);
> >                      put_domain(d);
> >diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> >index 58cae22..9213ea7 100644
> >--- a/xen/arch/x86/debug.c
> >+++ b/xen/arch/x86/debug.c
> >@@ -43,11 +43,11 @@ typedef unsigned long dbgva_t;
> >  typedef unsigned char dbgbyte_t;
> >
> >  /* Returns: mfn for the given (hvm guest) vaddr */
> >-static unsigned long
> >+static mfn_t
> >  dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >                  unsigned long *gfn)
> >  {
> >-    unsigned long mfn;
> >+    mfn_t mfn;
> >      uint32_t pfec = PFEC_page_present;
> >      p2m_type_t gfntype;
> >
> >@@ -60,16 +60,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >          return INVALID_MFN;
> >      }
> >
> >-    mfn = mfn_x(get_gfn(dp, *gfn, &gfntype));
> >+    mfn = get_gfn(dp, *gfn, &gfntype);
> >      if ( p2m_is_readonly(gfntype) && toaddr )
> >      {
> >          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> >          mfn = INVALID_MFN;
> >      }
> >      else
> >-        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> >+        DBGP2("X: vaddr:%lx domid:%d mfn:%#"PRI_mfn"\n",
> >+              vaddr, dp->domain_id, mfn_x(mfn));
> >
> >-    if ( mfn == INVALID_MFN )
> >+    if ( mfn_eq(mfn, INVALID_MFN) )
> >      {
> >          put_gfn(dp, *gfn);
> >          *gfn = INVALID_GFN;
> >@@ -91,7 +92,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >   *       mode.
> >   * Returns: mfn for the given (pv guest) vaddr
> >   */
> >-static unsigned long
> >+static mfn_t
> >  dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
> >  {
> >      l4_pgentry_t l4e, *l4t;
> >@@ -99,31 +100,31 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
> >      l2_pgentry_t l2e, *l2t;
> >      l1_pgentry_t l1e, *l1t;
> >      unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
> >-    unsigned long mfn = cr3 >> PAGE_SHIFT;
> >+    mfn_t mfn = _mfn(cr3 >> PAGE_SHIFT);
> >
> >      DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id,
> >            cr3, pgd3val);
> >
> >      if ( pgd3val == 0 )
> >      {
> >-        l4t = map_domain_page(_mfn(mfn));
> >+        l4t = map_domain_page(mfn);
> >          l4e = l4t[l4_table_offset(vaddr)];
> >          unmap_domain_page(l4t);
> >-        mfn = l4e_get_pfn(l4e);
> >-        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t,
> >-              l4_table_offset(vaddr), l4e, mfn);
> >+        mfn = _mfn(l4e_get_pfn(l4e));
> >+        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%#"PRI_mfn"\n", l4t,
> >+              l4_table_offset(vaddr), l4e, mfn_x(mfn));
> >          if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
> >          {
> >              DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
> >              return INVALID_MFN;
> >          }
> >
> >-        l3t = map_domain_page(_mfn(mfn));
> >+        l3t = map_domain_page(mfn);
> >          l3e = l3t[l3_table_offset(vaddr)];
> >          unmap_domain_page(l3t);
> >-        mfn = l3e_get_pfn(l3e);
> >-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t,
> >-              l3_table_offset(vaddr), l3e, mfn);
> >+        mfn = _mfn(l3e_get_pfn(l3e));
> >+        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%#"PRI_mfn"\n", l3t,
> >+              l3_table_offset(vaddr), l3e, mfn_x(mfn));
> >          if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
> >               (l3e_get_flags(l3e) & _PAGE_PSE) )
> >          {
> >@@ -132,26 +133,26 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
> >          }
> >      }
> >
> >-    l2t = map_domain_page(_mfn(mfn));
> >+    l2t = map_domain_page(mfn);
> >      l2e = l2t[l2_table_offset(vaddr)];
> >      unmap_domain_page(l2t);
> >-    mfn = l2e_get_pfn(l2e);
> >-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
> >-          l2e, mfn);
> >+    mfn = _mfn(l2e_get_pfn(l2e));
> >+    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%#"PRI_mfn"\n",
> >+          l2t, l2_table_offset(vaddr), l2e, mfn_x(mfn));
> >      if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
> >           (l2e_get_flags(l2e) & _PAGE_PSE) )
> >      {
> >          DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
> >          return INVALID_MFN;
> >      }
> >-    l1t = map_domain_page(_mfn(mfn));
> >+    l1t = map_domain_page(mfn);
> >      l1e = l1t[l1_table_offset(vaddr)];
> >      unmap_domain_page(l1t);
> >-    mfn = l1e_get_pfn(l1e);
> >-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
> >-          l1e, mfn);
> >+    mfn = _mfn(l1e_get_pfn(l1e));
> >+    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%#"PRI_mfn"\n", l1t, l1_table_offset(vaddr),
> >+          l1e, mfn_x(mfn));
> >
> >-    return mfn_valid(mfn) ? mfn : INVALID_MFN;
> >+    return mfn_valid(mfn_x(mfn)) ? mfn : INVALID_MFN;
> >  }
> >
> >  /* Returns: number of bytes remaining to be copied */
> >@@ -163,23 +164,24 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
> >      {
> >          char *va;
> >          unsigned long addr = (unsigned long)gaddr;
> >-        unsigned long mfn, gfn = INVALID_GFN, pagecnt;
> >+        mfn_t mfn;
> >+        unsigned long gfn = INVALID_GFN, pagecnt;
> >
> >          pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
> >
> >          mfn = (has_hvm_container_domain(dp)
> >                 ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
> >                 : dbg_pv_va2mfn(addr, dp, pgd3));
> >-        if ( mfn == INVALID_MFN )
> >+        if ( mfn_eq(mfn, INVALID_MFN) )
> >              break;
> >
> >-        va = map_domain_page(_mfn(mfn));
> >+        va = map_domain_page(mfn);
> >          va = va + (addr & (PAGE_SIZE-1));
> >
> >          if ( toaddr )
> >          {
> >              copy_from_user(va, buf, pagecnt);    /* va = buf */
> >-            paging_mark_dirty(dp, mfn);
> >+            paging_mark_dirty(dp, mfn_x(mfn));
> >          }
> >          else
> >          {
> >diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >index c89ab6e..f3faf2e 100644
> >--- a/xen/arch/x86/hvm/hvm.c
> >+++ b/xen/arch/x86/hvm/hvm.c
> >@@ -1796,7 +1796,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
> >          p2m = hostp2m;
> >
> >      /* Check access permissions first, then handle faults */
> >-    if ( mfn_x(mfn) != INVALID_MFN )
> >+    if ( !mfn_eq(mfn, INVALID_MFN) )
> >      {
> >          bool_t violation;
> >
> >@@ -5299,8 +5299,8 @@ static int do_altp2m_op(
> >              rc = -EINVAL;
> >
> >          if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> >-             (mfn_x(get_gfn_query_unlocked(curr->domain,
> >-                    a.u.enable_notify.gfn, &p2mt)) == INVALID_MFN) )
> >+             mfn_eq(get_gfn_query_unlocked(curr->domain,
> >+                    a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
> >              return -EINVAL;
> >
> >          vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
> >diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> >index 8253fd0..1734b7e 100644
> >--- a/xen/arch/x86/hvm/viridian.c
> >+++ b/xen/arch/x86/hvm/viridian.c
> >@@ -195,8 +195,8 @@ static void enable_hypercall_page(struct domain *d)
> >      {
> >          if ( page )
> >              put_page(page);
> >-        gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> >-                 page ? page_to_mfn(page) : INVALID_MFN);
> >+        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> >+                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> >          return;
> >      }
> >
> >@@ -268,8 +268,8 @@ static void initialize_apic_assist(struct vcpu *v)
> >      return;
> >
> >   fail:
> >-    gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> >-             page ? page_to_mfn(page) : INVALID_MFN);
> >+    gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", gmfn,
> >+             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> >  }
> >
> >  static void teardown_apic_assist(struct vcpu *v)
> >@@ -348,8 +348,8 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
> >      {
> >          if ( page )
> >              put_page(page);
> >-        gdprintk(XENLOG_WARNING, "Bad GMFN %lx (MFN %lx)\n", gmfn,
> >-                 page ? page_to_mfn(page) : INVALID_MFN);
> >+        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> >+                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> >          return;
> >      }
> >
> >diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> >index df19579..a061420 100644
> >--- a/xen/arch/x86/hvm/vmx/vmx.c
> >+++ b/xen/arch/x86/hvm/vmx/vmx.c
> >@@ -2025,7 +2025,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
> >
> >              mfn = get_gfn_query_unlocked(d, gfn_x(vcpu_altp2m(v).veinfo_gfn), &t);
> >
> >-            if ( mfn_x(mfn) != INVALID_MFN )
> >+            if ( !mfn_eq(mfn, INVALID_MFN) )
> >                  __vmwrite(VIRT_EXCEPTION_INFO, mfn_x(mfn) << PAGE_SHIFT);
> >              else
> >                  v->arch.hvm_vmx.secondary_exec_control &=
> >diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
> >index e850502..868e909 100644
> >--- a/xen/arch/x86/mm/guest_walk.c
> >+++ b/xen/arch/x86/mm/guest_walk.c
> >@@ -281,7 +281,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
> >          start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
> >                       ((va >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
> >          gw->l1e = guest_l1e_from_gfn(start, flags);
> >-        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
> >+        gw->l2mfn = gw->l1mfn = INVALID_MFN;
> >          goto set_ad;
> >      }
> >
> >@@ -356,7 +356,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
> >          start = _gfn((gfn_x(start) & ~GUEST_L2_GFN_MASK) +
> >                       guest_l1_table_offset(va));
> >          gw->l1e = guest_l1e_from_gfn(start, flags);
> >-        gw->l1mfn = _mfn(INVALID_MFN);
> >+        gw->l1mfn = INVALID_MFN;
> >      }
> >      else
> >      {
> >diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> >index 9c2cd49..3218fa2 100644
> >--- a/xen/arch/x86/mm/hap/hap.c
> >+++ b/xen/arch/x86/mm/hap/hap.c
> >@@ -430,7 +430,7 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
> >   oom:
> >      HAP_ERROR("out of memory building monitor pagetable\n");
> >      domain_crash(d);
> >-    return _mfn(INVALID_MFN);
> >+    return INVALID_MFN;
> >  }
> >
> >  static void hap_destroy_monitor_table(struct vcpu* v, mfn_t mmfn)
> >@@ -509,7 +509,7 @@ int hap_enable(struct domain *d, u32 mode)
> >          }
> >
> >          for ( i = 0; i < MAX_EPTP; i++ )
> >-            d->arch.altp2m_eptp[i] = INVALID_MFN;
> >+            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
> >
> >          for ( i = 0; i < MAX_ALTP2M; i++ )
> >          {
> >diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> >index 7166c71..6d03736 100644
> >--- a/xen/arch/x86/mm/p2m-ept.c
> >+++ b/xen/arch/x86/mm/p2m-ept.c
> >@@ -50,7 +50,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
> >                                    int level)
> >  {
> >      int rc;
> >-    unsigned long oldmfn = INVALID_MFN;
> >+    unsigned long oldmfn = mfn_x(INVALID_MFN);
> >      bool_t check_foreign = (new.mfn != entryptr->mfn ||
> >                              new.sa_p2mt != entryptr->sa_p2mt);
> >
> >@@ -91,7 +91,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
> >
> >      write_atomic(&entryptr->epte, new.epte);
> >
> >-    if ( unlikely(oldmfn != INVALID_MFN) )
> >+    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
> >          put_page(mfn_to_page(oldmfn));
> >
> >      rc = 0;
> >@@ -887,7 +887,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
> >      int i;
> >      int ret = 0;
> >      bool_t recalc = 0;
> >-    mfn_t mfn = _mfn(INVALID_MFN);
> >+    mfn_t mfn = INVALID_MFN;
> >      struct ept_data *ept = &p2m->ept;
> >
> >      *t = p2m_mmio_dm;
> >diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> >index b7ab169..f384589 100644
> >--- a/xen/arch/x86/mm/p2m-pod.c
> >+++ b/xen/arch/x86/mm/p2m-pod.c
> >@@ -559,7 +559,7 @@ p2m_pod_decrease_reservation(struct domain *d,
> >      {
> >          /* All PoD: Mark the whole region invalid and tell caller
> >           * we're done. */
> >-        p2m_set_entry(p2m, gpfn, _mfn(INVALID_MFN), order, p2m_invalid,
> >+        p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
> >                        p2m->default_access);
> >          p2m->pod.entry_count-=(1<<order);
> >          BUG_ON(p2m->pod.entry_count < 0);
> >@@ -602,7 +602,7 @@ p2m_pod_decrease_reservation(struct domain *d,
> >          n = 1UL << cur_order;
> >          if ( t == p2m_populate_on_demand )
> >          {
> >-            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> >+            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
> >                            p2m_invalid, p2m->default_access);
> >              p2m->pod.entry_count -= n;
> >              BUG_ON(p2m->pod.entry_count < 0);
> >@@ -624,7 +624,7 @@ p2m_pod_decrease_reservation(struct domain *d,
> >
> >              page = mfn_to_page(mfn);
> >
> >-            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> >+            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
> >                            p2m_invalid, p2m->default_access);
> >              p2m_tlb_flush_sync(p2m);
> >              for ( j = 0; j < n; ++j )
> >@@ -671,7 +671,7 @@ void p2m_pod_dump_data(struct domain *d)
> >  static int
> >  p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
> >  {
> >-    mfn_t mfn, mfn0 = _mfn(INVALID_MFN);
> >+    mfn_t mfn, mfn0 = INVALID_MFN;
> >      p2m_type_t type, type0 = 0;
> >      unsigned long * map = NULL;
> >      int ret=0, reset = 0;
> >@@ -754,7 +754,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
> >      }
> >
> >      /* Try to remove the page, restoring old mapping if it fails. */
> >-    p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_2M,
> >+    p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_2M,
> >                    p2m_populate_on_demand, p2m->default_access);
> >      p2m_tlb_flush_sync(p2m);
> >
> >@@ -871,7 +871,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
> >          }
> >
> >          /* Try to remove the page, restoring old mapping if it fails. */
> >-        p2m_set_entry(p2m, gfns[i], _mfn(INVALID_MFN), PAGE_ORDER_4K,
> >+        p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
> >                        p2m_populate_on_demand, p2m->default_access);
> >
> >          /* See if the page was successfully unmapped.  (Allow one refcount
> >@@ -1073,7 +1073,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
> >           * NOTE: In a fine-grained p2m locking scenario this operation
> >           * may need to promote its locking from gfn->1g superpage
> >           */
> >-        p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
> >+        p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
> >                        p2m_populate_on_demand, p2m->default_access);
> >          return 0;
> >      }
> >@@ -1157,7 +1157,7 @@ remap_and_retry:
> >       * need promoting the gfn lock from gfn->2M superpage */
> >      gfn_aligned = (gfn>>order)<<order;
> >      for(i=0; i<(1<<order); i++)
> >-        p2m_set_entry(p2m, gfn_aligned + i, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> >+        p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
> >                        p2m_populate_on_demand, p2m->default_access);
> >      if ( tb_init_done )
> >      {
> >@@ -1215,7 +1215,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
> >      }
> >
> >      /* Now, actually do the two-way mapping */
> >-    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order,
> >+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
> >                         p2m_populate_on_demand, p2m->default_access);
> >      if ( rc == 0 )
> >      {
> >diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> >index 4980934..2b6e89e 100644
> >--- a/xen/arch/x86/mm/p2m-pt.c
> >+++ b/xen/arch/x86/mm/p2m-pt.c
> >@@ -511,7 +511,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
> >       * the intermediate one might be).
> >       */
> >      unsigned int flags, iommu_old_flags = 0;
> >-    unsigned long old_mfn = INVALID_MFN;
> >+    unsigned long old_mfn = mfn_x(INVALID_MFN);
> >
> >      ASSERT(sve != 0);
> >
> >@@ -764,7 +764,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
> >                       p2m->max_mapped_pfn )
> >                      break;
> >          }
> >-        return _mfn(INVALID_MFN);
> >+        return INVALID_MFN;
> >      }
> >
> >      mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
> >@@ -777,7 +777,7 @@ p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
> >          if ( (l4e_get_flags(*l4e) & _PAGE_PRESENT) == 0 )
> >          {
> >              unmap_domain_page(l4e);
> >-            return _mfn(INVALID_MFN);
> >+            return INVALID_MFN;
> >          }
> >          mfn = _mfn(l4e_get_pfn(*l4e));
> >          recalc = needs_recalc(l4, *l4e);
> >@@ -805,7 +805,7 @@ pod_retry_l3:
> >                      *t = p2m_populate_on_demand;
> >              }
> >              unmap_domain_page(l3e);
> >-            return _mfn(INVALID_MFN);
> >+            return INVALID_MFN;
> >          }
> >          if ( flags & _PAGE_PSE )
> >          {
> >@@ -817,7 +817,7 @@ pod_retry_l3:
> >              unmap_domain_page(l3e);
> >
> >              ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
> >-            return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
> >+            return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
> >          }
> >
> >          mfn = _mfn(l3e_get_pfn(*l3e));
> >@@ -846,7 +846,7 @@ pod_retry_l2:
> >          }
> >
> >          unmap_domain_page(l2e);
> >-        return _mfn(INVALID_MFN);
> >+        return INVALID_MFN;
> >      }
> >      if ( flags & _PAGE_PSE )
> >      {
> >@@ -856,7 +856,7 @@ pod_retry_l2:
> >          unmap_domain_page(l2e);
> >
> >          ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
> >-        return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
> >+        return (p2m_is_valid(*t)) ? mfn : INVALID_MFN;
> >      }
> >
> >      mfn = _mfn(l2e_get_pfn(*l2e));
> >@@ -885,14 +885,14 @@ pod_retry_l1:
> >          }
> >
> >          unmap_domain_page(l1e);
> >-        return _mfn(INVALID_MFN);
> >+        return INVALID_MFN;
> >      }
> >      mfn = _mfn(l1e_get_pfn(*l1e));
> >      *t = recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
> >      unmap_domain_page(l1e);
> >
> >      ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
> >-    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : _mfn(INVALID_MFN);
> >+    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : INVALID_MFN;
> >  }
> >
> >  static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
> >diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> >index 6258a5b..b93c8a2 100644
> >--- a/xen/arch/x86/mm/p2m.c
> >+++ b/xen/arch/x86/mm/p2m.c
> >@@ -388,7 +388,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
> >      if (unlikely((p2m_is_broken(*t))))
> >      {
> >          /* Return invalid_mfn to avoid caller's access */
> >-        mfn = _mfn(INVALID_MFN);
> >+        mfn = INVALID_MFN;
> >          if ( q & P2M_ALLOC )
> >              domain_crash(p2m->domain);
> >      }
> >@@ -493,8 +493,8 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
> >              rc = set_rc;
> >
> >          gfn += 1ul << order;
> >-        if ( mfn_x(mfn) != INVALID_MFN )
> >-            mfn = _mfn(mfn_x(mfn) + (1ul << order));
> >+        if ( !mfn_eq(mfn, INVALID_MFN) )
> >+            mfn = mfn_add(mfn, 1ul << order);
> >          todo -= 1ul << order;
> >      }
> >
> >@@ -580,7 +580,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
> >
> >      /* Initialise physmap tables for slot zero. Other code assumes this. */
> >      p2m->defer_nested_flush = 1;
> >-    rc = p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> >+    rc = p2m_set_entry(p2m, 0, INVALID_MFN, PAGE_ORDER_4K,
> >                         p2m_invalid, p2m->default_access);
> >      p2m->defer_nested_flush = 0;
> >      p2m_unlock(p2m);
> >@@ -670,7 +670,7 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
> >              ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
> >          }
> >      }
> >-    return p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid,
> >+    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
> >                           p2m->default_access);
> >  }
> >
> >@@ -840,7 +840,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
> >      {
> >          gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
> >                   gfn_x(gfn), mfn_x(mfn));
> >-        rc = p2m_set_entry(p2m, gfn_x(gfn), _mfn(INVALID_MFN), page_order,
> >+        rc = p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, page_order,
> >                             p2m_invalid, p2m->default_access);
> >          if ( rc == 0 )
> >          {
> >@@ -1107,7 +1107,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
> >      }
> >
> >      /* Do not use mfn_valid() here as it will usually fail for MMIO pages. */
> >-    if ( (INVALID_MFN == mfn_x(actual_mfn)) || (t != p2m_mmio_direct) )
> >+    if ( mfn_eq(actual_mfn, INVALID_MFN) || (t != p2m_mmio_direct) )
> >      {
> >          gdprintk(XENLOG_ERR,
> >                   "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t);
> >@@ -1117,7 +1117,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
> >          gdprintk(XENLOG_WARNING,
> >                   "no mapping between mfn %08lx and gfn %08lx\n",
> >                   mfn_x(mfn), gfn);
> >-    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), order, p2m_invalid,
> >+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid,
> >                         p2m->default_access);
> >
> >   out:
> >@@ -1146,7 +1146,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
> >      mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
> >      if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
> >      {
> >-        ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> >+        ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
> >                              p2m_invalid, p2m->default_access);
> >          gfn_unlock(p2m, gfn, 0);
> >      }
> >@@ -1316,7 +1316,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
> >          put_page(page);
> >
> >      /* Remove mapping from p2m table */
> >-    ret = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K,
> >+    ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
> >                          p2m_ram_paged, a);
> >
> >      /* Clear content before returning the page to Xen */
> >@@ -1844,7 +1844,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> >      if ( altp2m_idx )
> >      {
> >          if ( altp2m_idx >= MAX_ALTP2M ||
> >-             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
> >+             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
> >              return -EINVAL;
> >
> >          ap2m = d->arch.altp2m_p2m[altp2m_idx];
> >@@ -1942,7 +1942,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> >      mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
> >      gfn_unlock(p2m, gfn, 0);
> >
> >-    if ( mfn_x(mfn) == INVALID_MFN )
> >+    if ( mfn_eq(mfn, INVALID_MFN) )
> >          return -ESRCH;
> >
> >      if ( (unsigned) a >= ARRAY_SIZE(memaccess) )
> >@@ -2288,7 +2288,7 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
> >
> >      for ( i = 0; i < MAX_ALTP2M; i++ )
> >      {
> >-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
> >+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
> >              continue;
> >
> >          p2m = d->arch.altp2m_p2m[i];
> >@@ -2315,7 +2315,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
> >
> >      altp2m_list_lock(d);
> >
> >-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> >+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
> >      {
> >          if ( idx != vcpu_altp2m(v).p2midx )
> >          {
> >@@ -2359,14 +2359,14 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> >                                0, &page_order);
> >      __put_gfn(*ap2m, gfn_x(gfn));
> >
> >-    if ( mfn_x(mfn) != INVALID_MFN )
> >+    if ( !mfn_eq(mfn, INVALID_MFN) )
> >          return 0;
> >
> >      mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
> >                                P2M_ALLOC | P2M_UNSHARE, &page_order);
> >      __put_gfn(hp2m, gfn_x(gfn));
> >
> >-    if ( mfn_x(mfn) == INVALID_MFN )
> >+    if ( mfn_eq(mfn, INVALID_MFN) )
> >          return 0;
> >
> >      p2m_lock(*ap2m);
> >@@ -2404,7 +2404,7 @@ void p2m_flush_altp2m(struct domain *d)
> >          /* Uninit and reinit ept to force TLB shootdown */
> >          ept_p2m_uninit(d->arch.altp2m_p2m[i]);
> >          ept_p2m_init(d->arch.altp2m_p2m[i]);
> >-        d->arch.altp2m_eptp[i] = INVALID_MFN;
> >+        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
> >      }
> >
> >      altp2m_list_unlock(d);
> >@@ -2431,7 +2431,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
> >
> >      altp2m_list_lock(d);
> >
> >-    if ( d->arch.altp2m_eptp[idx] == INVALID_MFN )
> >+    if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
> >      {
> >          p2m_init_altp2m_helper(d, idx);
> >          rc = 0;
> >@@ -2450,7 +2450,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
> >
> >      for ( i = 0; i < MAX_ALTP2M; i++ )
> >      {
> >-        if ( d->arch.altp2m_eptp[i] != INVALID_MFN )
> >+        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> >              continue;
> >
> >          p2m_init_altp2m_helper(d, i);
> >@@ -2476,7 +2476,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
> >
> >      altp2m_list_lock(d);
> >
> >-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> >+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
> >      {
> >          p2m = d->arch.altp2m_p2m[idx];
> >
> >@@ -2486,7 +2486,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
> >              /* Uninit and reinit ept to force TLB shootdown */
> >              ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
> >              ept_p2m_init(d->arch.altp2m_p2m[idx]);
> >-            d->arch.altp2m_eptp[idx] = INVALID_MFN;
> >+            d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
> >              rc = 0;
> >          }
> >      }
> >@@ -2510,7 +2510,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
> >
> >      altp2m_list_lock(d);
> >
> >-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
> >+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
> >      {
> >          for_each_vcpu( d, v )
> >              if ( idx != vcpu_altp2m(v).p2midx )
> >@@ -2541,7 +2541,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
> >      unsigned int page_order;
> >      int rc = -EINVAL;
> >
> >-    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == INVALID_MFN )
> >+    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
> >          return rc;
> >
> >      hp2m = p2m_get_hostp2m(d);
> >@@ -2636,14 +2636,14 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
> >
> >      for ( i = 0; i < MAX_ALTP2M; i++ )
> >      {
> >-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
> >+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
> >              continue;
> >
> >          p2m = d->arch.altp2m_p2m[i];
> >          m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
> >
> >          /* Check for a dropped page that may impact this altp2m */
> >-        if ( mfn_x(mfn) == INVALID_MFN &&
> >+        if ( mfn_eq(mfn, INVALID_MFN) &&
> >               gfn_x(gfn) >= p2m->min_remapped_gfn &&
> >               gfn_x(gfn) <= p2m->max_remapped_gfn )
> >          {
> >@@ -2660,7 +2660,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
> >                  for ( i = 0; i < MAX_ALTP2M; i++ )
> >                  {
> >                      if ( i == last_reset_idx ||
> >-                         d->arch.altp2m_eptp[i] == INVALID_MFN )
> >+                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
> >                          continue;
> >
> >                      p2m = d->arch.altp2m_p2m[i];
> >@@ -2672,7 +2672,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
> >                  goto out;
> >              }
> >          }
> >-        else if ( mfn_x(m) != INVALID_MFN )
> >+        else if ( !mfn_eq(m, INVALID_MFN) )
> >              p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, p2mt, p2ma);
> >
> >          __put_gfn(p2m, gfn_x(gfn));
> >diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> >index 8219bb6..107fc8c 100644
> >--- a/xen/arch/x86/mm/paging.c
> >+++ b/xen/arch/x86/mm/paging.c
> >@@ -67,7 +67,7 @@ static mfn_t paging_new_log_dirty_page(struct domain *d)
> >      if ( unlikely(page == NULL) )
> >      {
> >          d->arch.paging.log_dirty.failed_allocs++;
> >-        return _mfn(INVALID_MFN);
> >+        return INVALID_MFN;
> >      }
> >
> >      d->arch.paging.log_dirty.allocs++;
> >@@ -95,7 +95,7 @@ static mfn_t paging_new_log_dirty_node(struct domain *d)
> >          int i;
> >          mfn_t *node = map_domain_page(mfn);
> >          for ( i = 0; i < LOGDIRTY_NODE_ENTRIES; i++ )
> >-            node[i] = _mfn(INVALID_MFN);
> >+            node[i] = INVALID_MFN;
> >          unmap_domain_page(node);
> >      }
> >      return mfn;
> >@@ -167,7 +167,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
> >
> >              unmap_domain_page(l2);
> >              paging_free_log_dirty_page(d, l3[i3]);
> >-            l3[i3] = _mfn(INVALID_MFN);
> >+            l3[i3] = INVALID_MFN;
> >
> >              if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
> >              {
> >@@ -182,7 +182,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
> >          if ( rc )
> >              break;
> >          paging_free_log_dirty_page(d, l4[i4]);
> >-        l4[i4] = _mfn(INVALID_MFN);
> >+        l4[i4] = INVALID_MFN;
> >
> >          if ( i4 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
> >          {
> >@@ -198,7 +198,7 @@ static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
> >      if ( !rc )
> >      {
> >          paging_free_log_dirty_page(d, d->arch.paging.log_dirty.top);
> >-        d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> >+        d->arch.paging.log_dirty.top = INVALID_MFN;
> >
> >          ASSERT(d->arch.paging.log_dirty.allocs == 0);
> >          d->arch.paging.log_dirty.failed_allocs = 0;
> >@@ -660,7 +660,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags)
> >      /* This must be initialized separately from the rest of the
> >       * log-dirty init code as that can be called more than once and we
> >       * don't want to leak any active log-dirty bitmaps */
> >-    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> >+    d->arch.paging.log_dirty.top = INVALID_MFN;
> >
> >      /*
> >       * Shadow pagetables are the default, but we will use
> >diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> >index 226e32d..1c0b6cd 100644
> >--- a/xen/arch/x86/mm/shadow/common.c
> >+++ b/xen/arch/x86/mm/shadow/common.c
> >@@ -88,10 +88,10 @@ void shadow_vcpu_init(struct vcpu *v)
> >
> >      for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
> >      {
> >-        v->arch.paging.shadow.oos[i] = _mfn(INVALID_MFN);
> >-        v->arch.paging.shadow.oos_snapshot[i] = _mfn(INVALID_MFN);
> >+        v->arch.paging.shadow.oos[i] = INVALID_MFN;
> >+        v->arch.paging.shadow.oos_snapshot[i] = INVALID_MFN;
> >          for ( j = 0; j < SHADOW_OOS_FIXUPS; j++ )
> >-            v->arch.paging.shadow.oos_fixup[i].smfn[j] = _mfn(INVALID_MFN);
> >+            v->arch.paging.shadow.oos_fixup[i].smfn[j] = INVALID_MFN;
> >      }
> >  #endif
> >
> >@@ -593,12 +593,12 @@ static inline int oos_fixup_flush_gmfn(struct vcpu *v, mfn_t gmfn,
> >      int i;
> >      for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
> >      {
> >-        if ( mfn_x(fixup->smfn[i]) != INVALID_MFN )
> >+        if ( !mfn_eq(fixup->smfn[i], INVALID_MFN) )
> >          {
> >              sh_remove_write_access_from_sl1p(d, gmfn,
> >                                               fixup->smfn[i],
> >                                               fixup->off[i]);
> >-            fixup->smfn[i] = _mfn(INVALID_MFN);
> >+            fixup->smfn[i] = INVALID_MFN;
> >          }
> >      }
> >
> >@@ -636,7 +636,7 @@ void oos_fixup_add(struct domain *d, mfn_t gmfn,
> >
> >              next = oos_fixup[idx].next;
> >
> >-            if ( mfn_x(oos_fixup[idx].smfn[next]) != INVALID_MFN )
> >+            if ( !mfn_eq(oos_fixup[idx].smfn[next], INVALID_MFN) )
> >              {
> >                  TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_EVICT);
> >
> >@@ -757,7 +757,7 @@ static void oos_hash_add(struct vcpu *v, mfn_t gmfn)
> >      struct oos_fixup fixup = { .next = 0 };
> >
> >      for (i = 0; i < SHADOW_OOS_FIXUPS; i++ )
> >-        fixup.smfn[i] = _mfn(INVALID_MFN);
> >+        fixup.smfn[i] = INVALID_MFN;
> >
> >      idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
> >      oidx = idx;
> >@@ -807,7 +807,7 @@ static void oos_hash_remove(struct domain *d, mfn_t gmfn)
> >              idx = (idx + 1) % SHADOW_OOS_PAGES;
> >          if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
> >          {
> >-            oos[idx] = _mfn(INVALID_MFN);
> >+            oos[idx] = INVALID_MFN;
> >              return;
> >          }
> >      }
> >@@ -838,7 +838,6 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
> >
> >      SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
> >      BUG();
> >-    return _mfn(INVALID_MFN);

Can the compiler be unhappy about this?

> >  }
> >
> >  /* Pull a single guest page back into sync */
> >@@ -862,7 +861,7 @@ void sh_resync(struct domain *d, mfn_t gmfn)
> >          if ( mfn_x(oos[idx]) == mfn_x(gmfn) )
> >          {
> >              _sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
> >-            oos[idx] = _mfn(INVALID_MFN);
> >+            oos[idx] = INVALID_MFN;
> >              return;
> >          }
> >      }
> >@@ -914,7 +913,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
> >          {
> >              /* Write-protect and sync contents */
> >              _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
> >-            oos[idx] = _mfn(INVALID_MFN);
> >+            oos[idx] = INVALID_MFN;
> >          }
> >
> >   resync_others:
> >@@ -948,7 +947,7 @@ void sh_resync_all(struct vcpu *v, int skip, int this, int others)
> >              {
> >                  /* Write-protect and sync contents */
> >                  _sh_resync(other, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
> >-                oos[idx] = _mfn(INVALID_MFN);
> >+                oos[idx] = INVALID_MFN;
> >              }
> >          }
> >      }
> >@@ -1784,7 +1783,7 @@ void *sh_emulate_map_dest(struct vcpu *v, unsigned long vaddr,
> >      if ( likely(((vaddr + bytes - 1) & PAGE_MASK) == (vaddr & PAGE_MASK)) )
> >      {
> >          /* Whole write fits on a single page. */
> >-        sh_ctxt->mfn[1] = _mfn(INVALID_MFN);
> >+        sh_ctxt->mfn[1] = INVALID_MFN;
> >          map = map_domain_page(sh_ctxt->mfn[0]) + (vaddr & ~PAGE_MASK);
> >      }
> >      else if ( !is_hvm_domain(d) )
> >@@ -2086,7 +2085,7 @@ mfn_t shadow_hash_lookup(struct domain *d, unsigned long n, unsigned int t)
> >      }
> >
> >      perfc_incr(shadow_hash_lookup_miss);
> >-    return _mfn(INVALID_MFN);
> >+    return INVALID_MFN;
> >  }
> >
> >  void shadow_hash_insert(struct domain *d, unsigned long n, unsigned int t,
> >@@ -2910,7 +2909,7 @@ void sh_reset_l3_up_pointers(struct vcpu *v)
> >      };
> >      static const unsigned int callback_mask = SHF_L3_64;
> >
> >-    hash_vcpu_foreach(v, callback_mask, callbacks, _mfn(INVALID_MFN));
> >+    hash_vcpu_foreach(v, callback_mask, callbacks, INVALID_MFN);
> >  }
> >
> >
> >@@ -2940,7 +2939,7 @@ static void sh_update_paging_modes(struct vcpu *v)
> >  #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB) */
> >
> >  #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
> >-    if ( mfn_x(v->arch.paging.shadow.oos_snapshot[0]) == INVALID_MFN )
> >+    if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
> >      {
> >          int i;
> >          for(i = 0; i < SHADOW_OOS_PAGES; i++)
> >@@ -3284,7 +3283,7 @@ void shadow_teardown(struct domain *d, int *preempted)
> >                  if ( mfn_valid(oos_snapshot[i]) )
> >                  {
> >                      shadow_free(d, oos_snapshot[i]);
> >-                    oos_snapshot[i] = _mfn(INVALID_MFN);
> >+                    oos_snapshot[i] = INVALID_MFN;
> >                  }
> >          }
> >  #endif /* OOS */
> >@@ -3449,7 +3448,7 @@ static int shadow_one_bit_disable(struct domain *d, u32 mode)
> >                      if ( mfn_valid(oos_snapshot[i]) )
> >                      {
> >                          shadow_free(d, oos_snapshot[i]);
> >-                        oos_snapshot[i] = _mfn(INVALID_MFN);
> >+                        oos_snapshot[i] = INVALID_MFN;
> >                      }
> >              }
> >  #endif /* OOS */
> >@@ -3744,7 +3743,7 @@ int shadow_track_dirty_vram(struct domain *d,
> >          memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
> >      else
> >      {
> >-        unsigned long map_mfn = INVALID_MFN;
> >+        unsigned long map_mfn = mfn_x(INVALID_MFN);
> >          void *map_sl1p = NULL;
> >
> >          /* Iterate over VRAM to track dirty bits. */
> >@@ -3754,7 +3753,7 @@ int shadow_track_dirty_vram(struct domain *d,
> >              int dirty = 0;
> >              paddr_t sl1ma = dirty_vram->sl1ma[i];
> >
> >-            if (mfn_x(mfn) == INVALID_MFN)
> >+            if ( mfn_eq(mfn, INVALID_MFN) )
> >              {
> >                  dirty = 1;
> >              }
> >@@ -3830,7 +3829,7 @@ int shadow_track_dirty_vram(struct domain *d,
> >              for ( i = begin_pfn; i < end_pfn; i++ )
> >              {
> >                  mfn_t mfn = get_gfn_query_unlocked(d, i, &t);
> >-                if ( mfn_x(mfn) != INVALID_MFN )
> >+                if ( !mfn_eq(mfn, INVALID_MFN) )
> >                      flush_tlb |= sh_remove_write_access(d, mfn, 1, 0);
> >              }
> >              dirty_vram->last_dirty = -1;
> >@@ -3968,7 +3967,7 @@ void shadow_audit_tables(struct vcpu *v)
> >          }
> >      }
> >
> >-    hash_vcpu_foreach(v, mask, callbacks, _mfn(INVALID_MFN));
> >+    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN_T);

What is INVALID_MFN_T?

> >  }
> >
> >  #endif /* Shadow audit */
> >diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> >index dfe59a2..f892e2f 100644
> >--- a/xen/arch/x86/mm/shadow/multi.c
> >+++ b/xen/arch/x86/mm/shadow/multi.c
> >@@ -177,7 +177,7 @@ sh_walk_guest_tables(struct vcpu *v, unsigned long va, walk_t *gw,
> >  {
> >      return guest_walk_tables(v, p2m_get_hostp2m(v->domain), va, gw, pfec,
> >  #if GUEST_PAGING_LEVELS == 3 /* PAE */
> >-                             _mfn(INVALID_MFN),
> >+                             INVALID_MFN,
> >                               v->arch.paging.shadow.gl3e
> >  #else /* 32 or 64 */
> >                               pagetable_get_mfn(v->arch.guest_table),
> >@@ -336,32 +336,32 @@ static void sh_audit_gw(struct vcpu *v, walk_t *gw)
> >      if ( mfn_valid(gw->l4mfn)
> >           && mfn_valid((smfn = get_shadow_status(d, gw->l4mfn,
> >                                                  SH_type_l4_shadow))) )
> >-        (void) sh_audit_l4_table(v, smfn, _mfn(INVALID_MFN));
> >+        (void) sh_audit_l4_table(v, smfn, INVALID_MFN);
> >      if ( mfn_valid(gw->l3mfn)
> >           && mfn_valid((smfn = get_shadow_status(d, gw->l3mfn,
> >                                                  SH_type_l3_shadow))) )
> >-        (void) sh_audit_l3_table(v, smfn, _mfn(INVALID_MFN));
> >+        (void) sh_audit_l3_table(v, smfn, INVALID_MFN);
> >  #endif /* PAE or 64... */
> >      if ( mfn_valid(gw->l2mfn) )
> >      {
> >          if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
> >                                                   SH_type_l2_shadow))) )
> >-            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
> >+            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
> >  #if GUEST_PAGING_LEVELS == 3
> >          if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
> >                                                   SH_type_l2h_shadow))) )
> >-            (void) sh_audit_l2_table(v, smfn, _mfn(INVALID_MFN));
> >+            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
> >  #endif
> >      }
> >      if ( mfn_valid(gw->l1mfn)
> >           && mfn_valid((smfn = get_shadow_status(d, gw->l1mfn,
> >                                                  SH_type_l1_shadow))) )
> >-        (void) sh_audit_l1_table(v, smfn, _mfn(INVALID_MFN));
> >+        (void) sh_audit_l1_table(v, smfn, INVALID_MFN);
> >      else if ( (guest_l2e_get_flags(gw->l2e) & _PAGE_PRESENT)
> >                && (guest_l2e_get_flags(gw->l2e) & _PAGE_PSE)
> >                && mfn_valid(
> >                (smfn = get_fl1_shadow_status(d, guest_l2e_get_gfn(gw->l2e)))) )
> >-        (void) sh_audit_fl1_table(v, smfn, _mfn(INVALID_MFN));
> >+        (void) sh_audit_fl1_table(v, smfn, INVALID_MFN);
> >  }
> >
> >  #else
> >@@ -1752,7 +1752,7 @@ static shadow_l2e_t * shadow_get_and_create_l2e(struct vcpu *v,
> >  {
> >  #if GUEST_PAGING_LEVELS >= 4 /* 64bit... */
> >      struct domain *d = v->domain;
> >-    mfn_t sl3mfn = _mfn(INVALID_MFN);
> >+    mfn_t sl3mfn = INVALID_MFN;
> >      shadow_l3e_t *sl3e;
> >      if ( !mfn_valid(gw->l2mfn) ) return NULL; /* No guest page. */
> >      /* Get the l3e */
> >@@ -2158,7 +2158,7 @@ static int validate_gl4e(struct vcpu *v, void *new_ge, mfn_t sl4mfn, void *se)
> >      shadow_l4e_t new_sl4e;
> >      guest_l4e_t new_gl4e = *(guest_l4e_t *)new_ge;
> >      shadow_l4e_t *sl4p = se;
> >-    mfn_t sl3mfn = _mfn(INVALID_MFN);
> >+    mfn_t sl3mfn = INVALID_MFN;
> >      struct domain *d = v->domain;
> >      p2m_type_t p2mt;
> >      int result = 0;
> >@@ -2217,7 +2217,7 @@ static int validate_gl3e(struct vcpu *v, void *new_ge, mfn_t sl3mfn, void *se)
> >      shadow_l3e_t new_sl3e;
> >      guest_l3e_t new_gl3e = *(guest_l3e_t *)new_ge;
> >      shadow_l3e_t *sl3p = se;
> >-    mfn_t sl2mfn = _mfn(INVALID_MFN);
> >+    mfn_t sl2mfn = INVALID_MFN;
> >      p2m_type_t p2mt;
> >      int result = 0;
> >
> >@@ -2250,7 +2250,7 @@ static int validate_gl2e(struct vcpu *v, void *new_ge, mfn_t sl2mfn, void *se)
> >      shadow_l2e_t new_sl2e;
> >      guest_l2e_t new_gl2e = *(guest_l2e_t *)new_ge;
> >      shadow_l2e_t *sl2p = se;
> >-    mfn_t sl1mfn = _mfn(INVALID_MFN);
> >+    mfn_t sl1mfn = INVALID_MFN;
> >      p2m_type_t p2mt;
> >      int result = 0;
> >
> >@@ -2608,7 +2608,7 @@ static inline void check_for_early_unshadow(struct vcpu *v, mfn_t gmfn)
> >  static inline void reset_early_unshadow(struct vcpu *v)
> >  {
> >  #if SHADOW_OPTIMIZATIONS & SHOPT_EARLY_UNSHADOW
> >-    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = INVALID_MFN;
> >+    v->arch.paging.shadow.last_emulated_mfn_for_unshadow = mfn_x(INVALID_MFN);
> >  #endif
> >  }
> >
> >@@ -4105,10 +4105,10 @@ sh_update_cr3(struct vcpu *v, int do_locking)
> >                                             ? SH_type_l2h_shadow
> >                                             : SH_type_l2_shadow);
> >                  else
> >-                    sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
> >+                    sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
> >              }
> >              else
> >-                sh_set_toplevel_shadow(v, i, _mfn(INVALID_MFN), 0);
> >+                sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
> >          }
> >      }
> >  #elif GUEST_PAGING_LEVELS == 4
> >@@ -4531,7 +4531,7 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
> >
> >          if ( fast_path ) {
> >              if ( pagetable_is_null(v->arch.shadow_table[i]) )
> >-                smfn = _mfn(INVALID_MFN);
> >+                smfn = INVALID_MFN;
> >              else
> >                  smfn = _mfn(pagetable_get_pfn(v->arch.shadow_table[i]));
> >          }
> >@@ -4540,8 +4540,8 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
> >              /* retrieving the l2s */
> >              gmfn = get_gfn_query_unlocked(d, gfn_x(guest_l3e_get_gfn(gl3e[i])),
> >                                            &p2mt);
> >-            smfn = unlikely(mfn_x(gmfn) == INVALID_MFN)
> >-                   ? _mfn(INVALID_MFN)
> >+            smfn = unlikely(mfn_eq(gmfn, INVALID_MFN))
> >+                   ? INVALID_MFN
> >                     : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
> >          }
> >
> >@@ -4846,7 +4846,7 @@ int sh_audit_fl1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
> >  {
> >      guest_l1e_t *gl1e, e;
> >      shadow_l1e_t *sl1e;
> >-    mfn_t gl1mfn = _mfn(INVALID_MFN);
> >+    mfn_t gl1mfn = INVALID_MFN;
> >      int f;
> >      int done = 0;
> >
> >diff --git a/xen/common/domain.c b/xen/common/domain.c
> >index 45273d4..42c07ee 100644
> >--- a/xen/common/domain.c
> >+++ b/xen/common/domain.c
> >@@ -117,7 +117,7 @@ static void vcpu_info_reset(struct vcpu *v)
> >      v->vcpu_info = ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
> >                      ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
> >                      : &dummy_vcpu_info);
> >-    v->vcpu_info_mfn = INVALID_MFN;
> >+    v->vcpu_info_mfn = mfn_x(INVALID_MFN);
> >  }
> >
> >  struct vcpu *alloc_vcpu(
> >@@ -1141,7 +1141,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
> >      if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
> >          return -EINVAL;
> >
> >-    if ( v->vcpu_info_mfn != INVALID_MFN )
> >+    if ( v->vcpu_info_mfn != mfn_x(INVALID_MFN) )
> >          return -EINVAL;
> >
> >      /* Run this command on yourself or on other offline VCPUS. */
> >@@ -1205,7 +1205,7 @@ void unmap_vcpu_info(struct vcpu *v)
> >  {
> >      unsigned long mfn;
> >
> >-    if ( v->vcpu_info_mfn == INVALID_MFN )
> >+    if ( v->vcpu_info_mfn == mfn_x(INVALID_MFN) )
> >          return;
> >
> >      mfn = v->vcpu_info_mfn;
> >diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> >index 3f15543..ecace07 100644
> >--- a/xen/common/grant_table.c
> >+++ b/xen/common/grant_table.c
> >@@ -244,7 +244,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
> >                                (readonly) ? P2M_ALLOC : P2M_UNSHARE);
> >      if ( !(*page) )
> >      {
> >-        *frame = INVALID_MFN;
> >+        *frame = mfn_x(INVALID_MFN);
> >          if ( p2m_is_shared(p2mt) )
> >              return GNTST_eagain;
> >          if ( p2m_is_paging(p2mt) )
> >@@ -260,7 +260,7 @@ static int __get_paged_frame(unsigned long gfn, unsigned long *frame, struct pag
> >      *page = mfn_valid(*frame) ? mfn_to_page(*frame) : NULL;
> >      if ( (!(*page)) || (!get_page(*page, rd)) )
> >      {
> >-        *frame = INVALID_MFN;
> >+        *frame = mfn_x(INVALID_MFN);
> >          *page = NULL;
> >          rc = GNTST_bad_page;
> >      }
> >@@ -1785,7 +1785,7 @@ gnttab_transfer(
> >              p2m_type_t __p2mt;
> >              mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &__p2mt));
> >              if ( p2m_is_shared(__p2mt) || !p2m_is_valid(__p2mt) )
> >-                mfn = INVALID_MFN;
> >+                mfn = mfn_x(INVALID_MFN);
> >          }
> >  #else
> >          mfn = mfn_x(gfn_to_mfn(d, _gfn(gop.mfn)));
> >diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> >index afbb1a1..7f207ec 100644
> >--- a/xen/include/xen/mm.h
> >+++ b/xen/include/xen/mm.h
> >@@ -55,7 +55,7 @@
> >
> >  TYPE_SAFE(unsigned long, mfn);
> >  #define PRI_mfn          "05lx"
> >-#define INVALID_MFN      (~0UL)
> >+#define INVALID_MFN      _mfn(~0UL)
> >
> >  #ifndef mfn_t
> >  #define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
> >
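For anyone following along, a simplified sketch of what the TYPE_SAFE
boxing above provides (simplified; the real macro in
xen/include/xen/mm.h also has a plain-integer variant for release
builds):

    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

    #define INVALID_MFN _mfn(~0UL)

    /* Passing a raw unsigned long (or a gfn_t) where an mfn_t is
     * expected now fails to compile; comparisons must go through
     * mfn_eq() or an explicit mfn_x() unboxing. */
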
> 
> -- 
> Julien Grall


* Re: [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN
  2016-07-06 13:05   ` Julien Grall
@ 2016-07-08 22:05     ` Elena Ufimtseva
  0 siblings, 0 replies; 23+ messages in thread
From: Elena Ufimtseva @ 2016-07-08 22:05 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, sstabellini, Feng Wu, Suravee Suthikulpanit,
	Jun Nakajima, Andrew Cooper, Tim Deegan, xen-devel,
	George Dunlap, Paul Durrant, Jan Beulich, Boris Ostrovsky

On Wed, Jul 06, 2016 at 02:05:24PM +0100, Julien Grall wrote:
> (CC Elena)
> 
Thanks Julien!

> On 06/07/16 14:01, Julien Grall wrote:
> >Also take the opportunity to convert arch/x86/debug.c to the typesafe gfn.
> >
> >Signed-off-by: Julien Grall <julien.grall@arm.com>
> >Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> >
> >---
> >Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> I forgot to update the CC list since GDSX maintainership was taken over by
> Elena. Sorry for that.
> 
> Regards,
> 
> >Cc: Jan Beulich <jbeulich@suse.com>
> >Cc: Paul Durrant <paul.durrant@citrix.com>
> >Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> >Cc: Jun Nakajima <jun.nakajima@intel.com>
> >Cc: Kevin Tian <kevin.tian@intel.com>
> >Cc: George Dunlap <george.dunlap@eu.citrix.com>
> >Cc: Tim Deegan <tim@xen.org>
> >Cc: Feng Wu <feng.wu@intel.com>
> >
> >     Changes in v6:
> >         - Add Stefano's acked-by for ARM bits
> >         - Remove set of brackets when it is not necessary
> >         - Add Andrew's reviewed-by
> >
> >     Changes in v5:
> >         - Patch added
> >---
> >  xen/arch/arm/p2m.c                      |  4 ++--
> >  xen/arch/x86/debug.c                    | 18 +++++++++---------
> >  xen/arch/x86/domain.c                   |  2 +-
> >  xen/arch/x86/hvm/emulate.c              |  7 ++++---
> >  xen/arch/x86/hvm/hvm.c                  |  6 +++---
> >  xen/arch/x86/hvm/ioreq.c                |  8 ++++----
> >  xen/arch/x86/hvm/svm/nestedsvm.c        |  2 +-
> >  xen/arch/x86/hvm/vmx/vmx.c              |  6 +++---
> >  xen/arch/x86/mm/altp2m.c                |  2 +-
> >  xen/arch/x86/mm/hap/guest_walk.c        | 10 +++++-----
> >  xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
> >  xen/arch/x86/mm/p2m-pod.c               |  6 +++---
> >  xen/arch/x86/mm/p2m.c                   | 18 +++++++++---------
> >  xen/arch/x86/mm/shadow/common.c         |  2 +-
> >  xen/arch/x86/mm/shadow/multi.c          |  2 +-
> >  xen/arch/x86/mm/shadow/private.h        |  2 +-
> >  xen/drivers/passthrough/amd/iommu_map.c |  2 +-
> >  xen/drivers/passthrough/vtd/iommu.c     |  4 ++--
> >  xen/drivers/passthrough/x86/iommu.c     |  2 +-
> >  xen/include/asm-x86/guest_pt.h          |  4 ++--
> >  xen/include/asm-x86/p2m.h               |  2 +-
> >  xen/include/xen/mm.h                    |  2 +-
> >  22 files changed, 57 insertions(+), 56 deletions(-)
> >
> >diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> >index d690602..c938dde 100644
> >--- a/xen/arch/arm/p2m.c
> >+++ b/xen/arch/arm/p2m.c
> >@@ -479,7 +479,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> >      }
> >
> >      /* If request to get default access. */
> >-    if ( gfn_x(gfn) == INVALID_GFN )
> >+    if ( gfn_eq(gfn, INVALID_GFN) )
> >      {
> >          *access = memaccess[p2m->default_access];
> >          return 0;
> >@@ -1879,7 +1879,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> >      p2m->mem_access_enabled = true;
> >
> >      /* If request to set default access. */
> >-    if ( gfn_x(gfn) == INVALID_GFN )
> >+    if ( gfn_eq(gfn, INVALID_GFN) )
> >      {
> >          p2m->default_access = a;
> >          return 0;
> >diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> >index 9213ea7..3030022 100644
> >--- a/xen/arch/x86/debug.c
> >+++ b/xen/arch/x86/debug.c
> >@@ -44,8 +44,7 @@ typedef unsigned char dbgbyte_t;
> >
> >  /* Returns: mfn for the given (hvm guest) vaddr */
> >  static mfn_t
> >-dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >-                unsigned long *gfn)
> >+dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr, gfn_t *gfn)
> >  {
> >      mfn_t mfn;
> >      uint32_t pfec = PFEC_page_present;
> >@@ -53,14 +52,14 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >
> >      DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
> >
> >-    *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
> >-    if ( *gfn == INVALID_GFN )
> >+    *gfn = _gfn(paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec));
> >+    if ( gfn_eq(*gfn, INVALID_GFN) )
> >      {
> >          DBGP2("kdb:bad gfn from gva_to_gfn\n");
> >          return INVALID_MFN;
> >      }
> >
> >-    mfn = get_gfn(dp, *gfn, &gfntype);
> >+    mfn = get_gfn(dp, gfn_x(*gfn), &gfntype);
> >      if ( p2m_is_readonly(gfntype) && toaddr )
> >      {
> >          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> >@@ -72,7 +71,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >
> >      if ( mfn_eq(mfn, INVALID_MFN) )
> >      {
> >-        put_gfn(dp, *gfn);
> >+        put_gfn(dp, gfn_x(*gfn));
> >          *gfn = INVALID_GFN;
> >      }
> >
> >@@ -165,7 +164,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
> >          char *va;
> >          unsigned long addr = (unsigned long)gaddr;
> >          mfn_t mfn;
> >-        unsigned long gfn = INVALID_GFN, pagecnt;
> >+        gfn_t gfn = INVALID_GFN;
> >+        unsigned long pagecnt;
> >
> >          pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
> >
> >@@ -189,8 +189,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
> >          }
> >
> >          unmap_domain_page(va);
> >-        if ( gfn != INVALID_GFN )
> >-            put_gfn(dp, gfn);
> >+        if ( !gfn_eq(gfn, INVALID_GFN) )
> >+            put_gfn(dp, gfn_x(gfn));
> >
> >          addr += pagecnt;
> >          buf += pagecnt;
> >diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> >index bb59247..c8c7e2d 100644
> >--- a/xen/arch/x86/domain.c
> >+++ b/xen/arch/x86/domain.c
> >@@ -783,7 +783,7 @@ int arch_domain_soft_reset(struct domain *d)
> >       * gfn == INVALID_GFN indicates that the shared_info page was never mapped
> >       * to the domain's address space and there is nothing to replace.
> >       */
> >-    if ( gfn == INVALID_GFN )
> >+    if ( gfn == gfn_x(INVALID_GFN) )
> >          goto exit_put_page;
> >
> >      if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
> >diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> >index 855af4d..c55ad7b 100644
> >--- a/xen/arch/x86/hvm/emulate.c
> >+++ b/xen/arch/x86/hvm/emulate.c
> >@@ -455,7 +455,7 @@ static int hvmemul_linear_to_phys(
> >              return rc;
> >          pfn = _paddr >> PAGE_SHIFT;
> >      }
> >-    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == INVALID_GFN )
> >+    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
> >      {
> >          if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
> >              return X86EMUL_RETRY;
> >@@ -472,7 +472,8 @@ static int hvmemul_linear_to_phys(
> >          npfn = paging_gva_to_gfn(curr, addr, &pfec);
> >
> >          /* Is it contiguous with the preceding PFNs? If not then we're done. */
> >-        if ( (npfn == INVALID_GFN) || (npfn != (pfn + (reverse ? -i : i))) )
> >+        if ( (npfn == gfn_x(INVALID_GFN)) ||
> >+             (npfn != (pfn + (reverse ? -i : i))) )
> >          {
> >              if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
> >                  return X86EMUL_RETRY;
> >@@ -480,7 +481,7 @@ static int hvmemul_linear_to_phys(
> >              if ( done == 0 )
> >              {
> >                  ASSERT(!reverse);
> >-                if ( npfn != INVALID_GFN )
> >+                if ( npfn != gfn_x(INVALID_GFN) )
> >                      return X86EMUL_UNHANDLEABLE;
> >                  hvm_inject_page_fault(pfec, addr & PAGE_MASK);
> >                  return X86EMUL_EXCEPTION;
> >diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >index f3faf2e..bb39d5f 100644
> >--- a/xen/arch/x86/hvm/hvm.c
> >+++ b/xen/arch/x86/hvm/hvm.c
> >@@ -3039,7 +3039,7 @@ static enum hvm_copy_result __hvm_copy(
> >          if ( flags & HVMCOPY_virt )
> >          {
> >              gfn = paging_gva_to_gfn(curr, addr, &pfec);
> >-            if ( gfn == INVALID_GFN )
> >+            if ( gfn == gfn_x(INVALID_GFN) )
> >              {
> >                  if ( pfec & PFEC_page_paged )
> >                      return HVMCOPY_gfn_paged_out;
> >@@ -3154,7 +3154,7 @@ static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
> >          count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
> >
> >          gfn = paging_gva_to_gfn(curr, addr, &pfec);
> >-        if ( gfn == INVALID_GFN )
> >+        if ( gfn == gfn_x(INVALID_GFN) )
> >          {
> >              if ( pfec & PFEC_page_paged )
> >                  return HVMCOPY_gfn_paged_out;
> >@@ -5298,7 +5298,7 @@ static int do_altp2m_op(
> >               a.u.enable_notify.vcpu_id != curr->vcpu_id )
> >              rc = -EINVAL;
> >
> >-        if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> >+        if ( !gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN) ||
> >               mfn_eq(get_gfn_query_unlocked(curr->domain,
> >                      a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
> >              return -EINVAL;
> >diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> >index 7148ac4..d2245e2 100644
> >--- a/xen/arch/x86/hvm/ioreq.c
> >+++ b/xen/arch/x86/hvm/ioreq.c
> >@@ -204,7 +204,7 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
> >  {
> >      unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
> >
> >-    if ( gmfn != INVALID_GFN )
> >+    if ( gmfn != gfn_x(INVALID_GFN) )
> >          set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> >  }
> >
> >@@ -420,7 +420,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
> >      if ( rc )
> >          return rc;
> >
> >-    if ( bufioreq_pfn != INVALID_GFN )
> >+    if ( bufioreq_pfn != gfn_x(INVALID_GFN) )
> >          rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
> >
> >      if ( rc )
> >@@ -434,8 +434,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
> >                                          bool_t handle_bufioreq)
> >  {
> >      struct domain *d = s->domain;
> >-    unsigned long ioreq_pfn = INVALID_GFN;
> >-    unsigned long bufioreq_pfn = INVALID_GFN;
> >+    unsigned long ioreq_pfn = gfn_x(INVALID_GFN);
> >+    unsigned long bufioreq_pfn = gfn_x(INVALID_GFN);
> >      int rc;
> >
> >      if ( is_default )
> >diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
> >index 9d2ac09..f9b38ab 100644
> >--- a/xen/arch/x86/hvm/svm/nestedsvm.c
> >+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
> >@@ -1200,7 +1200,7 @@ nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
> >      /* Walk the guest-supplied NPT table, just as if it were a pagetable */
> >      gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
> >
> >-    if ( gfn == INVALID_GFN )
> >+    if ( gfn == gfn_x(INVALID_GFN) )
> >          return NESTEDHVM_PAGEFAULT_INJECT;
> >
> >      *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
> >diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> >index a061420..088f454 100644
> >--- a/xen/arch/x86/hvm/vmx/vmx.c
> >+++ b/xen/arch/x86/hvm/vmx/vmx.c
> >@@ -2057,13 +2057,13 @@ static int vmx_vcpu_emulate_vmfunc(struct cpu_user_regs *regs)
> >  static bool_t vmx_vcpu_emulate_ve(struct vcpu *v)
> >  {
> >      bool_t rc = 0, writable;
> >-    unsigned long gfn = gfn_x(vcpu_altp2m(v).veinfo_gfn);
> >+    gfn_t gfn = vcpu_altp2m(v).veinfo_gfn;
> >      ve_info_t *veinfo;
> >
> >-    if ( gfn == INVALID_GFN )
> >+    if ( gfn_eq(gfn, INVALID_GFN) )
> >          return 0;
> >
> >-    veinfo = hvm_map_guest_frame_rw(gfn, 0, &writable);
> >+    veinfo = hvm_map_guest_frame_rw(gfn_x(gfn), 0, &writable);
> >      if ( !veinfo )
> >          return 0;
> >      if ( !writable || veinfo->semaphore != 0 )
> >diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
> >index 10605c8..930bdc2 100644
> >--- a/xen/arch/x86/mm/altp2m.c
> >+++ b/xen/arch/x86/mm/altp2m.c
> >@@ -26,7 +26,7 @@ altp2m_vcpu_reset(struct vcpu *v)
> >      struct altp2mvcpu *av = &vcpu_altp2m(v);
> >
> >      av->p2midx = INVALID_ALTP2M;
> >-    av->veinfo_gfn = _gfn(INVALID_GFN);
> >+    av->veinfo_gfn = INVALID_GFN;
> >  }
> >
> >  void
> >diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
> >index d2716f9..1b1a15d 100644
> >--- a/xen/arch/x86/mm/hap/guest_walk.c
> >+++ b/xen/arch/x86/mm/hap/guest_walk.c
> >@@ -70,14 +70,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
> >          if ( top_page )
> >              put_page(top_page);
> >          p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
> >-        return INVALID_GFN;
> >+        return gfn_x(INVALID_GFN);
> >      }
> >      if ( p2m_is_shared(p2mt) )
> >      {
> >          pfec[0] = PFEC_page_shared;
> >          if ( top_page )
> >              put_page(top_page);
> >-        return INVALID_GFN;
> >+        return gfn_x(INVALID_GFN);
> >      }
> >      if ( !top_page )
> >      {
> >@@ -110,12 +110,12 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
> >              ASSERT(p2m_is_hostp2m(p2m));
> >              pfec[0] = PFEC_page_paged;
> >              p2m_mem_paging_populate(p2m->domain, gfn_x(gfn));
> >-            return INVALID_GFN;
> >+            return gfn_x(INVALID_GFN);
> >          }
> >          if ( p2m_is_shared(p2mt) )
> >          {
> >              pfec[0] = PFEC_page_shared;
> >-            return INVALID_GFN;
> >+            return gfn_x(INVALID_GFN);
> >          }
> >
> >          if ( page_order )
> >@@ -147,7 +147,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
> >      if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
> >          pfec[0] &= ~PFEC_insn_fetch;
> >
> >-    return INVALID_GFN;
> >+    return gfn_x(INVALID_GFN);
> >  }
> >
> >
> >diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
> >index 94cf832..02b27b1 100644
> >--- a/xen/arch/x86/mm/hap/nested_ept.c
> >+++ b/xen/arch/x86/mm/hap/nested_ept.c
> >@@ -236,7 +236,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
> >      ept_walk_t gw;
> >      rwx_acc &= EPTE_RWX_MASK;
> >
> >-    *l1gfn = INVALID_GFN;
> >+    *l1gfn = gfn_x(INVALID_GFN);
> >
> >      rc = nept_walk_tables(v, l2ga, &gw);
> >      switch ( rc )
> >diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> >index f384589..149f529 100644
> >--- a/xen/arch/x86/mm/p2m-pod.c
> >+++ b/xen/arch/x86/mm/p2m-pod.c
> >@@ -1003,7 +1003,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
> >          unsigned int idx = (mrp->idx + i++) % ARRAY_SIZE(mrp->list);
> >          unsigned long gfn = mrp->list[idx];
> >
> >-        if ( gfn != INVALID_GFN )
> >+        if ( gfn != gfn_x(INVALID_GFN) )
> >          {
> >              if ( gfn & POD_LAST_SUPERPAGE )
> >              {
> >@@ -1020,7 +1020,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
> >              else
> >                  p2m_pod_zero_check(p2m, &gfn, 1);
> >
> >-            mrp->list[idx] = INVALID_GFN;
> >+            mrp->list[idx] = gfn_x(INVALID_GFN);
> >          }
> >
> >      } while ( (p2m->pod.count == 0) && (i < ARRAY_SIZE(mrp->list)) );
> >@@ -1031,7 +1031,7 @@ static void pod_eager_record(struct p2m_domain *p2m,
> >  {
> >      struct pod_mrp_list *mrp = &p2m->pod.mrp;
> >
> >-    ASSERT(gfn != INVALID_GFN);
> >+    ASSERT(gfn != gfn_x(INVALID_GFN));
> >
> >      mrp->list[mrp->idx++] =
> >          gfn | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
> >diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> >index b93c8a2..ff0cce8 100644
> >--- a/xen/arch/x86/mm/p2m.c
> >+++ b/xen/arch/x86/mm/p2m.c
> >@@ -76,7 +76,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
> >      p2m->np2m_base = P2M_BASE_EADDR;
> >
> >      for ( i = 0; i < ARRAY_SIZE(p2m->pod.mrp.list); ++i )
> >-        p2m->pod.mrp.list[i] = INVALID_GFN;
> >+        p2m->pod.mrp.list[i] = gfn_x(INVALID_GFN);
> >
> >      if ( hap_enabled(d) && cpu_has_vmx )
> >          ret = ept_p2m_init(p2m);
> >@@ -1863,7 +1863,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> >      }
> >
> >      /* If request to set default access. */
> >-    if ( gfn_x(gfn) == INVALID_GFN )
> >+    if ( gfn_eq(gfn, INVALID_GFN) )
> >      {
> >          p2m->default_access = a;
> >          return 0;
> >@@ -1932,7 +1932,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> >      };
> >
> >      /* If request to get default access. */
> >-    if ( gfn_x(gfn) == INVALID_GFN )
> >+    if ( gfn_eq(gfn, INVALID_GFN) )
> >      {
> >          *access = memaccess[p2m->default_access];
> >          return 0;
> >@@ -2113,8 +2113,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
> >          mode = paging_get_nestedmode(v);
> >          l2_gfn = mode->gva_to_gfn(v, p2m, va, pfec);
> >
> >-        if ( l2_gfn == INVALID_GFN )
> >-            return INVALID_GFN;
> >+        if ( l2_gfn == gfn_x(INVALID_GFN) )
> >+            return gfn_x(INVALID_GFN);
> >
> >          /* translate l2 guest gfn into l1 guest gfn */
> >          rv = nestedhap_walk_L1_p2m(v, l2_gfn, &l1_gfn, &l1_page_order, &l1_p2ma,
> >@@ -2123,7 +2123,7 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
> >                                     !!(*pfec & PFEC_insn_fetch));
> >
> >          if ( rv != NESTEDHVM_PAGEFAULT_DONE )
> >-            return INVALID_GFN;
> >+            return gfn_x(INVALID_GFN);
> >
> >          /*
> >           * Sanity check that l1_gfn can be used properly as a 4K mapping, even
> >@@ -2415,7 +2415,7 @@ static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
> >      struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
> >      struct ept_data *ept;
> >
> >-    p2m->min_remapped_gfn = INVALID_GFN;
> >+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
> >      p2m->max_remapped_gfn = 0;
> >      ept = &p2m->ept;
> >      ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> >@@ -2551,7 +2551,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
> >
> >      mfn = ap2m->get_entry(ap2m, gfn_x(old_gfn), &t, &a, 0, NULL, NULL);
> >
> >-    if ( gfn_x(new_gfn) == INVALID_GFN )
> >+    if ( gfn_eq(new_gfn, INVALID_GFN) )
> >      {
> >          if ( mfn_valid(mfn) )
> >              p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
> >@@ -2613,7 +2613,7 @@ static void p2m_reset_altp2m(struct p2m_domain *p2m)
> >      /* Uninit and reinit ept to force TLB shootdown */
> >      ept_p2m_uninit(p2m);
> >      ept_p2m_init(p2m);
> >-    p2m->min_remapped_gfn = INVALID_GFN;
> >+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
> >      p2m->max_remapped_gfn = 0;
> >  }
> >
> >diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> >index 1c0b6cd..61ccddf 100644
> >--- a/xen/arch/x86/mm/shadow/common.c
> >+++ b/xen/arch/x86/mm/shadow/common.c
> >@@ -1707,7 +1707,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
> >
> >      /* Translate the VA to a GFN. */
> >      gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
> >-    if ( gfn == INVALID_GFN )
> >+    if ( gfn == gfn_x(INVALID_GFN) )
> >      {
> >          if ( is_hvm_vcpu(v) )
> >              hvm_inject_page_fault(pfec, vaddr);
> >diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> >index f892e2f..e54c8b7 100644
> >--- a/xen/arch/x86/mm/shadow/multi.c
> >+++ b/xen/arch/x86/mm/shadow/multi.c
> >@@ -3660,7 +3660,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
> >           */
> >          if ( is_hvm_vcpu(v) && !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
> >              pfec[0] &= ~PFEC_insn_fetch;
> >-        return INVALID_GFN;
> >+        return gfn_x(INVALID_GFN);
> >      }
> >      gfn = guest_walk_to_gfn(&gw);
> >
> >diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
> >index c424ad6..824796f 100644
> >--- a/xen/arch/x86/mm/shadow/private.h
> >+++ b/xen/arch/x86/mm/shadow/private.h
> >@@ -796,7 +796,7 @@ static inline unsigned long vtlb_lookup(struct vcpu *v,
> >                                          unsigned long va, uint32_t pfec)
> >  {
> >      unsigned long page_number = va >> PAGE_SHIFT;
> >-    unsigned long frame_number = INVALID_GFN;
> >+    unsigned long frame_number = gfn_x(INVALID_GFN);
> >      int i = vtlb_hash(page_number);
> >
> >      spin_lock(&v->arch.paging.vtlb_lock);
> >diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> >index c758459..b8c0a48 100644
> >--- a/xen/drivers/passthrough/amd/iommu_map.c
> >+++ b/xen/drivers/passthrough/amd/iommu_map.c
> >@@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
> >      unsigned long old_root_mfn;
> >      struct domain_iommu *hd = dom_iommu(d);
> >
> >-    if ( gfn == INVALID_GFN )
> >+    if ( gfn == gfn_x(INVALID_GFN) )
> >          return -EADDRNOTAVAIL;
> >      ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> >
> >diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> >index f010612..c322b9f 100644
> >--- a/xen/drivers/passthrough/vtd/iommu.c
> >+++ b/xen/drivers/passthrough/vtd/iommu.c
> >@@ -611,7 +611,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
> >          if ( iommu_domid == -1 )
> >              continue;
> >
> >-        if ( page_count != 1 || gfn == INVALID_GFN )
> >+        if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
> >              rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
> >                                         0, flush_dev_iotlb);
> >          else
> >@@ -640,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
> >
> >  static int __must_check iommu_flush_iotlb_all(struct domain *d)
> >  {
> >-    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
> >+    return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
> >  }
> >
> >  /* clear one page's page table */
> >diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> >index cd435d7..69cd6c5 100644
> >--- a/xen/drivers/passthrough/x86/iommu.c
> >+++ b/xen/drivers/passthrough/x86/iommu.c
> >@@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
> >              unsigned long mfn = page_to_mfn(page);
> >              unsigned long gfn = mfn_to_gmfn(d, mfn);
> >
> >-            if ( gfn != INVALID_GFN )
> >+            if ( gfn != gfn_x(INVALID_GFN) )
> >              {
> >                  ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> >                  BUG_ON(SHARED_M2P(gfn));
> >diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
> >index a8d980c..79ed4ff 100644
> >--- a/xen/include/asm-x86/guest_pt.h
> >+++ b/xen/include/asm-x86/guest_pt.h
> >@@ -32,7 +32,7 @@
> >  #error GUEST_PAGING_LEVELS not defined
> >  #endif
> >
> >-#define VALID_GFN(m) (m != INVALID_GFN)
> >+#define VALID_GFN(m) (m != gfn_x(INVALID_GFN))
> >
> >  static inline int
> >  valid_gfn(gfn_t m)
> >@@ -251,7 +251,7 @@ static inline gfn_t
> >  guest_walk_to_gfn(walk_t *gw)
> >  {
> >      if ( !(guest_l1e_get_flags(gw->l1e) & _PAGE_PRESENT) )
> >-        return _gfn(INVALID_GFN);
> >+        return INVALID_GFN;
> >      return guest_l1e_get_gfn(gw->l1e);
> >  }
> >
> >diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> >index 4ab3574..194020e 100644
> >--- a/xen/include/asm-x86/p2m.h
> >+++ b/xen/include/asm-x86/p2m.h
> >@@ -324,7 +324,7 @@ struct p2m_domain {
> >  #define NR_POD_MRP_ENTRIES 32
> >
> >  /* Encode ORDER_2M superpage in top bit of GFN */
> >-#define POD_LAST_SUPERPAGE (INVALID_GFN & ~(INVALID_GFN >> 1))
> >+#define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))
> >
> >              unsigned long list[NR_POD_MRP_ENTRIES];
> >              unsigned int idx;
> >diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> >index 7f207ec..58bc0b8 100644
> >--- a/xen/include/xen/mm.h
> >+++ b/xen/include/xen/mm.h
> >@@ -84,7 +84,7 @@ static inline bool_t mfn_eq(mfn_t x, mfn_t y)
> >
> >  TYPE_SAFE(unsigned long, gfn);
> >  #define PRI_gfn          "05lx"
> >-#define INVALID_GFN      (~0UL)
> >+#define INVALID_GFN      _gfn(~0UL)
> >
> >  #ifndef gfn_t
> >  #define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
> >
> 
> -- 
> Julien Grall

Acked-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
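
One hunk in the patch above deserves a worked example. The new spelling

    #define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))

still evaluates to just the top bit of an unsigned long, exactly as the old
(INVALID_GFN & ~(INVALID_GFN >> 1)) did. A standalone check (assuming an
LP64 host; a demonstration only, not Xen code):

    #include <stdio.h>

    int main(void)
    {
        unsigned long inv = ~0UL;              /* gfn_x(INVALID_GFN) */
        unsigned long top = inv & ~(inv >> 1); /* POD_LAST_SUPERPAGE */

        /* Prints 8000000000000000 on LP64: only the top bit is set, i.e.
         * the bit PoD borrows to flag an ORDER_2M superpage in a GFN. */
        printf("%lx\n", top);

        return 0;
    }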


* Re: [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN
  2016-07-08 19:20       ` Andrew Cooper
@ 2016-07-09  0:21         ` Elena Ufimtseva
  0 siblings, 0 replies; 23+ messages in thread
From: Elena Ufimtseva @ 2016-07-09  0:21 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, sstabellini, Jan Beulich, George Dunlap, Liu Jinsong,
	Christoph Egger, Tim Deegan, xen-devel, Julien Grall,
	Paul Durrant, Jun Nakajima

On Fri, Jul 08, 2016 at 08:20:03PM +0100, Andrew Cooper wrote:
> On 08/07/2016 23:01, Elena Ufimtseva wrote:
> >
> >>> @@ -838,7 +838,6 @@ mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
> >>>
> >>>      SHADOW_ERROR("gmfn %lx was OOS but not in hash table\n", mfn_x(gmfn));
> >>>      BUG();
> >>> -    return _mfn(INVALID_MFN);
> > Can the compiler be unhappy about this?
> 
> This was my suggestion, from a previous round of review.

Ah! Thanks for the explanation.
> 
> A while ago, I annotated BUG() with unreachable(), as execution will not
> continue from a bugframe, but the shadow code is definitely older than my
> change.
> 
> As such, compilers will have been dropping this return statement as part
> of dead-code elimination anyway.
> 
> This option is better than just replacing one bit of dead code with a
> different bit of dead code.
> 
> ~Andrew
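
Andrew's point is easy to demonstrate in miniature. Below is a sketch with a
stand-in BUG(), not Xen's real bugframe machinery; the ud2 trap, the
snapshot_lookup() function and its arguments are illustrative, and the code
relies on the GCC/Clang __builtin_unreachable() builtin:

    #define unreachable() __builtin_unreachable()

    /* Stand-in for Xen's BUG(): trap, then tell the compiler that
     * execution cannot continue past this point. */
    #define BUG() do {              \
        asm volatile ( "ud2" );     \
        unreachable();              \
    } while ( 0 )

    unsigned long snapshot_lookup(int found, unsigned long mfn)
    {
        if ( found )
            return mfn;

        BUG();
        /* No trailing return is needed (or emitted): without unreachable()
         * the compiler would assume the asm falls through and warn about
         * reaching the end of a non-void function; with it, any
         * "return ...;" placed here is provably dead code. */
    }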


* Re: [PATCH v6 12/14] xen/arm: p2m: Introduce helpers to insert and remove mapping
  2016-07-06 13:01 ` [PATCH v6 12/14] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
@ 2016-07-11 16:16   ` Julien Grall
  0 siblings, 0 replies; 23+ messages in thread
From: Julien Grall @ 2016-07-11 16:16 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini

Hi,

On 06/07/16 14:01, Julien Grall wrote:
>   int map_mmio_regions(struct domain *d,
> @@ -1189,12 +1207,8 @@ int map_mmio_regions(struct domain *d,
>                        unsigned long nr,
>                        mfn_t mfn)
>   {
> -    return apply_p2m_changes(d, INSERT,
> -                             pfn_to_paddr(gfn_x(start_gfn)),
> -                             pfn_to_paddr(gfn_x(start_gfn) + nr),
> -                             pfn_to_paddr(mfn_x(mfn)),
> -                             MATTR_DEV, 0, p2m_mmio_direct,
> -                             d->arch.p2m.default_access);
> +    return p2m_insert_mapping(d, start_gfn, nr, mfn,
> +                              MATTR_MEM, p2m_mmio_direct);

This should be MATTR_DEV and not MATTR_MEM. I will fix it in the next 
version.

>   }
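
In other words, the corrected helper would presumably read as follows in v7
(a sketch inferred from the hunk and the note above, not the actual v7
patch):

    int map_mmio_regions(struct domain *d,
                         gfn_t start_gfn,
                         unsigned long nr,
                         mfn_t mfn)
    {
        /* Device MMIO has to be mapped with device memory attributes,
         * hence MATTR_DEV; MATTR_MEM was a slip in v6. */
        return p2m_insert_mapping(d, start_gfn, nr, mfn,
                                  MATTR_DEV, p2m_mmio_direct);
    }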

Regards,

-- 
Julien Grall


Thread overview: 23+ messages
2016-07-06 13:00 [PATCH v6 00/14] xen/arm: Use the typesafes gfn and mfn Julien Grall
2016-07-06 13:01 ` [PATCH v6 01/14] xen: Use the typesafe mfn and gfn in map_mmio_regions Julien Grall
2016-07-06 13:01 ` [PATCH v6 02/14] xen/passthrough: x86: Use INVALID_GFN rather than INVALID_MFN Julien Grall
2016-07-06 13:01 ` [PATCH v6 03/14] xen: Use a typesafe to define INVALID_MFN Julien Grall
2016-07-06 13:04   ` Julien Grall
2016-07-08 22:01     ` Elena Ufimtseva
2016-07-08 19:20       ` Andrew Cooper
2016-07-09  0:21         ` Elena Ufimtseva
2016-07-08 19:39       ` Julien Grall
2016-07-06 13:01 ` [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN Julien Grall
2016-07-06 13:05   ` Julien Grall
2016-07-08 22:05     ` Elena Ufimtseva
2016-07-06 13:01 ` [PATCH v6 05/14] xen/arm: Rework the interface of p2m_lookup and use typesafe gfn and mfn Julien Grall
2016-07-06 13:01 ` [PATCH v6 06/14] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn Julien Grall
2016-07-06 13:01 ` [PATCH v6 07/14] xen/arm: map_regions_rw_cache: Map the region with p2m->default_access Julien Grall
2016-07-06 13:01 ` [PATCH v6 08/14] xen/arm: dom0_build: Remove dead code in allocate_memory Julien Grall
2016-07-06 13:01 ` [PATCH v6 09/14] xen/arm: p2m: Remove unused operation ALLOCATE Julien Grall
2016-07-06 13:01 ` [PATCH v6 10/14] xen/arm: Use the typesafes mfn and gfn in map_dev_mmio_region Julien Grall
2016-07-06 13:01 ` [PATCH v6 11/14] xen/arm: Use the typesafes mfn and gfn in map_regions_rw_cache Julien Grall
2016-07-06 13:01 ` [PATCH v6 12/14] xen/arm: p2m: Introduce helpers to insert and remove mapping Julien Grall
2016-07-11 16:16   ` Julien Grall
2016-07-06 13:01 ` [PATCH v6 13/14] xen/arm: p2m: Use typesafe gfn for {max, lowest}_mapped_gfn Julien Grall
2016-07-06 13:01 ` [PATCH v6 14/14] xen/arm: p2m: Rework the interface of apply_p2m_changes and use typesafe Julien Grall
