* [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
@ 2017-10-05 17:42 Julien Grall
  2017-10-05 17:42 ` [PATCH v2 1/9] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
                   ` (8 more replies)
  0 siblings, 9 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Elena Ufimtseva, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Julien Grall, Ian Jackson,
	Tim Deegan, Julien Grall, Paul Durrant, Tamas K Lengyel,
	Jan Beulich, Shane Wang, Suravee Suthikulpanit, Boris Ostrovsky,
	Gang Wei

Hi all,

Most of the users of page_to_mfn and mfn_to_page are either overriding
the macros to make them work with mfn_t, or using mfn_x/_mfn because the
rest of the function uses mfn_t.

So I think it is time to make __page_to_mfn and __mfn_to_page use
typesafe MFN.
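
For readers unfamiliar with the typesafe machinery: mfn_t wraps the raw
frame number in a single-member struct, so mixing it up with a plain
unsigned long becomes a compile-time error. A simplified sketch of the
TYPE_SAFE expansion (as in debug builds; not the exact code):

    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long n) { return (mfn_t){ n }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }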

The first 8 patches convert parts of the code to use typesafe MFN,
easing the tree-wide conversion in patch 9.

Note that this was only build tested on x86.

Cheers,

Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Shane Wang <shane.wang@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

Julien Grall (9):
  xen/arm: domain_build: Clean-up insert_11_bank
  xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash
  xen/x86: mem_sharing: Use copy_domain_page in
    __mem_sharing_unshare_page
  xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >>
    PAGE_SHIFT
  xen/kimage: Remove defined but unused variables
  xen/kexec,kimage: Convert kexec and kimage to use typesafe mfn_t
  xen/xenoprof: Convert the file to use typesafe MFN
  xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
  xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN

 xen/arch/arm/domain_build.c             | 15 ++++++++-------
 xen/arch/arm/kernel.c                   |  2 +-
 xen/arch/arm/mem_access.c               |  2 +-
 xen/arch/arm/mm.c                       |  8 ++++----
 xen/arch/arm/p2m.c                      |  8 +-------
 xen/arch/x86/cpu/vpmu.c                 |  4 ++--
 xen/arch/x86/debug.c                    |  2 +-
 xen/arch/x86/domain.c                   | 21 +++++++++++----------
 xen/arch/x86/domain_page.c              |  6 +++---
 xen/arch/x86/domctl.c                   |  2 +-
 xen/arch/x86/hvm/dm.c                   |  2 +-
 xen/arch/x86/hvm/dom0_build.c           |  6 +++---
 xen/arch/x86/hvm/hvm.c                  | 14 +++++++-------
 xen/arch/x86/hvm/ioreq.c                |  6 +++---
 xen/arch/x86/hvm/stdvga.c               |  2 +-
 xen/arch/x86/hvm/svm/svm.c              |  4 ++--
 xen/arch/x86/hvm/viridian.c             |  6 +++---
 xen/arch/x86/hvm/vmx/vmcs.c             |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c              | 10 +++++-----
 xen/arch/x86/hvm/vmx/vvmx.c             |  6 +++---
 xen/arch/x86/mm.c                       |  6 ------
 xen/arch/x86/mm/guest_walk.c            |  6 +++---
 xen/arch/x86/mm/hap/guest_walk.c        |  2 +-
 xen/arch/x86/mm/hap/hap.c               |  6 ------
 xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c           | 12 +-----------
 xen/arch/x86/mm/p2m-ept.c               |  4 ++++
 xen/arch/x86/mm/p2m-pod.c               |  6 ------
 xen/arch/x86/mm/p2m.c                   |  6 ------
 xen/arch/x86/mm/paging.c                |  6 ------
 xen/arch/x86/mm/shadow/common.c         |  2 +-
 xen/arch/x86/mm/shadow/multi.c          |  6 +++---
 xen/arch/x86/mm/shadow/private.h        | 16 ++--------------
 xen/arch/x86/numa.c                     |  2 +-
 xen/arch/x86/physdev.c                  |  2 +-
 xen/arch/x86/pv/callback.c              |  6 ------
 xen/arch/x86/pv/descriptor-tables.c     | 10 ----------
 xen/arch/x86/pv/dom0_build.c            |  6 ++++++
 xen/arch/x86/pv/domain.c                |  6 ------
 xen/arch/x86/pv/emul-gate-op.c          |  6 ------
 xen/arch/x86/pv/emul-priv-op.c          | 10 ----------
 xen/arch/x86/pv/grant_table.c           |  6 ------
 xen/arch/x86/pv/mm.c                    |  2 +-
 xen/arch/x86/pv/ro-page-fault.c         |  6 ------
 xen/arch/x86/smpboot.c                  |  6 ------
 xen/arch/x86/tboot.c                    |  4 ++--
 xen/arch/x86/traps.c                    |  2 +-
 xen/arch/x86/x86_64/mm.c                |  6 ++++++
 xen/common/domain.c                     |  4 ++--
 xen/common/grant_table.c                |  6 ++++++
 xen/common/kexec.c                      | 16 ++++++++--------
 xen/common/kimage.c                     | 33 +++++++++++++++------------------
 xen/common/memory.c                     |  6 ++++++
 xen/common/page_alloc.c                 |  6 ++++++
 xen/common/tmem.c                       |  2 +-
 xen/common/tmem_xen.c                   | 26 ++++++++++++++------------
 xen/common/trace.c                      |  6 ++++++
 xen/common/vmap.c                       |  9 +++++----
 xen/common/xenoprof.c                   | 19 +++++++++++++------
 xen/drivers/passthrough/amd/iommu_map.c |  6 ++++++
 xen/drivers/passthrough/iommu.c         |  2 +-
 xen/drivers/passthrough/x86/iommu.c     |  2 +-
 xen/include/asm-arm/mm.h                | 22 ++++++++++++----------
 xen/include/asm-arm/p2m.h               |  4 ++--
 xen/include/asm-x86/mm.h                | 12 ++++++------
 xen/include/asm-x86/p2m.h               |  2 +-
 xen/include/asm-x86/page.h              | 32 ++++++++++++++++----------------
 xen/include/xen/domain_page.h           |  8 ++++----
 xen/include/xen/kimage.h                |  4 ++--
 xen/include/xen/tmem_xen.h              |  2 +-
 70 files changed, 230 insertions(+), 287 deletions(-)

-- 
2.11.0


* [PATCH v2 1/9] xen/arm: domain_build: Clean-up insert_11_bank
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:42 ` [PATCH v2 2/9] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash Julien Grall
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini

    - Remove spurious ()
    - Add missing spaces
    - Turn 1 << to 1UL <<
    - Rename spfn to smfn and switch to mfn_t
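
On the 1 << to 1UL << change: with a plain int literal the shift is
performed as int, so it would overflow for order >= 31 before being
widened by pfn_to_paddr. A minimal sketch (not the patched code) of the
safe form:

    /* Shift in unsigned long first, then widen to paddr_t. */
    static paddr_t bank_size(unsigned int order)
    {
        return pfn_to_paddr(1UL << order); /* 1 << order would shift as int */
    }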

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

Cc: Stefano Stabellini <sstabellini@kernel.org>

    Changes in v2:
        - Remove double space
        - s/spfn/smfn/ and switch to mfn_t
---
 xen/arch/arm/domain_build.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3723dc3f78..167711b4fa 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -50,6 +50,8 @@ struct map_range_data
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
 
 //#define DEBUG_11_ALLOCATION
 #ifdef DEBUG_11_ALLOCATION
@@ -104,16 +106,16 @@ static bool insert_11_bank(struct domain *d,
                            unsigned int order)
 {
     int res, i;
-    paddr_t spfn;
+    mfn_t smfn;
     paddr_t start, size;
 
-    spfn = page_to_mfn(pg);
-    start = pfn_to_paddr(spfn);
-    size = pfn_to_paddr((1 << order));
+    smfn = page_to_mfn(pg);
+    start = mfn_to_maddr(smfn);
+    size = pfn_to_paddr(1UL << order);
 
     D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
              start, start + size,
-             1UL << (order+PAGE_SHIFT-20),
+             1UL << (order + PAGE_SHIFT - 20),
              /* Don't want format this as PRIpaddr (16 digit hex) */
              (unsigned long)(kinfo->unassigned_mem >> 20),
              order);
@@ -126,7 +128,7 @@ static bool insert_11_bank(struct domain *d,
         goto fail;
     }
 
-    res = guest_physmap_add_page(d, _gfn(spfn), _mfn(spfn), order);
+    res = guest_physmap_add_page(d, _gfn(mfn_x(smfn)), smfn, order);
     if ( res )
         panic("Failed map pages to DOM0: %d", res);
 
@@ -167,7 +169,8 @@ static bool insert_11_bank(struct domain *d,
          */
         if ( start + size < bank->start && kinfo->mem.nr_banks < NR_MEM_BANKS )
         {
-            memmove(bank + 1, bank, sizeof(*bank)*(kinfo->mem.nr_banks - i));
+            memmove(bank + 1, bank,
+                    sizeof(*bank) * (kinfo->mem.nr_banks - i));
             kinfo->mem.nr_banks++;
             bank->start = start;
             bank->size = size;
-- 
2.11.0


* [PATCH v2 2/9] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
  2017-10-05 17:42 ` [PATCH v2 1/9] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:42 ` [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page Julien Grall
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini

The arm32 version of the function is_xen_heap_page currently defines a
variable _mfn. This will lead to a compiler error when using typesafe
MFN in a follow-up patch:

called object '_mfn' is not a function or function pointer

Fix it by renaming the local variable _mfn to mfn_.
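
The clash is a classic shadowing pitfall: once page_to_mfn() expands to
_mfn(__page_to_mfn(pg)), any call site of is_xen_heap_mfn() passing such
an expression evaluates _mfn(...) while the macro's local _mfn variable
is in scope. A stand-alone illustration (hypothetical code, not the Xen
sources):

    typedef struct { unsigned long mfn; } mfn_t;
    static inline mfn_t _mfn(unsigned long n) { return (mfn_t){ n }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

    static unsigned long heap_start, heap_end;

    #define is_xen_heap_mfn(mfn) ({                 \
        unsigned long _mfn = mfn_x(mfn);            \
        (_mfn >= heap_start && _mfn < heap_end);    \
    })

    /* is_xen_heap_mfn(_mfn(42)) expands with the local _mfn shadowing
     * the helper at the point of the initializer, hence
     * "called object '_mfn' is not a function or function pointer". */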

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

Cc: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/include/asm-arm/mm.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index cd6dfb54b9..737a429409 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -140,9 +140,9 @@ extern vaddr_t xenheap_virt_start;
 #ifdef CONFIG_ARM_32
 #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
 #define is_xen_heap_mfn(mfn) ({                                 \
-    unsigned long _mfn = (mfn);                                 \
-    (_mfn >= mfn_x(xenheap_mfn_start) &&                        \
-     _mfn < mfn_x(xenheap_mfn_end));                            \
+    unsigned long mfn_ = (mfn);                                 \
+    (mfn_ >= mfn_x(xenheap_mfn_start) &&                        \
+     mfn_ < mfn_x(xenheap_mfn_end));                            \
 })
 #else
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
-- 
2.11.0


* [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
  2017-10-05 17:42 ` [PATCH v2 1/9] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
  2017-10-05 17:42 ` [PATCH v2 2/9] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:49   ` Andrew Cooper
  2017-10-05 17:52   ` Tamas K Lengyel
  2017-10-05 17:42 ` [PATCH v2 4/9] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT Julien Grall
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Julien Grall, Tamas K Lengyel, Jan Beulich

The function __mem_sharing_unshare_page contains an open-coded version
of copy_domain_page. Use the function to simplify the code a bit.

At the same time, replace _mfn(__page_to_mfn(...)) with page_to_mfn(...),
given that the file already provides a typesafe version of page_to_mfn.
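
For reference, copy_domain_page() performs exactly the map/copy/unmap
sequence being removed; a sketch of its effect (not necessarily the
exact Xen implementation):

    void copy_domain_page(mfn_t dest, mfn_t source)
    {
        const void *src = map_domain_page(source);
        void *dst = map_domain_page(dest);

        memcpy(dst, src, PAGE_SIZE);
        unmap_domain_page(dst);
        unmap_domain_page(src);
    }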

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - New patch
---
 xen/arch/x86/mm/mem_sharing.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index b856028c02..6f4be95515 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1106,7 +1106,6 @@ int __mem_sharing_unshare_page(struct domain *d,
     p2m_type_t p2mt;
     mfn_t mfn;
     struct page_info *page, *old_page;
-    void *s, *t;
     int last_gfn;
     gfn_info_t *gfn_info = NULL;
    
@@ -1185,11 +1184,7 @@ int __mem_sharing_unshare_page(struct domain *d,
         return -ENOMEM;
     }
 
-    s = map_domain_page(_mfn(__page_to_mfn(old_page)));
-    t = map_domain_page(_mfn(__page_to_mfn(page)));
-    memcpy(t, s, PAGE_SIZE);
-    unmap_domain_page(s);
-    unmap_domain_page(t);
+    copy_domain_page(page_to_mfn(page), page_to_mfn(old_page));
 
     BUG_ON(set_shared_p2m_entry(d, gfn, page_to_mfn(page)));
     mem_sharing_gfn_destroy(old_page, d, gfn_info);
-- 
2.11.0


* [PATCH v2 4/9] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
                   ` (2 preceding siblings ...)
  2017-10-05 17:42 ` [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:42 ` [PATCH v2 5/9] xen/kimage: Remove defined but unused variables Julien Grall
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Elena Ufimtseva, George Dunlap, Andrew Cooper, Julien Grall,
	Tim Deegan, Jan Beulich

The constructions _mfn(... >> PAGE_SHIFT) and mfn_to_page(... >> PAGE_SHIFT)
can respectively be replaced by maddr_to_mfn(...) and
maddr_to_page(...).
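
Both helpers are thin wrappers around the same shift; on x86 they are
roughly (simplified):

    #define maddr_to_mfn(ma)  _mfn((ma) >> PAGE_SHIFT)
    #define maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)

so the change is purely about readability and typesafety.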

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: George Dunlap <george.dunlap@eu.citrix.com>

    Changes in v2:
        - Add Andrew's reviewed-by
---
 xen/arch/x86/debug.c            | 2 +-
 xen/arch/x86/mm/shadow/common.c | 2 +-
 xen/arch/x86/mm/shadow/multi.c  | 6 +++---
 xen/common/kimage.c             | 6 +++---
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 1c10b84a16..9159f32db4 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -98,7 +98,7 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2_pgentry_t l2e, *l2t;
     l1_pgentry_t l1e, *l1t;
     unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
-    mfn_t mfn = _mfn(cr3 >> PAGE_SHIFT);
+    mfn_t mfn = maddr_to_mfn(cr3);
 
     DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
           cr3, pgd3val);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 86186cccdf..f65d2a6523 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2640,7 +2640,7 @@ static int sh_remove_shadow_via_pointer(struct domain *d, mfn_t smfn)
     ASSERT(sh_type_has_up_pointer(d, sp->u.sh.type));
 
     if (sp->up == 0) return 0;
-    pmfn = _mfn(sp->up >> PAGE_SHIFT);
+    pmfn = maddr_to_mfn(sp->up);
     ASSERT(mfn_valid(pmfn));
     vaddr = map_domain_page(pmfn);
     ASSERT(vaddr);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 28030acbf6..1e42e1d8ab 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2425,7 +2425,7 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
     sp = mfn_to_page(smfn);
     if ( sp->u.sh.count != 1 || !sp->up )
         return 0;
-    smfn = _mfn(sp->up >> PAGE_SHIFT);
+    smfn = maddr_to_mfn(sp->up);
     ASSERT(mfn_valid(smfn));
 
 #if (SHADOW_PAGING_LEVELS == 4)
@@ -2434,7 +2434,7 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
     ASSERT(sh_type_has_up_pointer(d, SH_type_l2_shadow));
     if ( sp->u.sh.count != 1 || !sp->up )
         return 0;
-    smfn = _mfn(sp->up >> PAGE_SHIFT);
+    smfn = maddr_to_mfn(sp->up);
     ASSERT(mfn_valid(smfn));
 
     /* up to l4 */
@@ -2442,7 +2442,7 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
     if ( sp->u.sh.count != 1
          || !sh_type_has_up_pointer(d, SH_type_l3_64_shadow) || !sp->up )
         return 0;
-    smfn = _mfn(sp->up >> PAGE_SHIFT);
+    smfn = maddr_to_mfn(sp->up);
     ASSERT(mfn_valid(smfn));
 #endif
 
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index cf624d10fd..ebc71affd1 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -504,7 +504,7 @@ static void kimage_free_entry(kimage_entry_t entry)
 {
     struct page_info *page;
 
-    page = mfn_to_page(entry >> PAGE_SHIFT);
+    page = maddr_to_page(entry);
     free_domheap_page(page);
 }
 
@@ -636,8 +636,8 @@ static struct page_info *kimage_alloc_page(struct kexec_image *image,
         if ( old )
         {
             /* If so move it. */
-            mfn_t old_mfn = _mfn(*old >> PAGE_SHIFT);
-            mfn_t mfn = _mfn(addr >> PAGE_SHIFT);
+            mfn_t old_mfn = maddr_to_mfn(*old);
+            mfn_t mfn = maddr_to_mfn(addr);
 
             copy_domain_page(mfn, old_mfn);
             clear_domain_page(old_mfn);
-- 
2.11.0


* [PATCH v2 5/9] xen/kimage: Remove defined but unused variables
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
                   ` (3 preceding siblings ...)
  2017-10-05 17:42 ` [PATCH v2 4/9] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:42 ` [PATCH v2 6/9] xen/kexec, kimage: Convert kexec and kimage to use typesafe mfn_t Julien Grall
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Julien Grall

In the function kimage_alloc_normal_control_page, the variables mfn and
emfn are defined but not used. Remove them.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's reviewed-by
---
 xen/common/kimage.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index ebc71affd1..07587896a4 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -310,14 +310,11 @@ static struct page_info *kimage_alloc_normal_control_page(
      * destination page.
      */
     do {
-        unsigned long mfn, emfn;
         paddr_t addr, eaddr;
 
         page = kimage_alloc_zeroed_page(memflags);
         if ( !page )
             break;
-        mfn   = page_to_mfn(page);
-        emfn  = mfn + 1;
         addr  = page_to_maddr(page);
         eaddr = addr + PAGE_SIZE;
         if ( kimage_is_destination_range(image, addr, eaddr) )
-- 
2.11.0


* [PATCH v2 6/9] xen/kexec, kimage: Convert kexec and kimage to use typesafe mfn_t
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
                   ` (4 preceding siblings ...)
  2017-10-05 17:42 ` [PATCH v2 5/9] xen/kimage: Remove defined but unused variables Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:51   ` Andrew Cooper
  2017-10-05 17:42 ` [PATCH v2 7/9] xen/xenoprof: Convert the file to use typesafe MFN Julien Grall
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall

At the same time, correctly align the parameters where a prototype
changed.
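
For context on why maddr_to_mfn() can be applied directly to a kimage
entry: an entry packs a machine address with the kexec indirection flags
in its low bits (cf. kimage_entry_ind() masking with 0xf below), and
shifting by PAGE_SHIFT discards those flag bits. A hypothetical
illustration (not the patched code):

    /* entry = maddr | IND_* flag (low 4 bits, all below PAGE_SHIFT). */
    static mfn_t entry_to_mfn(kimage_entry_t entry)
    {
        return maddr_to_mfn(entry); /* the shift drops the flag bits */
    }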

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/common/kexec.c       | 16 ++++++++--------
 xen/common/kimage.c      | 30 ++++++++++++++++++------------
 xen/include/xen/kimage.h |  4 ++--
 3 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index fcc68bd4d8..c14cbb2b9c 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -905,11 +905,11 @@ static uint16_t kexec_load_v1_arch(void)
 #endif
 }
 
-static int kexec_segments_add_segment(
-    unsigned int *nr_segments, xen_kexec_segment_t *segments,
-    unsigned long mfn)
+static int kexec_segments_add_segment(unsigned int *nr_segments,
+                                      xen_kexec_segment_t *segments,
+                                      mfn_t mfn)
 {
-    paddr_t maddr = (paddr_t)mfn << PAGE_SHIFT;
+    paddr_t maddr = mfn_to_maddr(mfn);
     unsigned int n = *nr_segments;
 
     /* Need a new segment? */
@@ -930,7 +930,7 @@ static int kexec_segments_add_segment(
     return 0;
 }
 
-static int kexec_segments_from_ind_page(unsigned long mfn,
+static int kexec_segments_from_ind_page(mfn_t mfn,
                                         unsigned int *nr_segments,
                                         xen_kexec_segment_t *segments,
                                         bool_t compat)
@@ -939,7 +939,7 @@ static int kexec_segments_from_ind_page(unsigned long mfn,
     kimage_entry_t *entry;
     int ret = 0;
 
-    page = map_domain_page(_mfn(mfn));
+    page = map_domain_page(mfn);
 
     /*
      * Walk the indirection page list, adding destination pages to the
@@ -961,7 +961,7 @@ static int kexec_segments_from_ind_page(unsigned long mfn,
             break;
         case IND_INDIRECTION:
             unmap_domain_page(page);
-            entry = page = map_domain_page(_mfn(mfn));
+            entry = page = map_domain_page(mfn);
             continue;
         case IND_DONE:
             goto done;
@@ -990,7 +990,7 @@ static int kexec_do_load_v1(xen_kexec_load_v1_t *load, int compat)
     xen_kexec_segment_t *segments;
     uint16_t arch;
     unsigned int nr_segments = 0;
-    unsigned long ind_mfn = load->image.indirection_page >> PAGE_SHIFT;
+    mfn_t ind_mfn = maddr_to_mfn(load->image.indirection_page);
     int ret;
 
     arch = kexec_load_v1_arch();
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index 07587896a4..afd8292cc1 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -23,6 +23,12 @@
 
 #include <asm/page.h>
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+#undef page_to_mfn
+#define page_to_mfn(pg)  _mfn(__page_to_mfn(pg))
+
 /*
  * When kexec transitions to the new kernel there is a one-to-one
  * mapping between physical and virtual addresses.  On processors
@@ -76,7 +82,7 @@ static struct page_info *kimage_alloc_zeroed_page(unsigned memflags)
     if ( !page )
         return NULL;
 
-    clear_domain_page(_mfn(page_to_mfn(page)));
+    clear_domain_page(page_to_mfn(page));
 
     return page;
 }
@@ -405,7 +411,7 @@ static struct page_info *kimage_alloc_crash_control_page(struct kexec_image *ima
     if ( page )
     {
         image->next_crash_page = hole_end;
-        clear_domain_page(_mfn(page_to_mfn(page)));
+        clear_domain_page(page_to_mfn(page));
     }
 
     return page;
@@ -641,7 +647,7 @@ static struct page_info *kimage_alloc_page(struct kexec_image *image,
             *old = (addr & ~PAGE_MASK) | IND_SOURCE;
             unmap_domain_page(old);
 
-            page = mfn_to_page(mfn_x(old_mfn));
+            page = mfn_to_page(old_mfn);
             break;
         }
         else
@@ -840,11 +846,11 @@ kimage_entry_t *kimage_entry_next(kimage_entry_t *entry, bool_t compat)
     return entry + 1;
 }
 
-unsigned long kimage_entry_mfn(kimage_entry_t *entry, bool_t compat)
+mfn_t kimage_entry_mfn(kimage_entry_t *entry, bool_t compat)
 {
     if ( compat )
-        return *(uint32_t *)entry >> PAGE_SHIFT;
-    return *entry >> PAGE_SHIFT;
+        return maddr_to_mfn(*(uint32_t *)entry);
+    return maddr_to_mfn(*entry);
 }
 
 unsigned long kimage_entry_ind(kimage_entry_t *entry, bool_t compat)
@@ -854,7 +860,7 @@ unsigned long kimage_entry_ind(kimage_entry_t *entry, bool_t compat)
     return *entry & 0xf;
 }
 
-int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
+int kimage_build_ind(struct kexec_image *image, mfn_t ind_mfn,
                      bool_t compat)
 {
     void *page;
@@ -862,7 +868,7 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
     int ret = 0;
     paddr_t dest = KIMAGE_NO_DEST;
 
-    page = map_domain_page(_mfn(ind_mfn));
+    page = map_domain_page(ind_mfn);
     if ( !page )
         return -ENOMEM;
 
@@ -873,7 +879,7 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
     for ( entry = page; ;  )
     {
         unsigned long ind;
-        unsigned long mfn;
+        mfn_t mfn;
 
         ind = kimage_entry_ind(entry, compat);
         mfn = kimage_entry_mfn(entry, compat);
@@ -881,14 +887,14 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
         switch ( ind )
         {
         case IND_DESTINATION:
-            dest = (paddr_t)mfn << PAGE_SHIFT;
+            dest = mfn_to_maddr(mfn);
             ret = kimage_set_destination(image, dest);
             if ( ret < 0 )
                 goto done;
             break;
         case IND_INDIRECTION:
             unmap_domain_page(page);
-            page = map_domain_page(_mfn(mfn));
+            page = map_domain_page(mfn);
             entry = page;
             continue;
         case IND_DONE:
@@ -913,7 +919,7 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
                 goto done;
             }
 
-            copy_domain_page(_mfn(page_to_mfn(xen_page)), _mfn(mfn));
+            copy_domain_page(page_to_mfn(xen_page), mfn);
             put_page(guest_page);
 
             ret = kimage_add_page(image, page_to_maddr(xen_page));
diff --git a/xen/include/xen/kimage.h b/xen/include/xen/kimage.h
index d10ebf7844..cbfb9e9054 100644
--- a/xen/include/xen/kimage.h
+++ b/xen/include/xen/kimage.h
@@ -48,9 +48,9 @@ struct page_info *kimage_alloc_control_page(struct kexec_image *image,
                                             unsigned memflags);
 
 kimage_entry_t *kimage_entry_next(kimage_entry_t *entry, bool_t compat);
-unsigned long kimage_entry_mfn(kimage_entry_t *entry, bool_t compat);
+mfn_t kimage_entry_mfn(kimage_entry_t *entry, bool_t compat);
 unsigned long kimage_entry_ind(kimage_entry_t *entry, bool_t compat);
-int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
+int kimage_build_ind(struct kexec_image *image, mfn_t ind_mfn,
                      bool_t compat);
 
 #endif /* __ASSEMBLY__ */
-- 
2.11.0


* [PATCH v2 7/9] xen/xenoprof: Convert the file to use typesafe MFN
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
                   ` (5 preceding siblings ...)
  2017-10-05 17:42 ` [PATCH v2 6/9] xen/kexec, kimage: Convert kexec and kimage to use typesafe mfn_t Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:42 ` [PATCH v2 8/9] xen/tmem: Convert the file common/tmem_xen.c " Julien Grall
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
  8 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Julien Grall, Ian Jackson,
	Tim Deegan, Jan Beulich

The file common/xenoprof.c is now converted to use typesafe MFN. This
requires overriding the macros virt_to_mfn and mfn_to_page to make them
work with mfn_t.

Also, add a couple of missing newlines in the modified code.
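
The per-page loops rely on mfn_add(), which is just the typesafe form
of MFN arithmetic, roughly:

    static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
    {
        return _mfn(mfn_x(mfn) + i);
    }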

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

    Changes in v2:
        - Add missing newlines
        - Add Andrew's reviewed-by
---
 xen/common/xenoprof.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index a5fe6204a5..5acdde5691 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -19,6 +19,12 @@
 #include <xsm/xsm.h>
 #include <xen/hypercall.h>
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+
 /* Limit amount of pages used for shared buffer (per domain) */
 #define MAX_OPROF_SHARED_PAGES 32
 
@@ -134,25 +140,27 @@ static void xenoprof_reset_buf(struct domain *d)
 }
 
 static int
-share_xenoprof_page_with_guest(struct domain *d, unsigned long mfn, int npages)
+share_xenoprof_page_with_guest(struct domain *d, mfn_t mfn, int npages)
 {
     int i;
 
     /* Check if previous page owner has released the page. */
     for ( i = 0; i < npages; i++ )
     {
-        struct page_info *page = mfn_to_page(mfn + i);
+        struct page_info *page = mfn_to_page(mfn_add(mfn, i));
+
         if ( (page->count_info & (PGC_allocated|PGC_count_mask)) != 0 )
         {
             printk(XENLOG_G_INFO "dom%d mfn %#lx page->count_info %#lx\n",
-                   d->domain_id, mfn + i, page->count_info);
+                   d->domain_id, mfn_x(mfn_add(mfn, i)), page->count_info);
             return -EBUSY;
         }
         page_set_owner(page, NULL);
     }
 
     for ( i = 0; i < npages; i++ )
-        share_xen_page_with_guest(mfn_to_page(mfn + i), d, XENSHARE_writable);
+        share_xen_page_with_guest(mfn_to_page(mfn_add(mfn, i)),
+                                  d, XENSHARE_writable);
 
     return 0;
 }
@@ -161,11 +169,12 @@ static void
 unshare_xenoprof_page_with_guest(struct xenoprof *x)
 {
     int i, npages = x->npages;
-    unsigned long mfn = virt_to_mfn(x->rawbuf);
+    mfn_t mfn = virt_to_mfn(x->rawbuf);
 
     for ( i = 0; i < npages; i++ )
     {
-        struct page_info *page = mfn_to_page(mfn + i);
+        struct page_info *page = mfn_to_page(mfn_add(mfn, i));
+
         BUG_ON(page_get_owner(page) != current->domain);
         if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
             put_page(page);
-- 
2.11.0


* [PATCH v2 8/9] xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
                   ` (6 preceding siblings ...)
  2017-10-05 17:42 ` [PATCH v2 7/9] xen/xenoprof: Convert the file to use typesafe MFN Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
  8 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Konrad Rzeszutek Wilk

The file common/tmem_xen.c is now converted to use typesafe MFN. This
requires overriding the macro page_to_mfn to make it work with mfn_t.

Note that all the variables converted to mfn_t have their initial value,
when set, switched from 0 to INVALID_MFN. This is fine because the
initial value was always overridden before being used.

Also add a couple of missing newlines in the code, as suggested by
Andrew.
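
For reference, INVALID_MFN is the typesafe all-ones sentinel, roughly:

    #define INVALID_MFN _mfn(~0UL)

so an mfn_t initialised to it can never alias a real frame number such
as 0.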

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

    Changes in v2:
        - Add missing newlines
        - Add Andrew's reviewed-by
---
 xen/common/tmem_xen.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 20f74b268f..bd52e44faf 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -14,6 +14,10 @@
 #include <xen/cpu.h>
 #include <xen/init.h>
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+
 bool __read_mostly opt_tmem;
 boolean_param("tmem", opt_tmem);
 
@@ -31,7 +35,7 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, dstmem);
 static DEFINE_PER_CPU_READ_MOSTLY(void *, scratch_page);
 
 #if defined(CONFIG_ARM)
-static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
+static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn,
                                  struct page_info **pcli_pfp, bool cli_write)
 {
     ASSERT_UNREACHABLE();
@@ -39,14 +43,14 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
 }
 
 static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
-                                unsigned long cli_mfn, bool mark_dirty)
+                                mfn_t cli_mfn, bool mark_dirty)
 {
     ASSERT_UNREACHABLE();
 }
 #else
 #include <asm/p2m.h>
 
-static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
+static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn,
                                  struct page_info **pcli_pfp, bool cli_write)
 {
     p2m_type_t t;
@@ -68,16 +72,17 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
 
     *pcli_mfn = page_to_mfn(page);
     *pcli_pfp = page;
-    return map_domain_page(_mfn(*pcli_mfn));
+
+    return map_domain_page(*pcli_mfn);
 }
 
 static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
-                                unsigned long cli_mfn, bool mark_dirty)
+                                mfn_t cli_mfn, bool mark_dirty)
 {
     if ( mark_dirty )
     {
         put_page_and_type(cli_pfp);
-        paging_mark_dirty(current->domain, _mfn(cli_mfn));
+        paging_mark_dirty(current->domain, cli_mfn);
     }
     else
         put_page(cli_pfp);
@@ -88,14 +93,14 @@ static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
 int tmem_copy_from_client(struct page_info *pfp,
     xen_pfn_t cmfn, tmem_cli_va_param_t clibuf)
 {
-    unsigned long tmem_mfn, cli_mfn = 0;
+    mfn_t tmem_mfn, cli_mfn = INVALID_MFN;
     char *tmem_va, *cli_va = NULL;
     struct page_info *cli_pfp = NULL;
     int rc = 1;
 
     ASSERT(pfp != NULL);
     tmem_mfn = page_to_mfn(pfp);
-    tmem_va = map_domain_page(_mfn(tmem_mfn));
+    tmem_va = map_domain_page(tmem_mfn);
     if ( guest_handle_is_null(clibuf) )
     {
         cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 0);
@@ -125,7 +130,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn,
     unsigned char *wmem = this_cpu(workmem);
     char *scratch = this_cpu(scratch_page);
     struct page_info *cli_pfp = NULL;
-    unsigned long cli_mfn = 0;
+    mfn_t cli_mfn = INVALID_MFN;
     void *cli_va = NULL;
 
     if ( dmem == NULL || wmem == NULL )
@@ -152,7 +157,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn,
 int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
     tmem_cli_va_param_t clibuf)
 {
-    unsigned long tmem_mfn, cli_mfn = 0;
+    mfn_t tmem_mfn, cli_mfn = INVALID_MFN;
     char *tmem_va, *cli_va = NULL;
     struct page_info *cli_pfp = NULL;
     int rc = 1;
@@ -165,7 +170,8 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
             return -EFAULT;
     }
     tmem_mfn = page_to_mfn(pfp);
-    tmem_va = map_domain_page(_mfn(tmem_mfn));
+    tmem_va = map_domain_page(tmem_mfn);
+
     if ( cli_va )
     {
         memcpy(cli_va, tmem_va, PAGE_SIZE);
@@ -181,7 +187,7 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
 int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
                                     size_t size, tmem_cli_va_param_t clibuf)
 {
-    unsigned long cli_mfn = 0;
+    mfn_t cli_mfn = INVALID_MFN;
     struct page_info *cli_pfp = NULL;
     void *cli_va = NULL;
     char *scratch = this_cpu(scratch_page);
-- 
2.11.0


* [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
                   ` (7 preceding siblings ...)
  2017-10-05 17:42 ` [PATCH v2 8/9] xen/tmem: Convert the file common/tmem_xen.c " Julien Grall
@ 2017-10-05 17:42 ` Julien Grall
  2017-10-05 17:59   ` Andrew Cooper
                     ` (5 more replies)
  8 siblings, 6 replies; 27+ messages in thread
From: Julien Grall @ 2017-10-05 17:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Jun Nakajima, Kevin Tian, Stefano Stabellini, Wei Liu,
	Suravee Suthikulpanit, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Julien Grall, Ian Jackson,
	Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Shane Wang, Boris Ostrovsky, Gang Wei, Paul Durrant

Most of the users of page_to_mfn and mfn_to_page are either overriding
the macros to make them work with mfn_t, or using mfn_x/_mfn because the
rest of the function uses mfn_t.

So make __page_to_mfn and __mfn_to_page return mfn_t by default.

Only reasonable clean-ups are done in this patch because it is
already quite big. So some of the files now override page_to_mfn and
mfn_to_page to avoid using mfn_t.

Lastly, domain_page_map_to_mfn is also converted to return mfn_t, given
that most of the callers were already switched to
_mfn(domain_page_map_to_mfn(...)).
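
On x86, the heart of the change to the conversion macros is essentially
the following (a simplified sketch of the idea; see the asm-x86/page.h
hunk for the real diff):

    /* Wrap/unwrap mfn_t at the frame-table boundary. */
    #define __page_to_mfn(pg)  _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table)))
    #define __mfn_to_page(mfn) (frame_table + pfn_to_pdx(mfn_x(mfn)))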

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

Andrew suggested to drop IS_VALID_PAGE in xen/tmem_xen.h. His comment
was:

"/sigh  This is tautological.  The definition of a "valid mfn" in this
case is one for which we have frametable entry, and by having a struct
page_info in our hands, this is by definition true (unless you have a
wild pointer, at which point your bug is elsewhere).

IS_VALID_PAGE() is only ever used in assertions and never usefully, so
instead I would remove it entirely rather than trying to fix it up."

I can remove the function in a separate patch at the beginning of the
series if Konrad (TMEM maintainer) is happy with that.

Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: Shane Wang <shane.wang@intel.com>

    Changes in v2:
        - Some part have been moved in separate patch
        - Remove one spurious comment
        - Convert domain_page_to_mfn to use mfn_t
---
 xen/arch/arm/domain_build.c             |  2 --
 xen/arch/arm/kernel.c                   |  2 +-
 xen/arch/arm/mem_access.c               |  2 +-
 xen/arch/arm/mm.c                       |  8 ++++----
 xen/arch/arm/p2m.c                      |  8 +-------
 xen/arch/x86/cpu/vpmu.c                 |  4 ++--
 xen/arch/x86/domain.c                   | 21 +++++++++++----------
 xen/arch/x86/domain_page.c              |  6 +++---
 xen/arch/x86/domctl.c                   |  2 +-
 xen/arch/x86/hvm/dm.c                   |  2 +-
 xen/arch/x86/hvm/dom0_build.c           |  6 +++---
 xen/arch/x86/hvm/hvm.c                  | 14 +++++++-------
 xen/arch/x86/hvm/ioreq.c                |  6 +++---
 xen/arch/x86/hvm/stdvga.c               |  2 +-
 xen/arch/x86/hvm/svm/svm.c              |  4 ++--
 xen/arch/x86/hvm/viridian.c             |  6 +++---
 xen/arch/x86/hvm/vmx/vmcs.c             |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c              | 10 +++++-----
 xen/arch/x86/hvm/vmx/vvmx.c             |  6 +++---
 xen/arch/x86/mm.c                       |  6 ------
 xen/arch/x86/mm/guest_walk.c            |  6 +++---
 xen/arch/x86/mm/hap/guest_walk.c        |  2 +-
 xen/arch/x86/mm/hap/hap.c               |  6 ------
 xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c           |  5 -----
 xen/arch/x86/mm/p2m-ept.c               |  4 ++++
 xen/arch/x86/mm/p2m-pod.c               |  6 ------
 xen/arch/x86/mm/p2m.c                   |  6 ------
 xen/arch/x86/mm/paging.c                |  6 ------
 xen/arch/x86/mm/shadow/private.h        | 16 ++--------------
 xen/arch/x86/numa.c                     |  2 +-
 xen/arch/x86/physdev.c                  |  2 +-
 xen/arch/x86/pv/callback.c              |  6 ------
 xen/arch/x86/pv/descriptor-tables.c     | 10 ----------
 xen/arch/x86/pv/dom0_build.c            |  6 ++++++
 xen/arch/x86/pv/domain.c                |  6 ------
 xen/arch/x86/pv/emul-gate-op.c          |  6 ------
 xen/arch/x86/pv/emul-priv-op.c          | 10 ----------
 xen/arch/x86/pv/grant_table.c           |  6 ------
 xen/arch/x86/pv/mm.c                    |  2 +-
 xen/arch/x86/pv/ro-page-fault.c         |  6 ------
 xen/arch/x86/smpboot.c                  |  6 ------
 xen/arch/x86/tboot.c                    |  4 ++--
 xen/arch/x86/traps.c                    |  2 +-
 xen/arch/x86/x86_64/mm.c                |  6 ++++++
 xen/common/domain.c                     |  4 ++--
 xen/common/grant_table.c                |  6 ++++++
 xen/common/kimage.c                     |  6 ------
 xen/common/memory.c                     |  6 ++++++
 xen/common/page_alloc.c                 |  6 ++++++
 xen/common/tmem.c                       |  2 +-
 xen/common/tmem_xen.c                   |  4 ----
 xen/common/trace.c                      |  6 ++++++
 xen/common/vmap.c                       |  9 +++++----
 xen/common/xenoprof.c                   |  2 --
 xen/drivers/passthrough/amd/iommu_map.c |  6 ++++++
 xen/drivers/passthrough/iommu.c         |  2 +-
 xen/drivers/passthrough/x86/iommu.c     |  2 +-
 xen/include/asm-arm/mm.h                | 16 +++++++++-------
 xen/include/asm-arm/p2m.h               |  4 ++--
 xen/include/asm-x86/mm.h                | 12 ++++++------
 xen/include/asm-x86/p2m.h               |  2 +-
 xen/include/asm-x86/page.h              | 32 ++++++++++++++++----------------
 xen/include/xen/domain_page.h           |  8 ++++----
 xen/include/xen/tmem_xen.h              |  2 +-
 65 files changed, 161 insertions(+), 234 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 167711b4fa..a6b471e2f6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -50,8 +50,6 @@ struct map_range_data
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
 
 //#define DEBUG_11_ALLOCATION
 #ifdef DEBUG_11_ALLOCATION
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 9c183f96da..f391938640 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -295,7 +295,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
         iounmap(input);
         return -ENOMEM;
     }
-    mfn = _mfn(page_to_mfn(pages));
+    mfn = page_to_mfn(pages);
     output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
 
     rc = perform_gunzip(output, input, size);
diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index 0f2cbb81d3..112e291cba 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -210,7 +210,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
     if ( t != p2m_ram_rw )
         goto err;
 
-    page = mfn_to_page(mfn_x(mfn));
+    page = mfn_to_page(mfn);
 
     if ( unlikely(!get_page(page, v->domain)) )
         page = NULL;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9a37f29ce6..ebce2320ec 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -477,7 +477,7 @@ void unmap_domain_page(const void *va)
     local_irq_restore(flags);
 }
 
-unsigned long domain_page_map_to_mfn(const void *ptr)
+mfn_t domain_page_map_to_mfn(const void *ptr)
 {
     unsigned long va = (unsigned long)ptr;
     lpae_t *map = this_cpu(xen_dommap);
@@ -485,12 +485,12 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
     unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
-        return __virt_to_mfn(va);
+        return virt_to_mfn(va);
 
     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
     ASSERT(map[slot].pt.avail != 0);
 
-    return map[slot].pt.base + offset;
+    return _mfn(map[slot].pt.base + offset);
 }
 #endif
 
@@ -1286,7 +1286,7 @@ int xenmem_add_to_physmap_one(
             return -EINVAL;
         }
 
-        mfn = _mfn(page_to_mfn(page));
+        mfn = page_to_mfn(page);
         t = p2m_map_foreign;
 
         rcu_unlock_domain(od);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0410b1e86b..1e7a0c6c40 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -38,12 +38,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 
 #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
 
-/* Override macros from asm/mm.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 unsigned int __read_mostly p2m_ipa_bits;
 
 /* Helpers to lookup the properties of each level */
@@ -98,7 +92,7 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
     printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
 
     printk("P2M @ %p mfn:0x%lx\n",
-           p2m->root, __page_to_mfn(p2m->root));
+           p2m->root, mfn_x(page_to_mfn(p2m->root)));
 
     dump_pt_walk(page_to_maddr(p2m->root), addr,
                  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index fd2fcacc26..376e80b6c7 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -657,7 +657,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
     struct vpmu_struct *vpmu;
-    uint64_t mfn;
+    mfn_t mfn;
     void *xenpmu_data;
 
     if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
@@ -679,7 +679,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
     if ( xenpmu_data )
     {
         mfn = domain_page_map_to_mfn(xenpmu_data);
-        ASSERT(mfn_valid(_mfn(mfn)));
+        ASSERT(mfn_valid(mfn));
         unmap_domain_page_global(xenpmu_data);
         put_page_and_type(mfn_to_page(mfn));
     }
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bb1ffa3222..395ef6145a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -186,7 +186,7 @@ void dump_pageframe_info(struct domain *d)
                 }
             }
             printk("    DomPage %p: caf=%08lx, taf=%" PRtype_info "\n",
-                   _p(page_to_mfn(page)),
+                   _p(mfn_x(page_to_mfn(page))),
                    page->count_info, page->u.inuse.type_info);
         }
         spin_unlock(&d->page_alloc_lock);
@@ -199,7 +199,7 @@ void dump_pageframe_info(struct domain *d)
     page_list_for_each ( page, &d->xenpage_list )
     {
         printk("    XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
-               _p(page_to_mfn(page)),
+               _p(mfn_x(page_to_mfn(page))),
                page->count_info, page->u.inuse.type_info);
     }
     spin_unlock(&d->page_alloc_lock);
@@ -621,7 +621,8 @@ int arch_domain_soft_reset(struct domain *d)
     struct page_info *page = virt_to_page(d->shared_info), *new_page;
     int ret = 0;
     struct domain *owner;
-    unsigned long mfn, gfn;
+    mfn_t mfn;
+    unsigned long gfn;
     p2m_type_t p2mt;
     unsigned int i;
 
@@ -655,7 +656,7 @@ int arch_domain_soft_reset(struct domain *d)
     ASSERT( owner == d );
 
     mfn = page_to_mfn(page);
-    gfn = mfn_to_gmfn(d, mfn);
+    gfn = mfn_to_gmfn(d, mfn_x(mfn));
 
     /*
      * gfn == INVALID_GFN indicates that the shared_info page was never mapped
@@ -664,7 +665,7 @@ int arch_domain_soft_reset(struct domain *d)
     if ( gfn == gfn_x(INVALID_GFN) )
         goto exit_put_page;
 
-    if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
+    if ( !mfn_eq(get_gfn_query(d, gfn, &p2mt), mfn) )
     {
         printk(XENLOG_G_ERR "Failed to get Dom%d's shared_info GFN (%lx)\n",
                d->domain_id, gfn);
@@ -681,7 +682,7 @@ int arch_domain_soft_reset(struct domain *d)
         goto exit_put_gfn;
     }
 
-    ret = guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), PAGE_ORDER_4K);
+    ret = guest_physmap_remove_page(d, _gfn(gfn), mfn, PAGE_ORDER_4K);
     if ( ret )
     {
         printk(XENLOG_G_ERR "Failed to remove Dom%d's shared_info frame %lx\n",
@@ -690,7 +691,7 @@ int arch_domain_soft_reset(struct domain *d)
         goto exit_put_gfn;
     }
 
-    ret = guest_physmap_add_page(d, _gfn(gfn), _mfn(page_to_mfn(new_page)),
+    ret = guest_physmap_add_page(d, _gfn(gfn), page_to_mfn(new_page),
                                  PAGE_ORDER_4K);
     if ( ret )
     {
@@ -988,7 +989,7 @@ int arch_set_info_guest(
                 {
                     if ( (page->u.inuse.type_info & PGT_type_mask) ==
                          PGT_l4_page_table )
-                        done = !fill_ro_mpt(_mfn(page_to_mfn(page)));
+                        done = !fill_ro_mpt(page_to_mfn(page));
 
                     page_unlock(page);
                 }
@@ -1114,7 +1115,7 @@ int arch_set_info_guest(
         l4_pgentry_t *l4tab;
 
         l4tab = map_domain_page(_mfn(pagetable_get_pfn(v->arch.guest_table)));
-        *l4tab = l4e_from_pfn(page_to_mfn(cr3_page),
+        *l4tab = l4e_from_pfn(mfn_x(page_to_mfn(cr3_page)),
             _PAGE_PRESENT|_PAGE_RW|_PAGE_USER|_PAGE_ACCESSED);
         unmap_domain_page(l4tab);
     }
@@ -1941,7 +1942,7 @@ int domain_relinquish_resources(struct domain *d)
         if ( d->arch.pirq_eoi_map != NULL )
         {
             unmap_domain_page_global(d->arch.pirq_eoi_map);
-            put_page_and_type(mfn_to_page(d->arch.pirq_eoi_map_mfn));
+            put_page_and_type(mfn_to_page(_mfn(d->arch.pirq_eoi_map_mfn)));
             d->arch.pirq_eoi_map = NULL;
             d->arch.auto_unmask = 0;
         }
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 3432a854dd..88046b39c9 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -331,13 +331,13 @@ void unmap_domain_page_global(const void *ptr)
 }
 
 /* Translate a map-domain-page'd address to the underlying MFN */
-unsigned long domain_page_map_to_mfn(const void *ptr)
+mfn_t domain_page_map_to_mfn(const void *ptr)
 {
     unsigned long va = (unsigned long)ptr;
     const l1_pgentry_t *pl1e;
 
     if ( va >= DIRECTMAP_VIRT_START )
-        return virt_to_mfn(ptr);
+        return _mfn(virt_to_mfn(ptr));
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
     {
@@ -350,5 +350,5 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
         pl1e = &__linear_l1_table[l1_linear_offset(va)];
     }
 
-    return l1e_get_pfn(*pl1e);
+    return l1e_get_mfn(*pl1e);
 }
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 540ba089d7..9292ae5118 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -429,7 +429,7 @@ long arch_do_domctl(
         {
             if ( i >= max_pfns )
                 break;
-            mfn = page_to_mfn(page);
+            mfn = mfn_x(page_to_mfn(page));
             if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
                                       i, &mfn, 1) )
             {
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 9cf53b551c..1a83f27c0b 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -219,7 +219,7 @@ static int modified_memory(struct domain *d,
             page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
             if ( page )
             {
-                mfn_t gmfn = _mfn(page_to_mfn(page));
+                mfn_t gmfn = page_to_mfn(page);
 
                 paging_mark_dirty(d, gmfn);
                 /*
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index e8f746c70b..7789f6e571 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -120,7 +120,7 @@ static int __init pvh_populate_memory_range(struct domain *d,
             continue;
         }
 
-        rc = guest_physmap_add_page(d, _gfn(start), _mfn(page_to_mfn(page)),
+        rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
                                     order);
         if ( rc != 0 )
         {
@@ -270,7 +270,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
     }
     write_32bit_pse_identmap(ident_pt);
     unmap_domain_page(ident_pt);
-    put_page(mfn_to_page(mfn_x(mfn)));
+    put_page(mfn_to_page(mfn));
     d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] = gaddr;
     if ( pvh_add_mem_range(d, gaddr, gaddr + PAGE_SIZE, E820_RESERVED) )
             printk("Unable to set identity page tables as reserved in the memory map\n");
@@ -288,7 +288,7 @@ static void __init pvh_steal_low_ram(struct domain *d, unsigned long start,
 
     for ( mfn = start; mfn < start + nr_pages; mfn++ )
     {
-        struct page_info *pg = mfn_to_page(mfn);
+        struct page_info *pg = mfn_to_page(_mfn(mfn));
         int rc;
 
         rc = unshare_xen_page_with_guest(pg, dom_io);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 205b4cb685..dc7a018d1d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2211,7 +2211,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
             v->arch.guest_table = pagetable_from_page(page);
 
             HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR3 value = %lx, mfn = %lx",
-                        v->arch.hvm_vcpu.guest_cr[3], page_to_mfn(page));
+                        v->arch.hvm_vcpu.guest_cr[3], mfn_x(page_to_mfn(page)));
         }
     }
     else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
@@ -2546,7 +2546,7 @@ static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
         if ( unlikely(p2m_is_discard_write(p2mt)) )
             *writable = 0;
         else if ( !permanent )
-            paging_mark_dirty(d, _mfn(page_to_mfn(page)));
+            paging_mark_dirty(d, page_to_mfn(page));
     }
 
     if ( !permanent )
@@ -2588,7 +2588,7 @@ void *hvm_map_guest_frame_ro(unsigned long gfn, bool_t permanent)
 
 void hvm_unmap_guest_frame(void *p, bool_t permanent)
 {
-    unsigned long mfn;
+    mfn_t mfn;
     struct page_info *page;
 
     if ( !p )
@@ -2609,7 +2609,7 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent)
         list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
             if ( track->page == page )
             {
-                paging_mark_dirty(d, _mfn(mfn));
+                paging_mark_dirty(d, mfn);
                 list_del(&track->list);
                 xfree(track);
                 break;
@@ -2626,7 +2626,7 @@ void hvm_mapped_guest_frames_mark_dirty(struct domain *d)
 
     spin_lock(&d->arch.hvm_domain.write_map.lock);
     list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
-        paging_mark_dirty(d, _mfn(page_to_mfn(track->page)));
+        paging_mark_dirty(d, page_to_mfn(track->page));
     spin_unlock(&d->arch.hvm_domain.write_map.lock);
 }
 
@@ -3201,7 +3201,7 @@ static enum hvm_translation_result __hvm_copy(
                 if ( xchg(&lastpage, gfn_x(gfn)) != gfn_x(gfn) )
                     dprintk(XENLOG_G_DEBUG,
                             "%pv attempted write to read-only gfn %#lx (mfn=%#lx)\n",
-                            v, gfn_x(gfn), page_to_mfn(page));
+                            v, gfn_x(gfn), mfn_x(page_to_mfn(page)));
             }
             else
             {
@@ -3209,7 +3209,7 @@ static enum hvm_translation_result __hvm_copy(
                     memcpy(p, buf, count);
                 else
                     memset(p, 0, count);
-                paging_mark_dirty(v->domain, _mfn(page_to_mfn(page)));
+                paging_mark_dirty(v->domain, page_to_mfn(page));
             }
         }
         else
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f2e0b3f74a..5bd5cd788e 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -268,7 +268,7 @@ static void hvm_remove_ioreq_gfn(
     struct domain *d, struct hvm_ioreq_page *iorp)
 {
     if ( guest_physmap_remove_page(d, _gfn(iorp->gfn),
-                                   _mfn(page_to_mfn(iorp->page)), 0) )
+                                   page_to_mfn(iorp->page), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
@@ -281,9 +281,9 @@ static int hvm_add_ioreq_gfn(
     clear_page(iorp->va);
 
     rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
-                                _mfn(page_to_mfn(iorp->page)), 0);
+                                page_to_mfn(iorp->page), 0);
     if ( rc == 0 )
-        paging_mark_dirty(d, _mfn(page_to_mfn(iorp->page)));
+        paging_mark_dirty(d, page_to_mfn(iorp->page));
 
     return rc;
 }
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index 088fbdf8ce..925bab2438 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -590,7 +590,7 @@ void stdvga_init(struct domain *d)
         if ( pg == NULL )
             break;
         s->vram_page[i] = pg;
-        clear_domain_page(_mfn(page_to_mfn(pg)));
+        clear_domain_page(page_to_mfn(pg));
     }
 
     if ( i == ARRAY_SIZE(s->vram_page) )
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index b9cf423fd9..f50f931598 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1521,7 +1521,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
         if ( !pg )
             goto err;
 
-        clear_domain_page(_mfn(page_to_mfn(pg)));
+        clear_domain_page(page_to_mfn(pg));
         *this_hsa = page_to_maddr(pg);
     }
 
@@ -1531,7 +1531,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
         if ( !pg )
             goto err;
 
-        clear_domain_page(_mfn(page_to_mfn(pg)));
+        clear_domain_page(page_to_mfn(pg));
         *this_vmcb = page_to_maddr(pg);
     }
 
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index f0fa59d7d5..070551e1ab 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -354,7 +354,7 @@ static void enable_hypercall_page(struct domain *d)
         if ( page )
             put_page(page);
         gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
         return;
     }
 
@@ -414,7 +414,7 @@ static void initialize_vp_assist(struct vcpu *v)
 
  fail:
     gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", gmfn,
-             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+             mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
 }
 
 static void teardown_vp_assist(struct vcpu *v)
@@ -494,7 +494,7 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
         if ( page )
             put_page(page);
         gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
         return;
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index f62fe7e217..471d224539 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1441,7 +1441,7 @@ int vmx_vcpu_enable_pml(struct vcpu *v)
 
     vmx_vmcs_enter(v);
 
-    __vmwrite(PML_ADDRESS, page_to_mfn(v->arch.hvm_vmx.pml_pg) << PAGE_SHIFT);
+    __vmwrite(PML_ADDRESS, page_to_maddr(v->arch.hvm_vmx.pml_pg));
     __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
 
     v->arch.hvm_vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_PML;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9cfa9b6965..40b91933bf 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2951,7 +2951,7 @@ gp_fault:
 static int vmx_alloc_vlapic_mapping(struct domain *d)
 {
     struct page_info *pg;
-    unsigned long mfn;
+    mfn_t mfn;
 
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
@@ -2960,10 +2960,10 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !pg )
         return -ENOMEM;
     mfn = page_to_mfn(pg);
-    clear_domain_page(_mfn(mfn));
+    clear_domain_page(mfn);
     share_xen_page_with_guest(pg, d, XENSHARE_writable);
-    d->arch.hvm_domain.vmx.apic_access_mfn = mfn;
-    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), _mfn(mfn),
+    d->arch.hvm_domain.vmx.apic_access_mfn = mfn_x(mfn);
+    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
                        PAGE_ORDER_4K, p2m_get_hostp2m(d)->default_access);
 
     return 0;
@@ -2974,7 +2974,7 @@ static void vmx_free_vlapic_mapping(struct domain *d)
     unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
 
     if ( mfn != 0 )
-        free_shared_domheap_page(mfn_to_page(mfn));
+        free_shared_domheap_page(mfn_to_page(_mfn(mfn)));
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index cd0ee0a307..35f7cde81a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -84,7 +84,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         }
         v->arch.hvm_vmx.vmread_bitmap = vmread_bitmap;
 
-        clear_domain_page(_mfn(page_to_mfn(vmread_bitmap)));
+        clear_domain_page(page_to_mfn(vmread_bitmap));
 
         vmwrite_bitmap = alloc_domheap_page(NULL, 0);
         if ( !vmwrite_bitmap )
@@ -1704,7 +1704,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
                 nvcpu->nv_vvmcx = vvmcx;
                 nvcpu->nv_vvmcxaddr = gpa;
                 v->arch.hvm_vmx.vmcs_shadow_maddr =
-                    pfn_to_paddr(domain_page_map_to_mfn(vvmcx));
+                    mfn_to_maddr(domain_page_map_to_mfn(vvmcx));
             }
             else
             {
@@ -1790,7 +1790,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs *regs)
         {
             if ( writable )
                 clear_vvmcs_launched(&nvmx->launched_list,
-                                     domain_page_map_to_mfn(vvmcs));
+                                     mfn_x(domain_page_map_to_mfn(vvmcs)));
             else
                 rc = VMFAIL_VALID;
             hvm_unmap_guest_frame(vvmcs, 0);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d9df5ca69f..39038723ce 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -129,12 +129,6 @@
 
 #include "pv/mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 /* Mapping of the fixmap space needed early. */
 l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
     l1_fixmap[L1_PAGETABLE_ENTRIES];
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 6055fec1ad..f67aeda3d0 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -469,20 +469,20 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     if ( l3p )
     {
         unmap_domain_page(l3p);
-        put_page(mfn_to_page(mfn_x(gw->l3mfn)));
+        put_page(mfn_to_page(gw->l3mfn));
     }
 #endif
 #if GUEST_PAGING_LEVELS >= 3
     if ( l2p )
     {
         unmap_domain_page(l2p);
-        put_page(mfn_to_page(mfn_x(gw->l2mfn)));
+        put_page(mfn_to_page(gw->l2mfn));
     }
 #endif
     if ( l1p )
     {
         unmap_domain_page(l1p);
-        put_page(mfn_to_page(mfn_x(gw->l1mfn)));
+        put_page(mfn_to_page(gw->l1mfn));
     }
 
     return walk_ok;
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index c550017ba4..cb3f9cebe7 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -83,7 +83,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
         *pfec &= ~PFEC_page_present;
         goto out_tweak_pfec;
     }
-    top_mfn = _mfn(page_to_mfn(top_page));
+    top_mfn = page_to_mfn(top_page);
 
     /* Map the top-level table and call the tree-walker */
     ASSERT(mfn_valid(top_mfn));
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index dc85e828cd..e45c1a1913 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -42,12 +42,6 @@
 
 #include "private.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 /************************************************/
 /*          HAP VRAM TRACKING SUPPORT           */
 /************************************************/
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 14b1bb01e9..1738df69f6 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -173,7 +173,7 @@ nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
             goto map_err;
         gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
         unmap_domain_page(lxp);
-        put_page(mfn_to_page(mfn_x(lxmfn)));
+        put_page(mfn_to_page(lxmfn));
 
         if ( nept_non_present_check(gw->lxe[lvl]) )
             goto non_present;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 6f4be95515..6ecf0b27d5 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -152,11 +152,6 @@ static inline shr_handle_t get_next_handle(void)
 #define mem_sharing_enabled(d) \
     (is_hvm_domain(d) && (d)->arch.hvm_domain.mem_sharing_enabled)
 
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 static atomic_t nr_saved_mfns   = ATOMIC_INIT(0); 
 static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 054827aa88..24de202a1b 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -33,6 +33,10 @@
 
 #include "mm-locks.h"
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 #define atomic_read_ept_entry(__pepte)                              \
     ( (ept_entry_t) { .epte = read_atomic(&(__pepte)->epte) } )
 
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 0a811ccf28..7a88074c31 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -29,12 +29,6 @@
 
 #include "mm-locks.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 #define superpage_aligned(_x)  (((_x)&(SUPERPAGE_PAGES-1))==0)
 
 /* Enforce lock ordering when grabbing the "external" page_alloc lock */
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 3fbc537da6..2194b35bc7 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -47,12 +47,6 @@ bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
 boolean_param("hap_1gb", opt_hap_1gb);
 boolean_param("hap_2mb", opt_hap_2mb);
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
 /* Init the datastructures for later use by the p2m code */
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 1e2c9ba4cc..cb97642cbc 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -47,12 +47,6 @@
 /* Per-CPU variable for enforcing the lock ordering */
 DEFINE_PER_CPU(int, mm_lock_level);
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 /************************************************/
 /*              LOG DIRTY SUPPORT               */
 /************************************************/
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 6a03370402..b9cc680f4e 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -315,7 +315,7 @@ static inline int page_is_out_of_sync(struct page_info *p)
 
 static inline int mfn_is_out_of_sync(mfn_t gmfn)
 {
-    return page_is_out_of_sync(mfn_to_page(mfn_x(gmfn)));
+    return page_is_out_of_sync(mfn_to_page(gmfn));
 }
 
 static inline int page_oos_may_write(struct page_info *p)
@@ -326,7 +326,7 @@ static inline int page_oos_may_write(struct page_info *p)
 
 static inline int mfn_oos_may_write(mfn_t gmfn)
 {
-    return page_oos_may_write(mfn_to_page(mfn_x(gmfn)));
+    return page_oos_may_write(mfn_to_page(gmfn));
 }
 #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
 
@@ -465,18 +465,6 @@ void sh_reset_l3_up_pointers(struct vcpu *v);
  * MFN/page-info handling
  */
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
-/* Override pagetable_t <-> struct page_info conversions to work with mfn_t */
-#undef pagetable_get_page
-#define pagetable_get_page(x)   mfn_to_page(pagetable_get_mfn(x))
-#undef pagetable_from_page
-#define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
-
 #define backpointer(sp) _mfn(pdx_to_pfn((unsigned long)(sp)->v.sh.back))
 static inline unsigned long __backpointer(const struct page_info *sp)
 {
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 4fc967f893..a87987da6f 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -430,7 +430,7 @@ static void dump_numa(unsigned char key)
         spin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
-            i = phys_to_nid((paddr_t)page_to_mfn(page) << PAGE_SHIFT);
+            i = phys_to_nid(page_to_maddr(page));
             page_num_node[i]++;
         }
         spin_unlock(&d->page_alloc_lock);
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 0eb409758f..ba950af4a8 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -241,7 +241,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         }
 
         if ( cmpxchg(&currd->arch.pirq_eoi_map_mfn,
-                     0, page_to_mfn(page)) != 0 )
+                     0, mfn_x(page_to_mfn(page))) != 0 )
         {
             put_page_and_type(page);
             ret = -EBUSY;
diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 97d8438600..5957cb5085 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -31,12 +31,6 @@
 
 #include <public/callback.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static int register_guest_nmi_callback(unsigned long address)
 {
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index 81973af124..f2b20f9910 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -25,16 +25,6 @@
 #include <asm/p2m.h>
 #include <asm/pv/mm.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
-/*******************
- * Descriptor Tables
- */
-
 void pv_destroy_gdt(struct vcpu *v)
 {
     l1_pgentry_t *pl1e;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index dcbee43e8f..e9a893ba47 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -22,6 +22,12 @@
 
 #include "mm.h"
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 /* Allow ring-3 access in long mode as guest cannot use ring 1 ... */
 #define BASE_PROT (_PAGE_PRESENT|_PAGE_RW|_PAGE_ACCESSED|_PAGE_USER)
 #define L1_PROT (BASE_PROT|_PAGE_GUEST_KERNEL)
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 90d5569be1..4ca3205821 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -16,12 +16,6 @@
 
 #include "mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
diff --git a/xen/arch/x86/pv/emul-gate-op.c b/xen/arch/x86/pv/emul-gate-op.c
index 0f89c91dff..5cdb54c937 100644
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -41,12 +41,6 @@
 
 #include "emulate.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static int read_gate_descriptor(unsigned int gate_sel,
                                 const struct vcpu *v,
                                 unsigned int *sel,
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index dd90713acf..9ccbd021ef 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -43,16 +43,6 @@
 #include "emulate.h"
 #include "mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
-/***********************
- * I/O emulation support
- */
-
 struct priv_op_ctxt {
     struct x86_emulate_ctxt ctxt;
     struct {
diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c
index aaca228c6b..97323367c5 100644
--- a/xen/arch/x86/pv/grant_table.c
+++ b/xen/arch/x86/pv/grant_table.c
@@ -27,12 +27,6 @@
 
 #include "mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static unsigned int grant_to_pte_flags(unsigned int grant_flags,
                                        unsigned int cache_flags)
 {
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index e45d628deb..bdb09bfa75 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -169,7 +169,7 @@ void init_guest_l4_table(l4_pgentry_t l4tab[], const struct domain *d,
     BUILD_BUG_ON(root_pgt_pv_xen_slots != ROOT_PAGETABLE_PV_XEN_SLOTS);
 #endif
     l4tab[l4_table_offset(LINEAR_PT_VIRT_START)] =
-        l4e_from_pfn(domain_page_map_to_mfn(l4tab), __PAGE_HYPERVISOR_RW);
+        l4e_from_mfn(domain_page_map_to_mfn(l4tab), __PAGE_HYPERVISOR_RW);
     l4tab[l4_table_offset(PERDOMAIN_VIRT_START)] =
         l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
     if ( zap_ro_mpt || is_pv_32bit_domain(d) )
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 6b2976d3df..a7b7eb5113 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -33,12 +33,6 @@
 #include "emulate.h"
 #include "mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 /*********************
  * Writable Pagetables
  */
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 3ca716c59f..663966bc74 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -46,12 +46,6 @@
 #include <mach_wakecpu.h>
 #include <smpboot_hooks.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 #define setup_trampoline()    (bootsym_phys(trampoline_realmode_entry))
 
 unsigned long __read_mostly trampoline_phys;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 59d7c477f4..e9522f06ec 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -184,7 +184,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
 
     for ( mfn = 0; mfn < max_page; mfn++ )
     {
-        struct page_info *page = mfn_to_page(mfn);
+        struct page_info *page = mfn_to_page(_mfn(mfn));
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
@@ -276,7 +276,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
     vmac_set_key((uint8_t *)key, &ctx);
     for ( mfn = 0; mfn < max_page; mfn++ )
     {
-        struct page_info *page = __mfn_to_page(mfn);
+        struct page_info *page = mfn_to_page(_mfn(mfn));
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 86506f3747..b85394d1f9 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -811,7 +811,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
 
             gdprintk(XENLOG_WARNING,
                      "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
-                     gmfn, page ? page_to_mfn(page) : -1UL, base);
+                     gmfn, page ? mfn_x(page_to_mfn(page)) : -1UL, base);
             return 0;
         }
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 11746730b4..971ccfcbbe 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -40,6 +40,12 @@ asm(".file \"" __FILE__ "\"");
 #include <asm/mem_sharing.h>
 #include <public/memory.h>
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START;
 
 l2_pgentry_t *compat_idle_pg_table_l2;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 5aebcf265f..e8302e8e1b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1192,7 +1192,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     }
 
     v->vcpu_info = new_info;
-    v->vcpu_info_mfn = _mfn(page_to_mfn(page));
+    v->vcpu_info_mfn = page_to_mfn(page);
 
     /* Set new vcpu_info pointer /before/ setting pending flags. */
     smp_wmb();
@@ -1225,7 +1225,7 @@ void unmap_vcpu_info(struct vcpu *v)
 
     vcpu_info_reset(v); /* NB: Clobbers v->vcpu_info_mfn */
 
-    put_page_and_type(mfn_to_page(mfn_x(mfn)));
+    put_page_and_type(mfn_to_page(mfn));
 }
 
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 6d20b17739..2afde596d9 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -40,6 +40,12 @@
 #include <xsm/xsm.h>
 #include <asm/flushtlb.h>
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 /* Per-domain grant information. */
 struct grant_table {
     /*
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index afd8292cc1..210241dfb7 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -23,12 +23,6 @@
 
 #include <asm/page.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg)  _mfn(__page_to_mfn(pg))
-
 /*
  * When kexec transitions to the new kernel there is a one-to-one
  * mapping between physical and virtual addresses.  On processors
diff --git a/xen/common/memory.c b/xen/common/memory.c
index ad987e0f29..e467f271c7 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -29,6 +29,12 @@
 #include <public/memory.h>
 #include <xsm/xsm.h>
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 struct memop_args {
     /* INPUT */
     struct domain *domain;     /* Domain to be affected. */
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 472c6fe329..5e7d74e274 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -150,6 +150,12 @@
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
  * e.g. 'badpage=0x3f45,0x8a321'.
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index c955cf7167..1adb96f00c 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -243,7 +243,7 @@ static void tmem_persistent_pool_page_put(void *page_va)
     struct page_info *pi;
 
     ASSERT(IS_PAGE_ALIGNED(page_va));
-    pi = mfn_to_page(virt_to_mfn(page_va));
+    pi = mfn_to_page(_mfn(virt_to_mfn(page_va)));
     ASSERT(IS_VALID_PAGE(pi));
     __tmem_free_page_thispool(pi);
 }
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index bd52e44faf..bf7b14f79a 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -14,10 +14,6 @@
 #include <xen/cpu.h>
 #include <xen/init.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 bool __read_mostly opt_tmem;
 boolean_param("tmem", opt_tmem);
 
diff --git a/xen/common/trace.c b/xen/common/trace.c
index 2e18702317..cf8f8b0997 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -42,6 +42,12 @@ CHECK_t_buf;
 #define compat_t_rec t_rec
 #endif
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 /* opt_tbuf_size: trace buffer size (in pages) for each cpu */
 static unsigned int opt_tbuf_size;
 static unsigned int opt_tevt_mask;
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 0b23f8fb97..10f32b29e0 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -36,7 +36,7 @@ void __init vm_init_type(enum vmap_region type, void *start, void *end)
     {
         struct page_info *pg = alloc_domheap_page(NULL, 0);
 
-        map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
+        map_pages_to_xen(va, mfn_x(page_to_mfn(pg)), 1, PAGE_HYPERVISOR);
         clear_page((void *)va);
     }
     bitmap_fill(vm_bitmap(type), vm_low[type]);
@@ -107,7 +107,8 @@ static void *vm_alloc(unsigned int nr, unsigned int align,
         {
             unsigned long va = (unsigned long)vm_bitmap(t) + vm_top[t] / 8;
 
-            if ( !map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR) )
+            if ( !map_pages_to_xen(va, mfn_x(page_to_mfn(pg)),
+                                   1, PAGE_HYPERVISOR) )
             {
                 clear_page((void *)va);
                 vm_top[t] += PAGE_SIZE * 8;
@@ -258,7 +259,7 @@ static void *vmalloc_type(size_t size, enum vmap_region type)
         pg = alloc_domheap_page(NULL, 0);
         if ( pg == NULL )
             goto error;
-        mfn[i] = _mfn(page_to_mfn(pg));
+        mfn[i] = page_to_mfn(pg);
     }
 
     va = __vmap(mfn, 1, pages, 1, PAGE_HYPERVISOR, type);
@@ -270,7 +271,7 @@ static void *vmalloc_type(size_t size, enum vmap_region type)
 
  error:
     while ( i-- )
-        free_domheap_page(mfn_to_page(mfn_x(mfn[i])));
+        free_domheap_page(mfn_to_page(mfn[i]));
     xfree(mfn);
     return NULL;
 }
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index 5acdde5691..fecdfb3697 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -22,8 +22,6 @@
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
 
 /* Limit amount of pages used for shared buffer (per domain) */
 #define MAX_OPROF_SHARED_PAGES 32
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index fd2327d3e5..bd62c2ce90 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -25,6 +25,12 @@
 #include "../ats.h"
 #include <xen/pci.h>
 
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
 /* Given pfn and page table level, return pde index */
 static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
 {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1aecf7cf34..2c44fabf99 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -184,7 +184,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
 
         page_list_for_each ( page, &d->page_list )
         {
-            unsigned long mfn = page_to_mfn(page);
+            unsigned long mfn = mfn_x(page_to_mfn(page));
             unsigned long gfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
             int ret;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 0253823173..68182afd91 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -58,7 +58,7 @@ int arch_iommu_populate_page_table(struct domain *d)
         if ( is_hvm_domain(d) ||
             (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
         {
-            unsigned long mfn = page_to_mfn(page);
+            unsigned long mfn = mfn_x(page_to_mfn(page));
             unsigned long gfn = mfn_to_gmfn(d, mfn);
 
             if ( gfn != gfn_x(INVALID_GFN) )
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 737a429409..3eb4b68761 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -138,7 +138,7 @@ extern vaddr_t xenheap_virt_start;
 #endif
 
 #ifdef CONFIG_ARM_32
-#define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
+#define is_xen_heap_page(page) is_xen_heap_mfn(mfn_x(__page_to_mfn(page)))
 #define is_xen_heap_mfn(mfn) ({                                 \
     unsigned long mfn_ = (mfn);                                 \
     (mfn_ >= mfn_x(xenheap_mfn_start) &&                        \
@@ -220,12 +220,14 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 })
 
 /* Convert between machine frame numbers and page-info structures. */
-#define __mfn_to_page(mfn)  (frame_table + (pfn_to_pdx(mfn) - frametable_base_pdx))
-#define __page_to_mfn(pg)   pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
+#define __mfn_to_page(mfn)                                          \
+    (frame_table + (pfn_to_pdx(mfn_x(mfn)) - frametable_base_pdx))
+#define __page_to_mfn(pg)                                           \
+    _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx))
 
 /* Convert between machine addresses and page-info structures. */
-#define maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
-#define page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
+#define maddr_to_page(ma) __mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg) (mfn_to_maddr(__page_to_mfn(pg)))
 
 /* Convert between frame number and address formats.  */
 #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
@@ -235,7 +237,7 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 #define gaddr_to_gfn(ga)    _gfn(paddr_to_pfn(ga))
 #define mfn_to_maddr(mfn)   pfn_to_paddr(mfn_x(mfn))
 #define maddr_to_mfn(ma)    _mfn(paddr_to_pfn(ma))
-#define vmap_to_mfn(va)     paddr_to_pfn(virt_to_maddr((vaddr_t)va))
+#define vmap_to_mfn(va)     maddr_to_mfn(virt_to_maddr((vaddr_t)va))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 /* Page-align address and convert to frame number format */
@@ -309,7 +311,7 @@ static inline struct page_info *virt_to_page(const void *v)
 
 static inline void *page_to_virt(const struct page_info *pg)
 {
-    return mfn_to_virt(page_to_mfn(pg));
+    return mfn_to_virt(mfn_x(__page_to_mfn(pg)));
 }
 
 struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index faadcfe8fe..87c9994974 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -276,7 +276,7 @@ static inline struct page_info *get_page_from_gfn(
 {
     struct page_info *page;
     p2m_type_t p2mt;
-    unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
+    mfn_t mfn = p2m_lookup(d, _gfn(gfn), &p2mt);
 
     if (t)
         *t = p2mt;
@@ -284,7 +284,7 @@ static inline struct page_info *get_page_from_gfn(
     if ( !p2m_is_any_ram(p2mt) )
         return NULL;
 
-    if ( !mfn_valid(_mfn(mfn)) )
+    if ( !mfn_valid(mfn) )
         return NULL;
     page = mfn_to_page(mfn);
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index f2e0f498c4..984f54c3fa 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -259,7 +259,7 @@ struct page_info
 
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
 #define is_xen_heap_mfn(mfn) \
-    (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
+    (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(_mfn(mfn))))
 #define is_xen_fixed_mfn(mfn)                     \
     ((((mfn) << PAGE_SHIFT) >= __pa(&_stext)) &&  \
      (((mfn) << PAGE_SHIFT) <= __pa(&__2M_rwdata_end)))
@@ -369,7 +369,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner);
 
 static inline bool get_page_from_mfn(mfn_t mfn, struct domain *d)
 {
-    struct page_info *page = __mfn_to_page(mfn_x(mfn));
+    struct page_info *page = __mfn_to_page(mfn);
 
     if ( unlikely(!mfn_valid(mfn)) || unlikely(!get_page(page, d)) )
     {
@@ -463,10 +463,10 @@ extern paddr_t mem_hotplug;
 #define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
 
 #define compat_machine_to_phys_mapping ((unsigned int *)RDWR_COMPAT_MPT_VIRT_START)
-#define _set_gpfn_from_mfn(mfn, pfn) ({                        \
-    struct domain *d = page_get_owner(__mfn_to_page(mfn));     \
-    unsigned long entry = (d && (d == dom_cow)) ?              \
-        SHARED_M2P_ENTRY : (pfn);                              \
+#define _set_gpfn_from_mfn(mfn, pfn) ({                         \
+    struct domain *d = page_get_owner(__mfn_to_page(_mfn(mfn)));    \
+    unsigned long entry = (d && (d == dom_cow)) ?               \
+        SHARED_M2P_ENTRY : (pfn);                               \
     ((void)((mfn) >= (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 || \
             (compat_machine_to_phys_mapping[(mfn)] = (unsigned int)(entry))), \
      machine_to_phys_mapping[(mfn)] = (entry));                \
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 70f00c332f..18eac537c9 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -480,7 +480,7 @@ static inline struct page_info *get_page_from_gfn(
     /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
     if ( t )
         *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct;
-    page = __mfn_to_page(gfn);
+    page = __mfn_to_page(_mfn(gfn));
     return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
 }
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 45ca742678..8737ef16ff 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -88,10 +88,10 @@
     ((paddr_t)(((x).l4 & (PADDR_MASK&PAGE_MASK))))
 
 /* Get pointer to info structure of page mapped by pte (struct page_info *). */
-#define l1e_get_page(x)           (__mfn_to_page(l1e_get_pfn(x)))
-#define l2e_get_page(x)           (__mfn_to_page(l2e_get_pfn(x)))
-#define l3e_get_page(x)           (__mfn_to_page(l3e_get_pfn(x)))
-#define l4e_get_page(x)           (__mfn_to_page(l4e_get_pfn(x)))
+#define l1e_get_page(x)           (__mfn_to_page(l1e_get_mfn(x)))
+#define l2e_get_page(x)           (__mfn_to_page(l2e_get_mfn(x)))
+#define l3e_get_page(x)           (__mfn_to_page(l3e_get_mfn(x)))
+#define l4e_get_page(x)           (__mfn_to_page(l4e_get_mfn(x)))
 
 /* Get pte access flags (unsigned int). */
 #define l1e_get_flags(x)           (get_pte_flags((x).l1))
@@ -157,10 +157,10 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define l4e_from_intpte(intpte)    ((l4_pgentry_t) { (intpte_t)(intpte) })
 
 /* Construct a pte from a page pointer and access flags. */
-#define l1e_from_page(page, flags) l1e_from_pfn(__page_to_mfn(page), (flags))
-#define l2e_from_page(page, flags) l2e_from_pfn(__page_to_mfn(page), (flags))
-#define l3e_from_page(page, flags) l3e_from_pfn(__page_to_mfn(page), (flags))
-#define l4e_from_page(page, flags) l4e_from_pfn(__page_to_mfn(page), (flags))
+#define l1e_from_page(page, flags) l1e_from_mfn(__page_to_mfn(page), (flags))
+#define l2e_from_page(page, flags) l2e_from_mfn(__page_to_mfn(page), (flags))
+#define l3e_from_page(page, flags) l3e_from_mfn(__page_to_mfn(page), (flags))
+#define l4e_from_page(page, flags) l4e_from_mfn(__page_to_mfn(page), (flags))
 
 /* Add extra flags to an existing pte. */
 #define l1e_add_flags(x, flags)    ((x).l1 |= put_pte_flags(flags))
@@ -215,13 +215,13 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 /* Page-table type. */
 typedef struct { u64 pfn; } pagetable_t;
 #define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
-#define pagetable_get_page(x)   __mfn_to_page((x).pfn)
+#define pagetable_get_page(x)   __mfn_to_page(pagetable_get_mfn(x))
 #define pagetable_get_pfn(x)    ((x).pfn)
 #define pagetable_get_mfn(x)    _mfn(((x).pfn))
 #define pagetable_is_null(x)    ((x).pfn == 0)
 #define pagetable_from_pfn(pfn) ((pagetable_t) { (pfn) })
 #define pagetable_from_mfn(mfn) ((pagetable_t) { mfn_x(mfn) })
-#define pagetable_from_page(pg) pagetable_from_pfn(__page_to_mfn(pg))
+#define pagetable_from_page(pg) pagetable_from_mfn(__page_to_mfn(pg))
 #define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT)
 #define pagetable_null()        pagetable_from_pfn(0)
 
@@ -240,12 +240,12 @@ void copy_page_sse2(void *, const void *);
 #define __mfn_to_virt(mfn)  (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
 
 /* Convert between machine frame numbers and page-info structures. */
-#define __mfn_to_page(mfn)  (frame_table + pfn_to_pdx(mfn))
-#define __page_to_mfn(pg)   pdx_to_pfn((unsigned long)((pg) - frame_table))
+#define __mfn_to_page(mfn)  (frame_table + pfn_to_pdx(mfn_x(mfn)))
+#define __page_to_mfn(pg)   _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table)))
 
 /* Convert between machine addresses and page-info structures. */
-#define __maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
-#define __page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
+#define __maddr_to_page(ma) __mfn_to_page(maddr_to_mfn(ma))
+#define __page_to_maddr(pg) (mfn_to_maddr(__page_to_mfn(pg)))
 
 /* Convert between frame number and address formats.  */
 #define __pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
@@ -273,8 +273,8 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
-#define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
+#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
+#define vmap_to_page(va)    __mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
 
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index 890bae5b9c..22ab65ba16 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -34,7 +34,7 @@ void unmap_domain_page(const void *va);
 /* 
  * Given a VA from map_domain_page(), return its underlying MFN.
  */
-unsigned long domain_page_map_to_mfn(const void *va);
+mfn_t domain_page_map_to_mfn(const void *va);
 
 /*
  * Similar to the above calls, except the mapping is accessible in all
@@ -44,11 +44,11 @@ unsigned long domain_page_map_to_mfn(const void *va);
 void *map_domain_page_global(mfn_t mfn);
 void unmap_domain_page_global(const void *va);
 
-#define __map_domain_page(pg)        map_domain_page(_mfn(__page_to_mfn(pg)))
+#define __map_domain_page(pg)        map_domain_page(__page_to_mfn(pg))
 
 static inline void *__map_domain_page_global(const struct page_info *pg)
 {
-    return map_domain_page_global(_mfn(__page_to_mfn(pg)));
+    return map_domain_page_global(page_to_mfn(pg));
 }
 
 #else /* !CONFIG_DOMAIN_PAGE */
@@ -56,7 +56,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
 #define map_domain_page(mfn)                __mfn_to_virt(mfn_x(mfn))
 #define __map_domain_page(pg)               page_to_virt(pg)
 #define unmap_domain_page(va)               ((void)(va))
-#define domain_page_map_to_mfn(va)          virt_to_mfn((unsigned long)(va))
+#define domain_page_map_to_mfn(va)          _mfn(virt_to_mfn((unsigned long)(va)))
 
 static inline void *map_domain_page_global(mfn_t mfn)
 {
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 542c0b3f20..8516a0b131 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -25,7 +25,7 @@
 typedef uint32_t pagesize_t;  /* like size_t, must handle largest PAGE_SIZE */
 
 #define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
-#define IS_VALID_PAGE(_pi)    mfn_valid(_mfn(page_to_mfn(_pi)))
+#define IS_VALID_PAGE(_pi)    mfn_valid(page_to_mfn(_pi))
 
 extern struct page_list_head tmem_page_list;
 extern spinlock_t tmem_page_list_lock;
-- 
2.11.0



* Re: [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page
  2017-10-05 17:42 ` [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page Julien Grall
@ 2017-10-05 17:49   ` Andrew Cooper
  2017-10-05 17:52   ` Tamas K Lengyel
  1 sibling, 0 replies; 27+ messages in thread
From: Andrew Cooper @ 2017-10-05 17:49 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Tamas K Lengyel, Jan Beulich

On 05/10/17 18:42, Julien Grall wrote:
> The function __mem_sharing_unshare_page contains an open-coded version of
> copy_domain_page. Use the function to simplify the code a bit.
>
> At the same time, replace _mfn(__page_to_mfn(...)) with page_to_mfn(...)
> given that the file already provides a typesafe version of page_to_mfn.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
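
For reference, the open-coded pattern being replaced presumably looks
something like this (a sketch with illustrative local names, not the
exact hunk being removed):

    /* Map both frames, copy, and tear the mappings down again. */
    void *s = map_domain_page(page_to_mfn(old_page));
    void *t = map_domain_page(page_to_mfn(new_page));

    memcpy(t, s, PAGE_SIZE);
    unmap_domain_page(t);
    unmap_domain_page(s);

which collapses into the single call:

    copy_domain_page(page_to_mfn(new_page), page_to_mfn(old_page));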

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH v2 6/9] xen/kexec, kimage: Convert kexec and kimage to use typesafe mfn_t
  2017-10-05 17:42 ` [PATCH v2 6/9] xen/kexec, kimage: Convert kexec and kimage to use typesafe mfn_t Julien Grall
@ 2017-10-05 17:51   ` Andrew Cooper
  0 siblings, 0 replies; 27+ messages in thread
From: Andrew Cooper @ 2017-10-05 17:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel

On 05/10/17 18:42, Julien Grall wrote:
> At the same time, correctly align one of the prototypes that changed.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page
  2017-10-05 17:42 ` [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page Julien Grall
  2017-10-05 17:49   ` Andrew Cooper
@ 2017-10-05 17:52   ` Tamas K Lengyel
  1 sibling, 0 replies; 27+ messages in thread
From: Tamas K Lengyel @ 2017-10-05 17:52 UTC (permalink / raw)
  To: Julien Grall; +Cc: George Dunlap, Andrew Cooper, Jan Beulich, Xen-devel

On Thu, Oct 5, 2017 at 11:42 AM, Julien Grall <julien.grall@linaro.org> wrote:
> The function __mem_sharing_unshare_page contains an open-coded version of
> copy_domain_page. Use the function to simplify the code a bit.
>
> At the same time, replace _mfn(__page_to_mfn(...)) with page_to_mfn(...)
> given that the file already provides a typesafe version of page_to_mfn.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Tamas K Lengyel <tamas@tklengyel.com>


* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
@ 2017-10-05 17:59   ` Andrew Cooper
  2017-10-05 18:31   ` Razvan Cojocaru
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Andrew Cooper @ 2017-10-05 17:59 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Jun Nakajima, Kevin Tian, Stefano Stabellini, Wei Liu,
	Suravee Suthikulpanit, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Tim Deegan, Ian Jackson, Julien Grall,
	Tamas K Lengyel, Jan Beulich, Shane Wang, Boris Ostrovsky,
	Gang Wei, Paul Durrant

On 05/10/17 18:42, Julien Grall wrote:
> @@ -1114,7 +1115,7 @@ int arch_set_info_guest(
>          l4_pgentry_t *l4tab;
>  
>          l4tab = map_domain_page(_mfn(pagetable_get_pfn(v->arch.guest_table)));
> -        *l4tab = l4e_from_pfn(page_to_mfn(cr3_page),
> +        *l4tab = l4e_from_pfn(mfn_x(page_to_mfn(cr3_page)),

Apologies for missing this before.  You can use l4e_from_mfn() and avoid
the unboxing.
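
Something along these lines (a sketch only; `flags' stands in for the
second argument of the original call, which the quoted hunk truncates):

    *l4tab = l4e_from_mfn(page_to_mfn(cr3_page), flags);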

Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> 
Everything else seems in order (but I've only skimmed it).


* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
  2017-10-05 17:59   ` Andrew Cooper
@ 2017-10-05 18:31   ` Razvan Cojocaru
  2017-10-06  9:11   ` Paul Durrant
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Razvan Cojocaru @ 2017-10-05 18:31 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Suravee Suthikulpanit, Shane Wang, Boris Ostrovsky, Gang Wei,
	Paul Durrant

On 10/05/2017 08:42 PM, Julien Grall wrote:
> Most of the users of page_to_mfn and mfn_to_page are either overriding
> the macros to make them work with mfn_t or using mfn_x/_mfn because the
> rest of the function uses mfn_t.
> 
> So make __page_to_mfn and __mfn_to_page return mfn_t by default.
> 
> Only reasonable clean-ups are done in this patch because it is
> already quite big. So some of the files now override page_to_mfn and
> mfn_to_page to avoid using mfn_t.
> 
> Lastly, domain_page_map_to_mfn is also converted to use mfn_t given that
> most of the callers are now switched to _mfn(domain_page_map_to_mfn(...)).
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
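
The net effect at call sites is that the manual boxing/unboxing goes
away, e.g. (lifted from the hvm.c hunk of this patch):

    /* before */
    paging_mark_dirty(d, _mfn(page_to_mfn(page)));

    /* after */
    paging_mark_dirty(d, page_to_mfn(page));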

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
  2017-10-05 17:59   ` Andrew Cooper
  2017-10-05 18:31   ` Razvan Cojocaru
@ 2017-10-06  9:11   ` Paul Durrant
  2017-10-06 10:49     ` Julien Grall
  2017-10-06 13:02   ` Boris Ostrovsky
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 27+ messages in thread
From: Paul Durrant @ 2017-10-06  9:11 UTC (permalink / raw)
  To: 'Julien Grall', xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Razvan Cojocaru, Konrad Rzeszutek Wilk, Jun Nakajima,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Shane Wang, Ian Jackson, Boris Ostrovsky, Gang Wei

> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@linaro.org]
> Sent: 05 October 2017 18:42
> To: xen-devel@lists.xen.org
> Cc: Julien Grall <julien.grall@linaro.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien.grall@arm.com>; Andrew
> Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Jan
> Beulich <jbeulich@suse.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>; Wei Liu
> <wei.liu2@citrix.com>; Razvan Cojocaru <rcojocaru@bitdefender.com>;
> Tamas K Lengyel <tamas@tklengyel.com>; Paul Durrant
> <Paul.Durrant@citrix.com>; Boris Ostrovsky <boris.ostrovsky@oracle.com>;
> Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>; Jun Nakajima
> <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; George
> Dunlap <George.Dunlap@citrix.com>; Gang Wei <gang.wei@intel.com>;
> Shane Wang <shane.wang@intel.com>
> Subject: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page
> to use typesafe MFN
> 
> Most of the users of page_to_mfn and mfn_to_page are either overriding
> the macros to make them work with mfn_t or using mfn_x/_mfn because the
> rest of the function uses mfn_t.
> 
> So make __page_to_mfn and __mfn_to_page return mfn_t by default.
> 
> Only reasonable clean-ups are done in this patch because it is
> already quite big. So some of the files now override page_to_mfn and
> mfn_to_page to avoid using mfn_t.
> 
> Lastly, domain_page_map_to_mfn is also converted to use mfn_t given that
> most of the callers are now switched to _mfn(domain_page_map_to_mfn(...)).
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
> 
> Andrew suggested dropping IS_VALID_PAGE in xen/tmem_xen.h. His
> comment
> was:
> 
> "/sigh  This is tautological.  The definition of a "valid mfn" in this
> case is one for which we have a frametable entry, and by having a struct
> page_info in our hands, this is by definition true (unless you have a
> wild pointer, at which point your bug is elsewhere).
> 
> IS_VALID_PAGE() is only ever used in assertions and never usefully, so
> instead I would remove it entirely rather than trying to fix it up."
> 
> I can remove the function in a separate patch at the beginning of the
> series if Konrad (TMEM maintainer) is happy with that.
> 
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Tamas K Lengyel <tamas@tklengyel.com>
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Gang Wei <gang.wei@intel.com>
> Cc: Shane Wang <shane.wang@intel.com>
> 
>     Changes in v2:
>         - Some part have been moved in separate patch
>         - Remove one spurious comment
>         - Convert domain_page_to_mfn to use mfn_t
> ---
>  xen/arch/arm/domain_build.c             |  2 --
>  xen/arch/arm/kernel.c                   |  2 +-
>  xen/arch/arm/mem_access.c               |  2 +-
>  xen/arch/arm/mm.c                       |  8 ++++----
>  xen/arch/arm/p2m.c                      |  8 +-------
>  xen/arch/x86/cpu/vpmu.c                 |  4 ++--
>  xen/arch/x86/domain.c                   | 21 +++++++++++----------
>  xen/arch/x86/domain_page.c              |  6 +++---
>  xen/arch/x86/domctl.c                   |  2 +-
>  xen/arch/x86/hvm/dm.c                   |  2 +-
>  xen/arch/x86/hvm/dom0_build.c           |  6 +++---
>  xen/arch/x86/hvm/hvm.c                  | 14 +++++++-------
>  xen/arch/x86/hvm/ioreq.c                |  6 +++---
>  xen/arch/x86/hvm/stdvga.c               |  2 +-
>  xen/arch/x86/hvm/svm/svm.c              |  4 ++--
>  xen/arch/x86/hvm/viridian.c             |  6 +++---
>  xen/arch/x86/hvm/vmx/vmcs.c             |  2 +-
>  xen/arch/x86/hvm/vmx/vmx.c              | 10 +++++-----
>  xen/arch/x86/hvm/vmx/vvmx.c             |  6 +++---
>  xen/arch/x86/mm.c                       |  6 ------
>  xen/arch/x86/mm/guest_walk.c            |  6 +++---
>  xen/arch/x86/mm/hap/guest_walk.c        |  2 +-
>  xen/arch/x86/mm/hap/hap.c               |  6 ------
>  xen/arch/x86/mm/hap/nested_ept.c        |  2 +-
>  xen/arch/x86/mm/mem_sharing.c           |  5 -----
>  xen/arch/x86/mm/p2m-ept.c               |  4 ++++
>  xen/arch/x86/mm/p2m-pod.c               |  6 ------
>  xen/arch/x86/mm/p2m.c                   |  6 ------
>  xen/arch/x86/mm/paging.c                |  6 ------
>  xen/arch/x86/mm/shadow/private.h        | 16 ++--------------
>  xen/arch/x86/numa.c                     |  2 +-
>  xen/arch/x86/physdev.c                  |  2 +-
>  xen/arch/x86/pv/callback.c              |  6 ------
>  xen/arch/x86/pv/descriptor-tables.c     | 10 ----------
>  xen/arch/x86/pv/dom0_build.c            |  6 ++++++
>  xen/arch/x86/pv/domain.c                |  6 ------
>  xen/arch/x86/pv/emul-gate-op.c          |  6 ------
>  xen/arch/x86/pv/emul-priv-op.c          | 10 ----------
>  xen/arch/x86/pv/grant_table.c           |  6 ------
>  xen/arch/x86/pv/mm.c                    |  2 +-
>  xen/arch/x86/pv/ro-page-fault.c         |  6 ------
>  xen/arch/x86/smpboot.c                  |  6 ------
>  xen/arch/x86/tboot.c                    |  4 ++--
>  xen/arch/x86/traps.c                    |  2 +-
>  xen/arch/x86/x86_64/mm.c                |  6 ++++++
>  xen/common/domain.c                     |  4 ++--
>  xen/common/grant_table.c                |  6 ++++++
>  xen/common/kimage.c                     |  6 ------
>  xen/common/memory.c                     |  6 ++++++
>  xen/common/page_alloc.c                 |  6 ++++++
>  xen/common/tmem.c                       |  2 +-
>  xen/common/tmem_xen.c                   |  4 ----
>  xen/common/trace.c                      |  6 ++++++
>  xen/common/vmap.c                       |  9 +++++----
>  xen/common/xenoprof.c                   |  2 --
>  xen/drivers/passthrough/amd/iommu_map.c |  6 ++++++
>  xen/drivers/passthrough/iommu.c         |  2 +-
>  xen/drivers/passthrough/x86/iommu.c     |  2 +-
>  xen/include/asm-arm/mm.h                | 16 +++++++++-------
>  xen/include/asm-arm/p2m.h               |  4 ++--
>  xen/include/asm-x86/mm.h                | 12 ++++++------
>  xen/include/asm-x86/p2m.h               |  2 +-
>  xen/include/asm-x86/page.h              | 32 ++++++++++++++++----------------
>  xen/include/xen/domain_page.h           |  8 ++++----
>  xen/include/xen/tmem_xen.h              |  2 +-
>  65 files changed, 161 insertions(+), 234 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 167711b4fa..a6b471e2f6 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -50,8 +50,6 @@ struct map_range_data
>  /* Override macros from asm/page.h to make them work with mfn_t */
>  #undef virt_to_mfn
>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> 
>  //#define DEBUG_11_ALLOCATION
>  #ifdef DEBUG_11_ALLOCATION
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 9c183f96da..f391938640 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -295,7 +295,7 @@ static __init int kernel_decompress(struct
> bootmodule *mod)
>          iounmap(input);
>          return -ENOMEM;
>      }
> -    mfn = _mfn(page_to_mfn(pages));
> +    mfn = page_to_mfn(pages);
>      output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> 
>      rc = perform_gunzip(output, input, size);
> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
> index 0f2cbb81d3..112e291cba 100644
> --- a/xen/arch/arm/mem_access.c
> +++ b/xen/arch/arm/mem_access.c
> @@ -210,7 +210,7 @@ p2m_mem_access_check_and_get_page(vaddr_t
> gva, unsigned long flag,
>      if ( t != p2m_ram_rw )
>          goto err;
> 
> -    page = mfn_to_page(mfn_x(mfn));
> +    page = mfn_to_page(mfn);
> 
>      if ( unlikely(!get_page(page, v->domain)) )
>          page = NULL;
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 9a37f29ce6..ebce2320ec 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -477,7 +477,7 @@ void unmap_domain_page(const void *va)
>      local_irq_restore(flags);
>  }
> 
> -unsigned long domain_page_map_to_mfn(const void *ptr)
> +mfn_t domain_page_map_to_mfn(const void *ptr)
>  {
>      unsigned long va = (unsigned long)ptr;
>      lpae_t *map = this_cpu(xen_dommap);
> @@ -485,12 +485,12 @@ unsigned long domain_page_map_to_mfn(const
> void *ptr)
>      unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> 
>      if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> -        return __virt_to_mfn(va);
> +        return virt_to_mfn(va);
> 
>      ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
>      ASSERT(map[slot].pt.avail != 0);
> 
> -    return map[slot].pt.base + offset;
> +    return _mfn(map[slot].pt.base + offset);
>  }
>  #endif
> 
> @@ -1286,7 +1286,7 @@ int xenmem_add_to_physmap_one(
>              return -EINVAL;
>          }
> 
> -        mfn = _mfn(page_to_mfn(page));
> +        mfn = page_to_mfn(page);
>          t = p2m_map_foreign;
> 
>          rcu_unlock_domain(od);
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 0410b1e86b..1e7a0c6c40 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -38,12 +38,6 @@ static unsigned int __read_mostly max_vmid =
> MAX_VMID_8_BIT;
> 
>  #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
> 
> -/* Override macros from asm/mm.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  unsigned int __read_mostly p2m_ipa_bits;
> 
>  /* Helpers to lookup the properties of each level */
> @@ -98,7 +92,7 @@ void dump_p2m_lookup(struct domain *d, paddr_t
> addr)
>      printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
> 
>      printk("P2M @ %p mfn:0x%lx\n",
> -           p2m->root, __page_to_mfn(p2m->root));
> +           p2m->root, mfn_x(page_to_mfn(p2m->root)));

The format specifier should really be using PRI_mfn now. Same goes for others below.
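
i.e., assuming the usual PRI_mfn definition, something like (untested):

    printk("P2M @ %p mfn: %#"PRI_mfn"\n",
           p2m->root, mfn_x(page_to_mfn(p2m->root)));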

> 
>      dump_pt_walk(page_to_maddr(p2m->root), addr,
>                   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
> diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
> index fd2fcacc26..376e80b6c7 100644
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -657,7 +657,7 @@ static void pvpmu_finish(struct domain *d,
> xen_pmu_params_t *params)
>  {
>      struct vcpu *v;
>      struct vpmu_struct *vpmu;
> -    uint64_t mfn;
> +    mfn_t mfn;
>      void *xenpmu_data;
> 
>      if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
> @@ -679,7 +679,7 @@ static void pvpmu_finish(struct domain *d,
> xen_pmu_params_t *params)
>      if ( xenpmu_data )
>      {
>          mfn = domain_page_map_to_mfn(xenpmu_data);
> -        ASSERT(mfn_valid(_mfn(mfn)));
> +        ASSERT(mfn_valid(mfn));
>          unmap_domain_page_global(xenpmu_data);
>          put_page_and_type(mfn_to_page(mfn));
>      }
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index bb1ffa3222..395ef6145a 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -186,7 +186,7 @@ void dump_pageframe_info(struct domain *d)
>                  }
>              }
>              printk("    DomPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> -                   _p(page_to_mfn(page)),
> +                   _p(mfn_x(page_to_mfn(page))),
>                     page->count_info, page->u.inuse.type_info);
>          }
>          spin_unlock(&d->page_alloc_lock);
> @@ -199,7 +199,7 @@ void dump_pageframe_info(struct domain *d)
>      page_list_for_each ( page, &d->xenpage_list )
>      {
>          printk("    XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> -               _p(page_to_mfn(page)),
> +               _p(mfn_x(page_to_mfn(page))),
>                 page->count_info, page->u.inuse.type_info);
>      }
>      spin_unlock(&d->page_alloc_lock);
> @@ -621,7 +621,8 @@ int arch_domain_soft_reset(struct domain *d)
>      struct page_info *page = virt_to_page(d->shared_info), *new_page;
>      int ret = 0;
>      struct domain *owner;
> -    unsigned long mfn, gfn;
> +    mfn_t mfn;
> +    unsigned long gfn;
>      p2m_type_t p2mt;
>      unsigned int i;
> 
> @@ -655,7 +656,7 @@ int arch_domain_soft_reset(struct domain *d)
>      ASSERT( owner == d );
> 
>      mfn = page_to_mfn(page);
> -    gfn = mfn_to_gmfn(d, mfn);
> +    gfn = mfn_to_gmfn(d, mfn_x(mfn));
> 
>      /*
>       * gfn == INVALID_GFN indicates that the shared_info page was never mapped
> @@ -664,7 +665,7 @@ int arch_domain_soft_reset(struct domain *d)
>      if ( gfn == gfn_x(INVALID_GFN) )
>          goto exit_put_page;
> 
> -    if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
> +    if ( !mfn_eq(get_gfn_query(d, gfn, &p2mt), mfn) )
>      {
>          printk(XENLOG_G_ERR "Failed to get Dom%d's shared_info GFN
> (%lx)\n",
>                 d->domain_id, gfn);
> @@ -681,7 +682,7 @@ int arch_domain_soft_reset(struct domain *d)
>          goto exit_put_gfn;
>      }
> 
> -    ret = guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), PAGE_ORDER_4K);
> +    ret = guest_physmap_remove_page(d, _gfn(gfn), mfn, PAGE_ORDER_4K);
>      if ( ret )
>      {
>          printk(XENLOG_G_ERR "Failed to remove Dom%d's shared_info frame
> %lx\n",
> @@ -690,7 +691,7 @@ int arch_domain_soft_reset(struct domain *d)
>          goto exit_put_gfn;
>      }
> 
> -    ret = guest_physmap_add_page(d, _gfn(gfn), _mfn(page_to_mfn(new_page)),
> +    ret = guest_physmap_add_page(d, _gfn(gfn), page_to_mfn(new_page),
>                                   PAGE_ORDER_4K);
>      if ( ret )
>      {
> @@ -988,7 +989,7 @@ int arch_set_info_guest(
>                  {
>                      if ( (page->u.inuse.type_info & PGT_type_mask) ==
>                           PGT_l4_page_table )
> -                        done = !fill_ro_mpt(_mfn(page_to_mfn(page)));
> +                        done = !fill_ro_mpt(page_to_mfn(page));
> 
>                      page_unlock(page);
>                  }
> @@ -1114,7 +1115,7 @@ int arch_set_info_guest(
>          l4_pgentry_t *l4tab;
> 
>          l4tab = map_domain_page(_mfn(pagetable_get_pfn(v->arch.guest_table)));
> -        *l4tab = l4e_from_pfn(page_to_mfn(cr3_page),
> +        *l4tab = l4e_from_pfn(mfn_x(page_to_mfn(cr3_page)),
>              _PAGE_PRESENT|_PAGE_RW|_PAGE_USER|_PAGE_ACCESSED);
>          unmap_domain_page(l4tab);
>      }
> @@ -1941,7 +1942,7 @@ int domain_relinquish_resources(struct domain *d)
>          if ( d->arch.pirq_eoi_map != NULL )
>          {
>              unmap_domain_page_global(d->arch.pirq_eoi_map);
> -            put_page_and_type(mfn_to_page(d->arch.pirq_eoi_map_mfn));
> +            put_page_and_type(mfn_to_page(_mfn(d->arch.pirq_eoi_map_mfn)));
>              d->arch.pirq_eoi_map = NULL;
>              d->arch.auto_unmask = 0;
>          }
> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index 3432a854dd..88046b39c9 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -331,13 +331,13 @@ void unmap_domain_page_global(const void *ptr)
>  }
> 
>  /* Translate a map-domain-page'd address to the underlying MFN */
> -unsigned long domain_page_map_to_mfn(const void *ptr)
> +mfn_t domain_page_map_to_mfn(const void *ptr)
>  {
>      unsigned long va = (unsigned long)ptr;
>      const l1_pgentry_t *pl1e;
> 
>      if ( va >= DIRECTMAP_VIRT_START )
> -        return virt_to_mfn(ptr);
> +        return _mfn(virt_to_mfn(ptr));
> 
>      if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
>      {
> @@ -350,5 +350,5 @@ unsigned long domain_page_map_to_mfn(const void
> *ptr)
>          pl1e = &__linear_l1_table[l1_linear_offset(va)];
>      }
> 
> -    return l1e_get_pfn(*pl1e);
> +    return l1e_get_mfn(*pl1e);
>  }
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 540ba089d7..9292ae5118 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -429,7 +429,7 @@ long arch_do_domctl(
>          {
>              if ( i >= max_pfns )
>                  break;
> -            mfn = page_to_mfn(page);
> +            mfn = mfn_x(page_to_mfn(page));
>              if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
>                                        i, &mfn, 1) )
>              {
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 9cf53b551c..1a83f27c0b 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -219,7 +219,7 @@ static int modified_memory(struct domain *d,
>              page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
>              if ( page )
>              {
> -                mfn_t gmfn = _mfn(page_to_mfn(page));
> +                mfn_t gmfn = page_to_mfn(page);
> 
>                  paging_mark_dirty(d, gmfn);
>                  /*
> diff --git a/xen/arch/x86/hvm/dom0_build.c
> b/xen/arch/x86/hvm/dom0_build.c
> index e8f746c70b..7789f6e571 100644
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -120,7 +120,7 @@ static int __init pvh_populate_memory_range(struct
> domain *d,
>              continue;
>          }
> 
> -        rc = guest_physmap_add_page(d, _gfn(start), _mfn(page_to_mfn(page)),
> +        rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
>                                      order);
>          if ( rc != 0 )
>          {
> @@ -270,7 +270,7 @@ static int __init
> pvh_setup_vmx_realmode_helpers(struct domain *d)
>      }
>      write_32bit_pse_identmap(ident_pt);
>      unmap_domain_page(ident_pt);
> -    put_page(mfn_to_page(mfn_x(mfn)));
> +    put_page(mfn_to_page(mfn));
>      d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] = gaddr;
>      if ( pvh_add_mem_range(d, gaddr, gaddr + PAGE_SIZE, E820_RESERVED) )
>              printk("Unable to set identity page tables as reserved in the memory
> map\n");
> @@ -288,7 +288,7 @@ static void __init pvh_steal_low_ram(struct domain
> *d, unsigned long start,
> 
>      for ( mfn = start; mfn < start + nr_pages; mfn++ )
>      {
> -        struct page_info *pg = mfn_to_page(mfn);
> +        struct page_info *pg = mfn_to_page(_mfn(mfn));
>          int rc;
> 
>          rc = unshare_xen_page_with_guest(pg, dom_io);
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 205b4cb685..dc7a018d1d 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2211,7 +2211,7 @@ int hvm_set_cr0(unsigned long value, bool_t
> may_defer)
>              v->arch.guest_table = pagetable_from_page(page);
> 
>                         HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR3 value = %lx, mfn = %lx",
> -                        v->arch.hvm_vcpu.guest_cr[3], page_to_mfn(page));
> +                        v->arch.hvm_vcpu.guest_cr[3], mfn_x(page_to_mfn(page)));
>          }
>      }
>      else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
> @@ -2546,7 +2546,7 @@ static void *_hvm_map_guest_frame(unsigned
> long gfn, bool_t permanent,
>          if ( unlikely(p2m_is_discard_write(p2mt)) )
>              *writable = 0;
>          else if ( !permanent )
> -            paging_mark_dirty(d, _mfn(page_to_mfn(page)));
> +            paging_mark_dirty(d, page_to_mfn(page));
>      }
> 
>      if ( !permanent )
> @@ -2588,7 +2588,7 @@ void *hvm_map_guest_frame_ro(unsigned long
> gfn, bool_t permanent)
> 
>  void hvm_unmap_guest_frame(void *p, bool_t permanent)
>  {
> -    unsigned long mfn;
> +    mfn_t mfn;
>      struct page_info *page;
> 
>      if ( !p )
> @@ -2609,7 +2609,7 @@ void hvm_unmap_guest_frame(void *p, bool_t
> permanent)
>          list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
>              if ( track->page == page )
>              {
> -                paging_mark_dirty(d, _mfn(mfn));
> +                paging_mark_dirty(d, mfn);
>                  list_del(&track->list);
>                  xfree(track);
>                  break;
> @@ -2626,7 +2626,7 @@ void
> hvm_mapped_guest_frames_mark_dirty(struct domain *d)
> 
>      spin_lock(&d->arch.hvm_domain.write_map.lock);
>      list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
> -        paging_mark_dirty(d, _mfn(page_to_mfn(track->page)));
> +        paging_mark_dirty(d, page_to_mfn(track->page));
>      spin_unlock(&d->arch.hvm_domain.write_map.lock);
>  }
> 
> @@ -3201,7 +3201,7 @@ static enum hvm_translation_result __hvm_copy(
>                  if ( xchg(&lastpage, gfn_x(gfn)) != gfn_x(gfn) )
>                      dprintk(XENLOG_G_DEBUG,
>                              "%pv attempted write to read-only gfn %#lx (mfn=%#lx)\n",
> -                            v, gfn_x(gfn), page_to_mfn(page));
> +                            v, gfn_x(gfn), mfn_x(page_to_mfn(page)));
>              }
>              else
>              {
> @@ -3209,7 +3209,7 @@ static enum hvm_translation_result __hvm_copy(
>                      memcpy(p, buf, count);
>                  else
>                      memset(p, 0, count);
> -                paging_mark_dirty(v->domain, _mfn(page_to_mfn(page)));
> +                paging_mark_dirty(v->domain, page_to_mfn(page));
>              }
>          }
>          else
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index f2e0b3f74a..5bd5cd788e 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -268,7 +268,7 @@ static void hvm_remove_ioreq_gfn(
>      struct domain *d, struct hvm_ioreq_page *iorp)
>  {
>      if ( guest_physmap_remove_page(d, _gfn(iorp->gfn),
> -                                   _mfn(page_to_mfn(iorp->page)), 0) )
> +                                   page_to_mfn(iorp->page), 0) )
>          domain_crash(d);
>      clear_page(iorp->va);
>  }
> @@ -281,9 +281,9 @@ static int hvm_add_ioreq_gfn(
>      clear_page(iorp->va);
> 
>      rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
> -                                _mfn(page_to_mfn(iorp->page)), 0);
> +                                page_to_mfn(iorp->page), 0);
>      if ( rc == 0 )
> -        paging_mark_dirty(d, _mfn(page_to_mfn(iorp->page)));
> +        paging_mark_dirty(d, page_to_mfn(iorp->page));
> 
>      return rc;
>  }
> diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
> index 088fbdf8ce..925bab2438 100644
> --- a/xen/arch/x86/hvm/stdvga.c
> +++ b/xen/arch/x86/hvm/stdvga.c
> @@ -590,7 +590,7 @@ void stdvga_init(struct domain *d)
>          if ( pg == NULL )
>              break;
>          s->vram_page[i] = pg;
> -        clear_domain_page(_mfn(page_to_mfn(pg)));
> +        clear_domain_page(page_to_mfn(pg));
>      }
> 
>      if ( i == ARRAY_SIZE(s->vram_page) )
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index b9cf423fd9..f50f931598 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1521,7 +1521,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
>          if ( !pg )
>              goto err;
> 
> -        clear_domain_page(_mfn(page_to_mfn(pg)));
> +        clear_domain_page(page_to_mfn(pg));
>          *this_hsa = page_to_maddr(pg);
>      }
> 
> @@ -1531,7 +1531,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
>          if ( !pg )
>              goto err;
> 
> -        clear_domain_page(_mfn(page_to_mfn(pg)));
> +        clear_domain_page(page_to_mfn(pg));
>          *this_vmcb = page_to_maddr(pg);
>      }
> 
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index f0fa59d7d5..070551e1ab 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -354,7 +354,7 @@ static void enable_hypercall_page(struct domain *d)
>          if ( page )
>              put_page(page);
>          gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN
> %#"PRI_mfn")\n",
> -                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> +                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
>          return;
>      }
> 
> @@ -414,7 +414,7 @@ static void initialize_vp_assist(struct vcpu *v)
> 
>   fail:
>      gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN
> %#"PRI_mfn")\n", gmfn,
> -             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> +             mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
>  }
> 
>  static void teardown_vp_assist(struct vcpu *v)
> @@ -494,7 +494,7 @@ static void update_reference_tsc(struct domain *d,
> bool_t initialize)
>          if ( page )
>              put_page(page);
>          gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN
> %#"PRI_mfn")\n",
> -                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
> +                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
>          return;
>      }
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index f62fe7e217..471d224539 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1441,7 +1441,7 @@ int vmx_vcpu_enable_pml(struct vcpu *v)
> 
>      vmx_vmcs_enter(v);
> 
> -    __vmwrite(PML_ADDRESS, page_to_mfn(v->arch.hvm_vmx.pml_pg) << PAGE_SHIFT);
> +    __vmwrite(PML_ADDRESS, page_to_maddr(v->arch.hvm_vmx.pml_pg));
>      __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
> 
>      v->arch.hvm_vmx.secondary_exec_control |=
> SECONDARY_EXEC_ENABLE_PML;
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 9cfa9b6965..40b91933bf 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2951,7 +2951,7 @@ gp_fault:
>  static int vmx_alloc_vlapic_mapping(struct domain *d)
>  {
>      struct page_info *pg;
> -    unsigned long mfn;
> +    mfn_t mfn;
> 
>      if ( !cpu_has_vmx_virtualize_apic_accesses )
>          return 0;
> @@ -2960,10 +2960,10 @@ static int vmx_alloc_vlapic_mapping(struct
> domain *d)
>      if ( !pg )
>          return -ENOMEM;
>      mfn = page_to_mfn(pg);
> -    clear_domain_page(_mfn(mfn));
> +    clear_domain_page(mfn);
>      share_xen_page_with_guest(pg, d, XENSHARE_writable);
> -    d->arch.hvm_domain.vmx.apic_access_mfn = mfn;
> -    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), _mfn(mfn),
> +    d->arch.hvm_domain.vmx.apic_access_mfn = mfn_x(mfn);
> +    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
>                         PAGE_ORDER_4K, p2m_get_hostp2m(d)->default_access);
> 
>      return 0;
> @@ -2974,7 +2974,7 @@ static void vmx_free_vlapic_mapping(struct
> domain *d)
>      unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
> 
>      if ( mfn != 0 )
> -        free_shared_domheap_page(mfn_to_page(mfn));
> +        free_shared_domheap_page(mfn_to_page(_mfn(mfn)));
>  }
> 
>  static void vmx_install_vlapic_mapping(struct vcpu *v)
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
> b/xen/arch/x86/hvm/vmx/vvmx.c
> index cd0ee0a307..35f7cde81a 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -84,7 +84,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
>          }
>          v->arch.hvm_vmx.vmread_bitmap = vmread_bitmap;
> 
> -        clear_domain_page(_mfn(page_to_mfn(vmread_bitmap)));
> +        clear_domain_page(page_to_mfn(vmread_bitmap));
> 
>          vmwrite_bitmap = alloc_domheap_page(NULL, 0);
>          if ( !vmwrite_bitmap )
> @@ -1704,7 +1704,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs
> *regs)
>                  nvcpu->nv_vvmcx = vvmcx;
>                  nvcpu->nv_vvmcxaddr = gpa;
>                  v->arch.hvm_vmx.vmcs_shadow_maddr =
> -                    pfn_to_paddr(domain_page_map_to_mfn(vvmcx));
> +                    mfn_to_maddr(domain_page_map_to_mfn(vvmcx));
>              }
>              else
>              {
> @@ -1790,7 +1790,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs
> *regs)
>          {
>              if ( writable )
>                  clear_vvmcs_launched(&nvmx->launched_list,
> -                                     domain_page_map_to_mfn(vvmcs));
> +                                     mfn_x(domain_page_map_to_mfn(vvmcs)));
>              else
>                  rc = VMFAIL_VALID;
>              hvm_unmap_guest_frame(vvmcs, 0);
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index d9df5ca69f..39038723ce 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -129,12 +129,6 @@
> 
>  #include "pv/mm.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  /* Mapping of the fixmap space needed early. */
>  l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>      l1_fixmap[L1_PAGETABLE_ENTRIES];
> diff --git a/xen/arch/x86/mm/guest_walk.c
> b/xen/arch/x86/mm/guest_walk.c
> index 6055fec1ad..f67aeda3d0 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -469,20 +469,20 @@ guest_walk_tables(struct vcpu *v, struct
> p2m_domain *p2m,
>      if ( l3p )
>      {
>          unmap_domain_page(l3p);
> -        put_page(mfn_to_page(mfn_x(gw->l3mfn)));
> +        put_page(mfn_to_page(gw->l3mfn));
>      }
>  #endif
>  #if GUEST_PAGING_LEVELS >= 3
>      if ( l2p )
>      {
>          unmap_domain_page(l2p);
> -        put_page(mfn_to_page(mfn_x(gw->l2mfn)));
> +        put_page(mfn_to_page(gw->l2mfn));
>      }
>  #endif
>      if ( l1p )
>      {
>          unmap_domain_page(l1p);
> -        put_page(mfn_to_page(mfn_x(gw->l1mfn)));
> +        put_page(mfn_to_page(gw->l1mfn));
>      }
> 
>      return walk_ok;
> diff --git a/xen/arch/x86/mm/hap/guest_walk.c
> b/xen/arch/x86/mm/hap/guest_walk.c
> index c550017ba4..cb3f9cebe7 100644
> --- a/xen/arch/x86/mm/hap/guest_walk.c
> +++ b/xen/arch/x86/mm/hap/guest_walk.c
> @@ -83,7 +83,7 @@ unsigned long
> hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
>          *pfec &= ~PFEC_page_present;
>          goto out_tweak_pfec;
>      }
> -    top_mfn = _mfn(page_to_mfn(top_page));
> +    top_mfn = page_to_mfn(top_page);
> 
>      /* Map the top-level table and call the tree-walker */
>      ASSERT(mfn_valid(top_mfn));
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index dc85e828cd..e45c1a1913 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -42,12 +42,6 @@
> 
>  #include "private.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
>  /************************************************/
>  /*          HAP VRAM TRACKING SUPPORT           */
>  /************************************************/
> diff --git a/xen/arch/x86/mm/hap/nested_ept.c
> b/xen/arch/x86/mm/hap/nested_ept.c
> index 14b1bb01e9..1738df69f6 100644
> --- a/xen/arch/x86/mm/hap/nested_ept.c
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> @@ -173,7 +173,7 @@ nept_walk_tables(struct vcpu *v, unsigned long l2ga,
> ept_walk_t *gw)
>              goto map_err;
>          gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
>          unmap_domain_page(lxp);
> -        put_page(mfn_to_page(mfn_x(lxmfn)));
> +        put_page(mfn_to_page(lxmfn));
> 
>          if ( nept_non_present_check(gw->lxe[lvl]) )
>              goto non_present;
> diff --git a/xen/arch/x86/mm/mem_sharing.c
> b/xen/arch/x86/mm/mem_sharing.c
> index 6f4be95515..6ecf0b27d5 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -152,11 +152,6 @@ static inline shr_handle_t get_next_handle(void)
>  #define mem_sharing_enabled(d) \
>      (is_hvm_domain(d) && (d)->arch.hvm_domain.mem_sharing_enabled)
> 
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
>  static atomic_t nr_saved_mfns   = ATOMIC_INIT(0);
>  static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
> 
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 054827aa88..24de202a1b 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -33,6 +33,10 @@
> 
>  #include "mm-locks.h"
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  #define atomic_read_ept_entry(__pepte)                              \
>      ( (ept_entry_t) { .epte = read_atomic(&(__pepte)->epte) } )
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index 0a811ccf28..7a88074c31 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -29,12 +29,6 @@
> 
>  #include "mm-locks.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
>  #define superpage_aligned(_x)  (((_x)&(SUPERPAGE_PAGES-1))==0)
> 
>  /* Enforce lock ordering when grabbing the "external" page_alloc lock */
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 3fbc537da6..2194b35bc7 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -47,12 +47,6 @@ bool_t __initdata opt_hap_1gb = 1, __initdata
> opt_hap_2mb = 1;
>  boolean_param("hap_1gb", opt_hap_1gb);
>  boolean_param("hap_2mb", opt_hap_2mb);
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
>  DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
> 
>  /* Init the datastructures for later use by the p2m code */
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index 1e2c9ba4cc..cb97642cbc 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -47,12 +47,6 @@
>  /* Per-CPU variable for enforcing the lock ordering */
>  DEFINE_PER_CPU(int, mm_lock_level);
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
>  /************************************************/
>  /*              LOG DIRTY SUPPORT               */
>  /************************************************/
> diff --git a/xen/arch/x86/mm/shadow/private.h
> b/xen/arch/x86/mm/shadow/private.h
> index 6a03370402..b9cc680f4e 100644
> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -315,7 +315,7 @@ static inline int page_is_out_of_sync(struct page_info
> *p)
> 
>  static inline int mfn_is_out_of_sync(mfn_t gmfn)
>  {
> -    return page_is_out_of_sync(mfn_to_page(mfn_x(gmfn)));
> +    return page_is_out_of_sync(mfn_to_page(gmfn));
>  }
> 
>  static inline int page_oos_may_write(struct page_info *p)
> @@ -326,7 +326,7 @@ static inline int page_oos_may_write(struct page_info
> *p)
> 
>  static inline int mfn_oos_may_write(mfn_t gmfn)
>  {
> -    return page_oos_may_write(mfn_to_page(mfn_x(gmfn)));
> +    return page_oos_may_write(mfn_to_page(gmfn));
>  }
>  #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
> 
> @@ -465,18 +465,6 @@ void sh_reset_l3_up_pointers(struct vcpu *v);
>   * MFN/page-info handling
>   */
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
> -#undef page_to_mfn
> -#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
> -
> -/* Override pagetable_t <-> struct page_info conversions to work with mfn_t */
> -#undef pagetable_get_page
> -#define pagetable_get_page(x)   mfn_to_page(pagetable_get_mfn(x))
> -#undef pagetable_from_page
> -#define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
> -
>  #define backpointer(sp) _mfn(pdx_to_pfn((unsigned long)(sp)->v.sh.back))
>  static inline unsigned long __backpointer(const struct page_info *sp)
>  {
> diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
> index 4fc967f893..a87987da6f 100644
> --- a/xen/arch/x86/numa.c
> +++ b/xen/arch/x86/numa.c
> @@ -430,7 +430,7 @@ static void dump_numa(unsigned char key)
>          spin_lock(&d->page_alloc_lock);
>          page_list_for_each(page, &d->page_list)
>          {
> -            i = phys_to_nid((paddr_t)page_to_mfn(page) << PAGE_SHIFT);
> +            i = phys_to_nid(page_to_maddr(page));
>              page_num_node[i]++;
>          }
>          spin_unlock(&d->page_alloc_lock);
> diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
> index 0eb409758f..ba950af4a8 100644
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -241,7 +241,7 @@ ret_t do_physdev_op(int cmd,
> XEN_GUEST_HANDLE_PARAM(void) arg)
>          }
> 
>          if ( cmpxchg(&currd->arch.pirq_eoi_map_mfn,
> -                     0, page_to_mfn(page)) != 0 )
> +                     0, mfn_x(page_to_mfn(page))) != 0 )
>          {
>              put_page_and_type(page);
>              ret = -EBUSY;
> diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
> index 97d8438600..5957cb5085 100644
> --- a/xen/arch/x86/pv/callback.c
> +++ b/xen/arch/x86/pv/callback.c
> @@ -31,12 +31,6 @@
> 
>  #include <public/callback.h>
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  static int register_guest_nmi_callback(unsigned long address)
>  {
>      struct vcpu *curr = current;
> diff --git a/xen/arch/x86/pv/descriptor-tables.c
> b/xen/arch/x86/pv/descriptor-tables.c
> index 81973af124..f2b20f9910 100644
> --- a/xen/arch/x86/pv/descriptor-tables.c
> +++ b/xen/arch/x86/pv/descriptor-tables.c
> @@ -25,16 +25,6 @@
>  #include <asm/p2m.h>
>  #include <asm/pv/mm.h>
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> -/*******************
> - * Descriptor Tables
> - */
> -

Is the comment wrong?

>  void pv_destroy_gdt(struct vcpu *v)
>  {
>      l1_pgentry_t *pl1e;
> diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
> index dcbee43e8f..e9a893ba47 100644
> --- a/xen/arch/x86/pv/dom0_build.c
> +++ b/xen/arch/x86/pv/dom0_build.c
> @@ -22,6 +22,12 @@
> 
>  #include "mm.h"
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef page_to_mfn
> +#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  /* Allow ring-3 access in long mode as guest cannot use ring 1 ... */
>  #define BASE_PROT (_PAGE_PRESENT|_PAGE_RW|_PAGE_ACCESSED|_PAGE_USER)
>  #define L1_PROT (BASE_PROT|_PAGE_GUEST_KERNEL)
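
(Worth noting the inversion here, since it makes the rest of the patch
easier to review: before this series an already-converted file opted *in*
to mfn_t with

    #undef page_to_mfn
    #define page_to_mfn(pg) _mfn(__page_to_mfn(pg))

whereas after it a not-yet-converted file like this one opts back *out*
with

    #undef page_to_mfn
    #define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))

until someone cleans it up. Both patterns are taken verbatim from the hunks
in this patch.)
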
> diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
> index 90d5569be1..4ca3205821 100644
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -16,12 +16,6 @@
> 
>  #include "mm.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  static void noreturn continue_nonidle_domain(struct vcpu *v)
>  {
>      check_wakeup_from_wait();
> diff --git a/xen/arch/x86/pv/emul-gate-op.c b/xen/arch/x86/pv/emul-gate-
> op.c
> index 0f89c91dff..5cdb54c937 100644
> --- a/xen/arch/x86/pv/emul-gate-op.c
> +++ b/xen/arch/x86/pv/emul-gate-op.c
> @@ -41,12 +41,6 @@
> 
>  #include "emulate.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  static int read_gate_descriptor(unsigned int gate_sel,
>                                  const struct vcpu *v,
>                                  unsigned int *sel,
> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-
> op.c
> index dd90713acf..9ccbd021ef 100644
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -43,16 +43,6 @@
>  #include "emulate.h"
>  #include "mm.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> -/***********************
> - * I/O emulation support
> - */
> -

What's wrong with the comment?

>  struct priv_op_ctxt {
>      struct x86_emulate_ctxt ctxt;
>      struct {
> diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c
> index aaca228c6b..97323367c5 100644
> --- a/xen/arch/x86/pv/grant_table.c
> +++ b/xen/arch/x86/pv/grant_table.c
> @@ -27,12 +27,6 @@
> 
>  #include "mm.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  static unsigned int grant_to_pte_flags(unsigned int grant_flags,
>                                         unsigned int cache_flags)
>  {
> diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
> index e45d628deb..bdb09bfa75 100644
> --- a/xen/arch/x86/pv/mm.c
> +++ b/xen/arch/x86/pv/mm.c
> @@ -169,7 +169,7 @@ void init_guest_l4_table(l4_pgentry_t l4tab[], const
> struct domain *d,
>      BUILD_BUG_ON(root_pgt_pv_xen_slots != ROOT_PAGETABLE_PV_XEN_SLOTS);
>  #endif
>      l4tab[l4_table_offset(LINEAR_PT_VIRT_START)] =
> -        l4e_from_pfn(domain_page_map_to_mfn(l4tab), __PAGE_HYPERVISOR_RW);
> +        l4e_from_mfn(domain_page_map_to_mfn(l4tab), __PAGE_HYPERVISOR_RW);
>      l4tab[l4_table_offset(PERDOMAIN_VIRT_START)] =
>          l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
>      if ( zap_ro_mpt || is_pv_32bit_domain(d) )
> diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-
> fault.c
> index 6b2976d3df..a7b7eb5113 100644
> --- a/xen/arch/x86/pv/ro-page-fault.c
> +++ b/xen/arch/x86/pv/ro-page-fault.c
> @@ -33,12 +33,6 @@
>  #include "emulate.h"
>  #include "mm.h"
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  /*********************
>   * Writable Pagetables
>   */
> diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
> index 3ca716c59f..663966bc74 100644
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -46,12 +46,6 @@
>  #include <mach_wakecpu.h>
>  #include <smpboot_hooks.h>
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  #define setup_trampoline()        (bootsym_phys(trampoline_realmode_entry))
> 
>  unsigned long __read_mostly trampoline_phys;
> diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
> index 59d7c477f4..e9522f06ec 100644
> --- a/xen/arch/x86/tboot.c
> +++ b/xen/arch/x86/tboot.c
> @@ -184,7 +184,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
> 
>      for ( mfn = 0; mfn < max_page; mfn++ )
>      {
> -        struct page_info *page = mfn_to_page(mfn);
> +        struct page_info *page = mfn_to_page(_mfn(mfn));
> 
>          if ( !mfn_valid(_mfn(mfn)) )
>              continue;
> @@ -276,7 +276,7 @@ static void tboot_gen_xenheap_integrity(const
> uint8_t key[TB_KEY_SIZE],
>      vmac_set_key((uint8_t *)key, &ctx);
>      for ( mfn = 0; mfn < max_page; mfn++ )
>      {
> -        struct page_info *page = __mfn_to_page(mfn);
> +        struct page_info *page = mfn_to_page(_mfn(mfn));
> 
>          if ( !mfn_valid(_mfn(mfn)) )
>              continue;
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 86506f3747..b85394d1f9 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -811,7 +811,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> 
>              gdprintk(XENLOG_WARNING,
>                       "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
> -                     gmfn, page ? page_to_mfn(page) : -1UL, base);
> +                     gmfn, page ? mfn_x(page_to_mfn(page)) : -1UL, base);

Would this not be better as mfn_x(page ? page_to_mfn(page) : INVALID_MFN), as you have done elsewhere?
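
i.e. something like (untested):

    gdprintk(XENLOG_WARNING,
             "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
             gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);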

  Paul

>              return 0;
>          }
> 
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index 11746730b4..971ccfcbbe 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -40,6 +40,12 @@ asm(".file \"" __FILE__ "\"");
>  #include <asm/mem_sharing.h>
>  #include <public/memory.h>
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef page_to_mfn
> +#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START;
> 
>  l2_pgentry_t *compat_idle_pg_table_l2;
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 5aebcf265f..e8302e8e1b 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -1192,7 +1192,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long
> gfn, unsigned offset)
>      }
> 
>      v->vcpu_info = new_info;
> -    v->vcpu_info_mfn = _mfn(page_to_mfn(page));
> +    v->vcpu_info_mfn = page_to_mfn(page);
> 
>      /* Set new vcpu_info pointer /before/ setting pending flags. */
>      smp_wmb();
> @@ -1225,7 +1225,7 @@ void unmap_vcpu_info(struct vcpu *v)
> 
>      vcpu_info_reset(v); /* NB: Clobbers v->vcpu_info_mfn */
> 
> -    put_page_and_type(mfn_to_page(mfn_x(mfn)));
> +    put_page_and_type(mfn_to_page(mfn));
>  }
> 
>  int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 6d20b17739..2afde596d9 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -40,6 +40,12 @@
>  #include <xsm/xsm.h>
>  #include <asm/flushtlb.h>
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef page_to_mfn
> +#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  /* Per-domain grant information. */
>  struct grant_table {
>      /*
> diff --git a/xen/common/kimage.c b/xen/common/kimage.c
> index afd8292cc1..210241dfb7 100644
> --- a/xen/common/kimage.c
> +++ b/xen/common/kimage.c
> @@ -23,12 +23,6 @@
> 
>  #include <asm/page.h>
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg)  _mfn(__page_to_mfn(pg))
> -
>  /*
>   * When kexec transitions to the new kernel there is a one-to-one
>   * mapping between physical and virtual addresses.  On processors
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index ad987e0f29..e467f271c7 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -29,6 +29,12 @@
>  #include <public/memory.h>
>  #include <xsm/xsm.h>
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef page_to_mfn
> +#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  struct memop_args {
>      /* INPUT */
>      struct domain *domain;     /* Domain to be affected. */
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 472c6fe329..5e7d74e274 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -150,6 +150,12 @@
>  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>  #endif
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef page_to_mfn
> +#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  /*
>   * Comma-separated list of hexadecimal page numbers containing bad bytes.
>   * e.g. 'badpage=0x3f45,0x8a321'.
> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
> index c955cf7167..1adb96f00c 100644
> --- a/xen/common/tmem.c
> +++ b/xen/common/tmem.c
> @@ -243,7 +243,7 @@ static void tmem_persistent_pool_page_put(void
> *page_va)
>      struct page_info *pi;
> 
>      ASSERT(IS_PAGE_ALIGNED(page_va));
> -    pi = mfn_to_page(virt_to_mfn(page_va));
> +    pi = mfn_to_page(_mfn(virt_to_mfn(page_va)));
>      ASSERT(IS_VALID_PAGE(pi));
>      __tmem_free_page_thispool(pi);
>  }
> diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
> index bd52e44faf..bf7b14f79a 100644
> --- a/xen/common/tmem_xen.c
> +++ b/xen/common/tmem_xen.c
> @@ -14,10 +14,6 @@
>  #include <xen/cpu.h>
>  #include <xen/init.h>
> 
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
>  bool __read_mostly opt_tmem;
>  boolean_param("tmem", opt_tmem);
> 
> diff --git a/xen/common/trace.c b/xen/common/trace.c
> index 2e18702317..cf8f8b0997 100644
> --- a/xen/common/trace.c
> +++ b/xen/common/trace.c
> @@ -42,6 +42,12 @@ CHECK_t_buf;
>  #define compat_t_rec t_rec
>  #endif
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef page_to_mfn
> +#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  /* opt_tbuf_size: trace buffer size (in pages) for each cpu */
>  static unsigned int opt_tbuf_size;
>  static unsigned int opt_tevt_mask;
> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> index 0b23f8fb97..10f32b29e0 100644
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -36,7 +36,7 @@ void __init vm_init_type(enum vmap_region type, void
> *start, void *end)
>      {
>          struct page_info *pg = alloc_domheap_page(NULL, 0);
> 
> -        map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
> +        map_pages_to_xen(va, mfn_x(page_to_mfn(pg)), 1, PAGE_HYPERVISOR);
>          clear_page((void *)va);
>      }
>      bitmap_fill(vm_bitmap(type), vm_low[type]);
> @@ -107,7 +107,8 @@ static void *vm_alloc(unsigned int nr, unsigned int
> align,
>          {
>              unsigned long va = (unsigned long)vm_bitmap(t) + vm_top[t] / 8;
> 
> -            if ( !map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR) )
> +            if ( !map_pages_to_xen(va, mfn_x(page_to_mfn(pg)),
> +                                   1, PAGE_HYPERVISOR) )
>              {
>                  clear_page((void *)va);
>                  vm_top[t] += PAGE_SIZE * 8;
> @@ -258,7 +259,7 @@ static void *vmalloc_type(size_t size, enum
> vmap_region type)
>          pg = alloc_domheap_page(NULL, 0);
>          if ( pg == NULL )
>              goto error;
> -        mfn[i] = _mfn(page_to_mfn(pg));
> +        mfn[i] = page_to_mfn(pg);
>      }
> 
>      va = __vmap(mfn, 1, pages, 1, PAGE_HYPERVISOR, type);
> @@ -270,7 +271,7 @@ static void *vmalloc_type(size_t size, enum
> vmap_region type)
> 
>   error:
>      while ( i-- )
> -        free_domheap_page(mfn_to_page(mfn_x(mfn[i])));
> +        free_domheap_page(mfn_to_page(mfn[i]));
>      xfree(mfn);
>      return NULL;
>  }
> diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
> index 5acdde5691..fecdfb3697 100644
> --- a/xen/common/xenoprof.c
> +++ b/xen/common/xenoprof.c
> @@ -22,8 +22,6 @@
>  /* Override macros from asm/page.h to make them work with mfn_t */
>  #undef virt_to_mfn
>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> 
>  /* Limit amount of pages used for shared buffer (per domain) */
>  #define MAX_OPROF_SHARED_PAGES 32
> diff --git a/xen/drivers/passthrough/amd/iommu_map.c
> b/xen/drivers/passthrough/amd/iommu_map.c
> index fd2327d3e5..bd62c2ce90 100644
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -25,6 +25,12 @@
>  #include "../ats.h"
>  #include <xen/pci.h>
> 
> +/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
> +#undef page_to_mfn
> +#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
> +#undef mfn_to_page
> +#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
> +
>  /* Given pfn and page table level, return pde index */
>  static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
>  {
> diff --git a/xen/drivers/passthrough/iommu.c
> b/xen/drivers/passthrough/iommu.c
> index 1aecf7cf34..2c44fabf99 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -184,7 +184,7 @@ void __hwdom_init iommu_hwdom_init(struct
> domain *d)
> 
>          page_list_for_each ( page, &d->page_list )
>          {
> -            unsigned long mfn = page_to_mfn(page);
> +            unsigned long mfn = mfn_x(page_to_mfn(page));
>              unsigned long gfn = mfn_to_gmfn(d, mfn);
>              unsigned int mapping = IOMMUF_readable;
>              int ret;
> diff --git a/xen/drivers/passthrough/x86/iommu.c
> b/xen/drivers/passthrough/x86/iommu.c
> index 0253823173..68182afd91 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -58,7 +58,7 @@ int arch_iommu_populate_page_table(struct domain
> *d)
>          if ( is_hvm_domain(d) ||
>              (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
>          {
> -            unsigned long mfn = page_to_mfn(page);
> +            unsigned long mfn = mfn_x(page_to_mfn(page));
>              unsigned long gfn = mfn_to_gmfn(d, mfn);
> 
>              if ( gfn != gfn_x(INVALID_GFN) )
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 737a429409..3eb4b68761 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -138,7 +138,7 @@ extern vaddr_t xenheap_virt_start;
>  #endif
> 
>  #ifdef CONFIG_ARM_32
> -#define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
> +#define is_xen_heap_page(page) is_xen_heap_mfn(mfn_x(__page_to_mfn(page)))
>  #define is_xen_heap_mfn(mfn) ({                                 \
>      unsigned long mfn_ = (mfn);                                 \
>      (mfn_ >= mfn_x(xenheap_mfn_start) &&                        \
> @@ -220,12 +220,14 @@ static inline void __iomem *ioremap_wc(paddr_t
> start, size_t len)
>  })
> 
>  /* Convert between machine frame numbers and page-info structures. */
> -#define __mfn_to_page(mfn)  (frame_table + (pfn_to_pdx(mfn) - frametable_base_pdx))
> -#define __page_to_mfn(pg)   pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
> +#define __mfn_to_page(mfn)                                          \
> +    (frame_table + (pfn_to_pdx(mfn_x(mfn)) - frametable_base_pdx))
> +#define __page_to_mfn(pg)                                           \
> +    _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx))
> 
>  /* Convert between machine addresses and page-info structures. */
> -#define maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
> -#define page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
> +#define maddr_to_page(ma) __mfn_to_page(maddr_to_mfn(ma))
> +#define page_to_maddr(pg) (mfn_to_maddr(__page_to_mfn(pg)))
> 
>  /* Convert between frame number and address formats.  */
>  #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
> @@ -235,7 +237,7 @@ static inline void __iomem *ioremap_wc(paddr_t
> start, size_t len)
>  #define gaddr_to_gfn(ga)    _gfn(paddr_to_pfn(ga))
>  #define mfn_to_maddr(mfn)   pfn_to_paddr(mfn_x(mfn))
>  #define maddr_to_mfn(ma)    _mfn(paddr_to_pfn(ma))
> -#define vmap_to_mfn(va)     paddr_to_pfn(virt_to_maddr((vaddr_t)va))
> +#define vmap_to_mfn(va)     maddr_to_mfn(virt_to_maddr((vaddr_t)va))
>  #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
> 
>  /* Page-align address and convert to frame number format */
> @@ -309,7 +311,7 @@ static inline struct page_info *virt_to_page(const void
> *v)
> 
>  static inline void *page_to_virt(const struct page_info *pg)
>  {
> -    return mfn_to_virt(page_to_mfn(pg));
> +    return mfn_to_virt(mfn_x(__page_to_mfn(pg)));
>  }
> 
>  struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index faadcfe8fe..87c9994974 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -276,7 +276,7 @@ static inline struct page_info *get_page_from_gfn(
>  {
>      struct page_info *page;
>      p2m_type_t p2mt;
> -    unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
> +    mfn_t mfn = p2m_lookup(d, _gfn(gfn), &p2mt);
> 
>      if (t)
>          *t = p2mt;
> @@ -284,7 +284,7 @@ static inline struct page_info *get_page_from_gfn(
>      if ( !p2m_is_any_ram(p2mt) )
>          return NULL;
> 
> -    if ( !mfn_valid(_mfn(mfn)) )
> +    if ( !mfn_valid(mfn) )
>          return NULL;
>      page = mfn_to_page(mfn);
> 
> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> index f2e0f498c4..984f54c3fa 100644
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -259,7 +259,7 @@ struct page_info
> 
>  #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
>  #define is_xen_heap_mfn(mfn) \
> -    (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
> +    (__mfn_valid(mfn) &&
> is_xen_heap_page(__mfn_to_page(_mfn(mfn))))
>  #define is_xen_fixed_mfn(mfn)                     \
>      ((((mfn) << PAGE_SHIFT) >= __pa(&_stext)) &&  \
>       (((mfn) << PAGE_SHIFT) <= __pa(&__2M_rwdata_end)))
> @@ -369,7 +369,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct
> domain *l1e_owner);
> 
>  static inline bool get_page_from_mfn(mfn_t mfn, struct domain *d)
>  {
> -    struct page_info *page = __mfn_to_page(mfn_x(mfn));
> +    struct page_info *page = __mfn_to_page(mfn);
> 
>      if ( unlikely(!mfn_valid(mfn)) || unlikely(!get_page(page, d)) )
>      {
> @@ -463,10 +463,10 @@ extern paddr_t mem_hotplug;
>  #define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
> 
>  #define compat_machine_to_phys_mapping ((unsigned int *)RDWR_COMPAT_MPT_VIRT_START)
> -#define _set_gpfn_from_mfn(mfn, pfn) ({                        \
> -    struct domain *d = page_get_owner(__mfn_to_page(mfn));     \
> -    unsigned long entry = (d && (d == dom_cow)) ?              \
> -        SHARED_M2P_ENTRY : (pfn);                              \
> +#define _set_gpfn_from_mfn(mfn, pfn) ({                         \
> +    struct domain *d = page_get_owner(__mfn_to_page(_mfn(mfn)));    \
> +    unsigned long entry = (d && (d == dom_cow)) ?               \
> +        SHARED_M2P_ENTRY : (pfn);                               \
>      ((void)((mfn) >= (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 || \
>              (compat_machine_to_phys_mapping[(mfn)] = (unsigned int)(entry))), \
>       machine_to_phys_mapping[(mfn)] = (entry));                \
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 70f00c332f..18eac537c9 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -480,7 +480,7 @@ static inline struct page_info *get_page_from_gfn(
>      /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
>      if ( t )
>          *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct;
> -    page = __mfn_to_page(gfn);
> +    page = __mfn_to_page(_mfn(gfn));
>      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
>  }
> 
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 45ca742678..8737ef16ff 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -88,10 +88,10 @@
>      ((paddr_t)(((x).l4 & (PADDR_MASK&PAGE_MASK))))
> 
>  /* Get pointer to info structure of page mapped by pte (struct page_info *). */
> -#define l1e_get_page(x)           (__mfn_to_page(l1e_get_pfn(x)))
> -#define l2e_get_page(x)           (__mfn_to_page(l2e_get_pfn(x)))
> -#define l3e_get_page(x)           (__mfn_to_page(l3e_get_pfn(x)))
> -#define l4e_get_page(x)           (__mfn_to_page(l4e_get_pfn(x)))
> +#define l1e_get_page(x)           (__mfn_to_page(l1e_get_mfn(x)))
> +#define l2e_get_page(x)           (__mfn_to_page(l2e_get_mfn(x)))
> +#define l3e_get_page(x)           (__mfn_to_page(l3e_get_mfn(x)))
> +#define l4e_get_page(x)           (__mfn_to_page(l4e_get_mfn(x)))
> 
>  /* Get pte access flags (unsigned int). */
>  #define l1e_get_flags(x)           (get_pte_flags((x).l1))
> @@ -157,10 +157,10 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
>  #define l4e_from_intpte(intpte)    ((l4_pgentry_t) { (intpte_t)(intpte) })
> 
>  /* Construct a pte from a page pointer and access flags. */
> -#define l1e_from_page(page, flags) l1e_from_pfn(__page_to_mfn(page), (flags))
> -#define l2e_from_page(page, flags) l2e_from_pfn(__page_to_mfn(page), (flags))
> -#define l3e_from_page(page, flags) l3e_from_pfn(__page_to_mfn(page), (flags))
> -#define l4e_from_page(page, flags) l4e_from_pfn(__page_to_mfn(page), (flags))
> +#define l1e_from_page(page, flags) l1e_from_mfn(__page_to_mfn(page), (flags))
> +#define l2e_from_page(page, flags) l2e_from_mfn(__page_to_mfn(page), (flags))
> +#define l3e_from_page(page, flags) l3e_from_mfn(__page_to_mfn(page), (flags))
> +#define l4e_from_page(page, flags) l4e_from_mfn(__page_to_mfn(page), (flags))
> 
>  /* Add extra flags to an existing pte. */
>  #define l1e_add_flags(x, flags)    ((x).l1 |= put_pte_flags(flags))
> @@ -215,13 +215,13 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
>  /* Page-table type. */
>  typedef struct { u64 pfn; } pagetable_t;
>  #define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
> -#define pagetable_get_page(x)   __mfn_to_page((x).pfn)
> +#define pagetable_get_page(x)   __mfn_to_page(pagetable_get_mfn(x))
>  #define pagetable_get_pfn(x)    ((x).pfn)
>  #define pagetable_get_mfn(x)    _mfn(((x).pfn))
>  #define pagetable_is_null(x)    ((x).pfn == 0)
>  #define pagetable_from_pfn(pfn) ((pagetable_t) { (pfn) })
>  #define pagetable_from_mfn(mfn) ((pagetable_t) { mfn_x(mfn) })
> -#define pagetable_from_page(pg) pagetable_from_pfn(__page_to_mfn(pg))
> +#define pagetable_from_page(pg) pagetable_from_mfn(__page_to_mfn(pg))
>  #define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT)
>  #define pagetable_null()        pagetable_from_pfn(0)
> 
> @@ -240,12 +240,12 @@ void copy_page_sse2(void *, const void *);
>  #define __mfn_to_virt(mfn)  (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
> 
>  /* Convert between machine frame numbers and page-info structures. */
> -#define __mfn_to_page(mfn)  (frame_table + pfn_to_pdx(mfn))
> -#define __page_to_mfn(pg)   pdx_to_pfn((unsigned long)((pg) - frame_table))
> +#define __mfn_to_page(mfn)  (frame_table + pfn_to_pdx(mfn_x(mfn)))
> +#define __page_to_mfn(pg)   _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table)))
> 
>  /* Convert between machine addresses and page-info structures. */
> -#define __maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
> -#define __page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
> +#define __maddr_to_page(ma) __mfn_to_page(maddr_to_mfn(ma))
> +#define __page_to_maddr(pg) (mfn_to_maddr(__page_to_mfn(pg)))
> 
>  /* Convert between frame number and address formats.  */
>  #define __pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
> @@ -273,8 +273,8 @@ void copy_page_sse2(void *, const void *);
>  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
>  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
>  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> -#define vmap_to_mfn(va)     l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
> -#define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
> +#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
> +#define vmap_to_page(va)    __mfn_to_page(vmap_to_mfn(va))
> 
>  #endif /* !defined(__ASSEMBLY__) */
> 
> diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
> index 890bae5b9c..22ab65ba16 100644
> --- a/xen/include/xen/domain_page.h
> +++ b/xen/include/xen/domain_page.h
> @@ -34,7 +34,7 @@ void unmap_domain_page(const void *va);
>  /*
>   * Given a VA from map_domain_page(), return its underlying MFN.
>   */
> -unsigned long domain_page_map_to_mfn(const void *va);
> +mfn_t domain_page_map_to_mfn(const void *va);
> 
>  /*
>   * Similar to the above calls, except the mapping is accessible in all
> @@ -44,11 +44,11 @@ unsigned long domain_page_map_to_mfn(const void *va);
>  void *map_domain_page_global(mfn_t mfn);
>  void unmap_domain_page_global(const void *va);
> 
> -#define __map_domain_page(pg)       map_domain_page(_mfn(__page_to_mfn(pg)))
> +#define __map_domain_page(pg)       map_domain_page(__page_to_mfn(pg))
> 
>  static inline void *__map_domain_page_global(const struct page_info *pg)
>  {
> -    return map_domain_page_global(_mfn(__page_to_mfn(pg)));
> +    return map_domain_page_global(page_to_mfn(pg));
>  }
> 
>  #else /* !CONFIG_DOMAIN_PAGE */
> @@ -56,7 +56,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
>  #define map_domain_page(mfn)                __mfn_to_virt(mfn_x(mfn))
>  #define __map_domain_page(pg)               page_to_virt(pg)
>  #define unmap_domain_page(va)               ((void)(va))
> -#define domain_page_map_to_mfn(va)          virt_to_mfn((unsigned long)(va))
> +#define domain_page_map_to_mfn(va)          _mfn(virt_to_mfn((unsigned long)(va)))
> 
>  static inline void *map_domain_page_global(mfn_t mfn)
>  {
> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
> index 542c0b3f20..8516a0b131 100644
> --- a/xen/include/xen/tmem_xen.h
> +++ b/xen/include/xen/tmem_xen.h
> @@ -25,7 +25,7 @@
>  typedef uint32_t pagesize_t;  /* like size_t, must handle largest PAGE_SIZE */
> 
>  #define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
> -#define IS_VALID_PAGE(_pi)    mfn_valid(_mfn(page_to_mfn(_pi)))
> +#define IS_VALID_PAGE(_pi)    mfn_valid(page_to_mfn(_pi))
> 
>  extern struct page_list_head tmem_page_list;
>  extern spinlock_t tmem_page_list_lock;
> --
> 2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-06  9:11   ` Paul Durrant
@ 2017-10-06 10:49     ` Julien Grall
  2017-10-06 10:55       ` Paul Durrant
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-10-06 10:49 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Razvan Cojocaru, Konrad Rzeszutek Wilk, Jun Nakajima,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Shane Wang, Ian Jackson, Boris Ostrovsky, Gang Wei

Hi Paul,

On 06/10/17 10:11, Paul Durrant wrote:
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 0410b1e86b..1e7a0c6c40 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -38,12 +38,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>
>>   #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
>>
>> -/* Override macros from asm/mm.h to make them work with mfn_t */
>> -#undef mfn_to_page
>> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
>> -#undef page_to_mfn
>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>> -
>>   unsigned int __read_mostly p2m_ipa_bits;
>>
>>   /* Helpers to lookup the properties of each level */
>> @@ -98,7 +92,7 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>>       printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
>>
>>       printk("P2M @ %p mfn:0x%lx\n",
>> -           p2m->root, __page_to_mfn(p2m->root));
>> +           p2m->root, mfn_x(page_to_mfn(p2m->root)));
> 
> The format specifier should really be using PRI_mfn now. Same goes for others below.

Similarly, we could do much more clean-up in each chunk. So where do I 
stop? That's why I wrote down in the commit message that I will not 
handle all of the clean-up...
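
For reference, with Paul's suggestion the hunk above would read along 
these lines (a sketch only, assuming the PRI_mfn printk helper together 
with mfn_x(), as used elsewhere in the tree):

    printk("P2M @ %p mfn: %"PRI_mfn"\n",
           p2m->root, mfn_x(page_to_mfn(p2m->root)));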

>> diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
>> index 81973af124..f2b20f9910 100644
>> --- a/xen/arch/x86/pv/descriptor-tables.c
>> +++ b/xen/arch/x86/pv/descriptor-tables.c
>> @@ -25,16 +25,6 @@
>>   #include <asm/p2m.h>
>>   #include <asm/pv/mm.h>
>>
>> -/* Override macros from asm/page.h to make them work with mfn_t */
>> -#undef mfn_to_page
>> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
>> -#undef page_to_mfn
>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>> -
>> -/*******************
>> - * Descriptor Tables
>> - */
>> -
> 
> Is the comment wrong?

[...]

>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
>> index dd90713acf..9ccbd021ef 100644
>> --- a/xen/arch/x86/pv/emul-priv-op.c
>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>> @@ -43,16 +43,6 @@
>>   #include "emulate.h"
>>   #include "mm.h"
>>
>> -/* Override macros from asm/page.h to make them work with mfn_t */
>> -#undef mfn_to_page
>> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
>> -#undef page_to_mfn
>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>> -
>> -/***********************
>> - * I/O emulation support
>> - */
>> -
> 
> What's wrong with the comment?

The file is dedicated to I/O emulation support, as both the header and 
the file name say. I can understand why the comment was there given that 
there were macros defined that are not related to I/O. Now that they are 
dropped, why would you need a comment to separate the includes from the code?

[...]

>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>> index 86506f3747..b85394d1f9 100644
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -811,7 +811,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>
>>               gdprintk(XENLOG_WARNING,
>>                        "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
>> -                     gmfn, page ? page_to_mfn(page) : -1UL, base);
>> +                     gmfn, page ? mfn_x(page_to_mfn(page)) : -1UL, base);
> 
> Would this not be better as mfn_x(page ? page_to_mfn(page) : INVALID_MFN), as you have done elsewhere?

See above.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-06 10:49     ` Julien Grall
@ 2017-10-06 10:55       ` Paul Durrant
  2017-10-06 11:00         ` Julien Grall
  0 siblings, 1 reply; 27+ messages in thread
From: Paul Durrant @ 2017-10-06 10:55 UTC (permalink / raw)
  To: 'Julien Grall', xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Razvan Cojocaru, Konrad Rzeszutek Wilk, Jun Nakajima,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Shane Wang, Ian Jackson, Boris Ostrovsky, Gang Wei

> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@linaro.org]
> Sent: 06 October 2017 11:49
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xen.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> George Dunlap <George.Dunlap@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Jan Beulich <jbeulich@suse.com>; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>;
> Wei Liu <wei.liu2@citrix.com>; Razvan Cojocaru
> <rcojocaru@bitdefender.com>; Tamas K Lengyel <tamas@tklengyel.com>;
> Boris Ostrovsky <boris.ostrovsky@oracle.com>; Suravee Suthikulpanit
> <suravee.suthikulpanit@amd.com>; Jun Nakajima
> <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Gang Wei
> <gang.wei@intel.com>; Shane Wang <shane.wang@intel.com>
> Subject: Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and
> __mfn_to_page to use typesafe MFN
> 
> Hi Paul,
> 
> On 06/10/17 10:11, Paul Durrant wrote:
> >> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> >> index 0410b1e86b..1e7a0c6c40 100644
> >> --- a/xen/arch/arm/p2m.c
> >> +++ b/xen/arch/arm/p2m.c
> >> @@ -38,12 +38,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
> >>
> >>   #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
> >>
> >> -/* Override macros from asm/mm.h to make them work with mfn_t */
> >> -#undef mfn_to_page
> >> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> >> -#undef page_to_mfn
> >> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> >> -
> >>   unsigned int __read_mostly p2m_ipa_bits;
> >>
> >>   /* Helpers to lookup the properties of each level */
> >> @@ -98,7 +92,7 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
> >>       printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
> >>
> >>       printk("P2M @ %p mfn:0x%lx\n",
> >> -           p2m->root, __page_to_mfn(p2m->root));
> >> +           p2m->root, mfn_x(page_to_mfn(p2m->root)));
> >
> > The format specifier should really be using PRI_mfn now. Same goes for others below.
> 
> Similarly, we could do much more clean-up in each chunk. So where do I
> stop? That's why I wrote down in the commit message that I will not
> handle all of the clean-up...
> 

I realise you can't fix it all, but to me it seems reasonable to fix format specifiers in the calls you modify.

> >> diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
> >> index 81973af124..f2b20f9910 100644
> >> --- a/xen/arch/x86/pv/descriptor-tables.c
> >> +++ b/xen/arch/x86/pv/descriptor-tables.c
> >> @@ -25,16 +25,6 @@
> >>   #include <asm/p2m.h>
> >>   #include <asm/pv/mm.h>
> >>
> >> -/* Override macros from asm/page.h to make them work with mfn_t */
> >> -#undef mfn_to_page
> >> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> >> -#undef page_to_mfn
> >> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> >> -
> >> -/*******************
> >> - * Descriptor Tables
> >> - */
> >> -
> >
> > Is the comment wrong?
> 
> [...]
> 
> >> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> >> index dd90713acf..9ccbd021ef 100644
> >> --- a/xen/arch/x86/pv/emul-priv-op.c
> >> +++ b/xen/arch/x86/pv/emul-priv-op.c
> >> @@ -43,16 +43,6 @@
> >>   #include "emulate.h"
> >>   #include "mm.h"
> >>
> >> -/* Override macros from asm/page.h to make them work with mfn_t */
> >> -#undef mfn_to_page
> >> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> >> -#undef page_to_mfn
> >> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> >> -
> >> -/***********************
> >> - * I/O emulation support
> >> - */
> >> -
> >
> > What's wrong with the comment?
> 
> The file is dedicated to I/O emulation support as said in the header and
> the name. I can understand why it was there given there was macros
> defined not related to I/O. Now they are dropped, why would you need a
> comment to separate includes and the code?
> 

It makes the hunk look odd though. I think you should leave the comment alone in this patch, even if you do think it superfluous.

> [...]
> 
> >> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> >> index 86506f3747..b85394d1f9 100644
> >> --- a/xen/arch/x86/traps.c
> >> +++ b/xen/arch/x86/traps.c
> >> @@ -811,7 +811,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> >>
> >>               gdprintk(XENLOG_WARNING,
> >>                        "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
> >> -                     gmfn, page ? page_to_mfn(page) : -1UL, base);
> >> +                     gmfn, page ? mfn_x(page_to_mfn(page)) : -1UL, base);
> >
> > Would this not be better as mfn_x(page ? page_to_mfn(page) : INVALID_MFN), as you have done elsewhere?
> 
> See above.

And again, you are modifying the code so why not modify it such that it is coded appropriately, as you have in other places in this patch?

  Paul

> 
> --
> Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-06 10:55       ` Paul Durrant
@ 2017-10-06 11:00         ` Julien Grall
  2017-10-06 11:44           ` Paul Durrant
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-10-06 11:00 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Razvan Cojocaru, Konrad Rzeszutek Wilk, Jun Nakajima,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Shane Wang, Ian Jackson, Boris Ostrovsky, Gang Wei



On 06/10/17 11:55, Paul Durrant wrote:
>>>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
>>>> index dd90713acf..9ccbd021ef 100644
>>>> --- a/xen/arch/x86/pv/emul-priv-op.c
>>>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>>>> @@ -43,16 +43,6 @@
>>>>    #include "emulate.h"
>>>>    #include "mm.h"
>>>>
>>>> -/* Override macros from asm/page.h to make them work with mfn_t */
>>>> -#undef mfn_to_page
>>>> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
>>>> -#undef page_to_mfn
>>>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>>>> -
>>>> -/***********************
>>>> - * I/O emulation support
>>>> - */
>>>> -
>>>
>>> What's wrong with the comment?
>>
>> The file is dedicated to I/O emulation support, as both the header and
>> the file name say. I can understand why the comment was there given that
>> there were macros defined that are not related to I/O. Now that they are
>> dropped, why would you need a comment to separate the includes from the code?
>>
> 
> It makes the hunk look odd though. I think you should leave the comment alone in this patch, even if you do think it superfluous.

Please come to an agreement with Andrew... Here is his comment on the previous version:

"If you're making this change, please take out the Descriptor Tables
comment like you do with I/O below, because the entire file is dedicated
to descriptor table support and it will save me one item on a cleanup
patch :)."

> 
>> [...]
>>
>>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>>>> index 86506f3747..b85394d1f9 100644
>>>> --- a/xen/arch/x86/traps.c
>>>> +++ b/xen/arch/x86/traps.c
>>>> @@ -811,7 +811,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>>
>>>>                gdprintk(XENLOG_WARNING,
>>>>                         "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
>>>> -                     gmfn, page ? page_to_mfn(page) : -1UL, base);
>>>> +                     gmfn, page ? mfn_x(page_to_mfn(page)) : -1UL, base);
>>>
>>> Would this not be better as mfn_x(page ? page_to_mfn(page) : INVALID_MFN), as you have done elsewhere?
>>
>> See above.
> 
> And again, you are modifying the code so why not modify it such that it is coded appropriately, as you have in other places in this patch?

I will see what I can do when I have time to spend on clean-up...
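
For completeness, the form Paul suggests would look roughly like this (a 
sketch only; INVALID_MFN is itself an mfn_t, so both arms of the ternary 
have the same type and mfn_x() unwraps the result for the %lx specifier):

    gdprintk(XENLOG_WARNING,
             "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
             gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);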

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-06 11:00         ` Julien Grall
@ 2017-10-06 11:44           ` Paul Durrant
  0 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2017-10-06 11:44 UTC (permalink / raw)
  To: 'Julien Grall', xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Razvan Cojocaru, Konrad Rzeszutek Wilk, Jun Nakajima,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Shane Wang, Ian Jackson, Boris Ostrovsky, Gang Wei

> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@linaro.org]
> Sent: 06 October 2017 12:00
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xen.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> George Dunlap <George.Dunlap@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Jan Beulich <jbeulich@suse.com>; Konrad
> Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>;
> Wei Liu <wei.liu2@citrix.com>; Razvan Cojocaru
> <rcojocaru@bitdefender.com>; Tamas K Lengyel <tamas@tklengyel.com>;
> Boris Ostrovsky <boris.ostrovsky@oracle.com>; Suravee Suthikulpanit
> <suravee.suthikulpanit@amd.com>; Jun Nakajima
> <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Gang Wei
> <gang.wei@intel.com>; Shane Wang <shane.wang@intel.com>
> Subject: Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and
> __mfn_to_page to use typesafe MFN
> 
> 
> 
> On 06/10/17 11:55, Paul Durrant wrote:
> >>>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> >>>> index dd90713acf..9ccbd021ef 100644
> >>>> --- a/xen/arch/x86/pv/emul-priv-op.c
> >>>> +++ b/xen/arch/x86/pv/emul-priv-op.c
> >>>> @@ -43,16 +43,6 @@
> >>>>    #include "emulate.h"
> >>>>    #include "mm.h"
> >>>>
> >>>> -/* Override macros from asm/page.h to make them work with mfn_t */
> >>>> -#undef mfn_to_page
> >>>> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> >>>> -#undef page_to_mfn
> >>>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> >>>> -
> >>>> -/***********************
> >>>> - * I/O emulation support
> >>>> - */
> >>>> -
> >>>
> >>> What's wrong with the comment?
> >>
> >> The file is dedicated to I/O emulation support, as both the header and
> >> the file name say. I can understand why the comment was there given that
> >> there were macros defined that are not related to I/O. Now that they are
> >> dropped, why would you need a comment to separate the includes from the code?
> >>
> >
> > It makes the hunk look odd though. I think you should leave the comment alone in this patch, even if you do think it superfluous.
> 
> Please come to an agreement with Andrew... Here is his comment on the previous version:
> 
> "If you're making this change, please take out the Descriptor Tables
> comment like you do with I/O below, because the entire file is dedicated
> to descriptor table support and it will save me one item on a cleanup
> patch :)."

Ok, if Andrew is planning cleanup anyway and has specifically requested this then I withdraw my objection, since the end result will be the same.

  Paul

> 
> >
> >> [...]
> >>
> >>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> >>>> index 86506f3747..b85394d1f9 100644
> >>>> --- a/xen/arch/x86/traps.c
> >>>> +++ b/xen/arch/x86/traps.c
> >>>> @@ -811,7 +811,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> >>>>
> >>>>                gdprintk(XENLOG_WARNING,
> >>>>                         "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
> >>>> -                     gmfn, page ? page_to_mfn(page) : -1UL, base);
> >>>> +                     gmfn, page ? mfn_x(page_to_mfn(page)) : -1UL, base);
> >>>
> >>>> Would this not be better as mfn_x(page ? page_to_mfn(page) : INVALID_MFN), as you have done elsewhere?
> >>
> >> See above.
> >
> > And again, you are modifying the code so why not modify it such that it is coded appropriately, as you have in other places in this patch?
> 
> I will see what I can do when I have time to spend on clean-up...
> 
> --
> Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
                     ` (2 preceding siblings ...)
  2017-10-06  9:11   ` Paul Durrant
@ 2017-10-06 13:02   ` Boris Ostrovsky
  2017-10-09 11:42   ` Jan Beulich
  2017-10-10  5:18   ` Tian, Kevin
  5 siblings, 0 replies; 27+ messages in thread
From: Boris Ostrovsky @ 2017-10-06 13:02 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Jun Nakajima, Kevin Tian, Stefano Stabellini, Wei Liu,
	Suravee Suthikulpanit, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Tamas K Lengyel, Jan Beulich, Shane Wang, Gang Wei,
	Paul Durrant

On 10/05/2017 01:42 PM, Julien Grall wrote:
> Most of the users of page_to_mfn and mfn_to_page are either overriding
> the macros to make them work with mfn_t or using mfn_x/_mfn because the
> rest of the function uses mfn_t.
>
> So make __page_to_mfn and __mfn_to_page return mfn_t by default.
>
> Only reasonable clean-ups are done in this patch because it is
> already quite big. So some of the files now override page_to_mfn and
> mfn_to_page to avoid using mfn_t.
>
> Lastly, domain_page_to_mfn is also converted to use mfn_t given that
> most of the callers are now switched to _mfn(domain_page_to_mfn(...)).
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

SVM bits:

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
                     ` (3 preceding siblings ...)
  2017-10-06 13:02   ` Boris Ostrovsky
@ 2017-10-09 11:42   ` Jan Beulich
  2017-10-09 12:20     ` Julien Grall
  2017-10-10  5:18   ` Tian, Kevin
  5 siblings, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2017-10-09 11:42 UTC (permalink / raw)
  To: Julien Grall
  Cc: Tim Deegan, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel,
	Julien Grall, Paul Durrant, Tamas K Lengyel,
	Suravee Suthikulpanit, Shane Wang, Boris Ostrovsky, Gang Wei

>>> On 05.10.17 at 19:42, <julien.grall@linaro.org> wrote:
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -50,8 +50,6 @@ struct map_range_data
>  /* Override macros from asm/page.h to make them work with mfn_t */
>  #undef virt_to_mfn
>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))

With the patch dropping (I assume) all overrides of this kind, what
is the difference between the double-underscore-prefixed versions
of the two constructs you convert here and the plain ones? If
there's none (which I think is what the result here is meant to be),
then ideally the patch would drop the former altogether. In case
this means touching a lot more code, then at least I'd expect you
to convert all instances you touch anyway, and that you in
particular don't introduce any new ones.

But wait - the patch even introduces new overrides (doing the
inverse). What's the deal here? If that's again to limit patch size,
then I'd still prefer the global aliases to go away, and local (per
file) aliases to be retained as needed.
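
For reference, the inverse overrides in question look roughly like this 
in the files the patch leaves unconverted (a sketch):

    /* Override macros from asm/page.h to keep using raw frame numbers */
    #undef page_to_mfn
    #define page_to_mfn(pg)  mfn_x(__page_to_mfn(pg))   /* -> unsigned long */
    #undef mfn_to_page
    #define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))   /* <- unsigned long */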

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-09 11:42   ` Jan Beulich
@ 2017-10-09 12:20     ` Julien Grall
  2017-10-09 13:40       ` Jan Beulich
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-10-09 12:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel,
	Julien Grall, Paul Durrant, Tamas K Lengyel,
	Suravee Suthikulpanit, Shane Wang, Boris Ostrovsky, nd, Gang Wei

Hi Jan,

On 09/10/17 12:42, Jan Beulich wrote:
>>>> On 05.10.17 at 19:42, <julien.grall@linaro.org> wrote:
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -50,8 +50,6 @@ struct map_range_data
>>   /* Override macros from asm/page.h to make them work with mfn_t */
>>   #undef virt_to_mfn
>>   #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>> -#undef page_to_mfn
>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> 
> With the patch dropping (I assume) all overrides of this kind, what
> is the difference between the double-underscore-prefixed versions
> of the two constructs you convert here and the plain ones? If
> there's none (which I think is what the result here is meant to be),
> then ideally the patch would drop the former altogether. In case
> this means touching a lot more code, then at least I'd expect you
> to convert all instances you touch anyway, and that you in
> particular don't introduce any new ones.
> 
> But wait - the patch even introduces new overrides (doing the
> inverse). What's the deal here? If that's again to limit patch size,
> then I'd still prefer the global aliases to go away, and local (per
> file) aliases to be retained as needed.

It introduces new overrides because some of the code is not trivial to 
convert to use mfn_t. More effort is needed to see whether it is worth 
converting them, and I think that is out of scope for this series.

This series is meant to reduce the number of places where we override 
page_to_mfn to a handful, whilst still enforcing the use of 
mfn_t by default.
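
For context, what "typesafe" buys here: mfn_t is a distinct wrapper type, 
so passing a raw unsigned long where an mfn_t is expected fails to 
compile. A simplified sketch of what the wrapper boils down to in debug 
builds (release builds reduce it to a plain unsigned long):

    /* Sketch of the debug-build expansion of the typesafe machinery. */
    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }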

But I am not entirely sure what you are suggesting here. Are you saying 
we could define page_to_mfn/mfn_to_page in every file?

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-09 12:20     ` Julien Grall
@ 2017-10-09 13:40       ` Jan Beulich
  2017-10-09 13:48         ` Julien Grall
  0 siblings, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2017-10-09 13:40 UTC (permalink / raw)
  To: Julien Grall
  Cc: Tim Deegan, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel,
	Paul Durrant, Tamas K Lengyel, Suravee Suthikulpanit, Shane Wang,
	Boris Ostrovsky, nd, Gang Wei

>>> On 09.10.17 at 14:20, <julien.grall@arm.com> wrote:
> Hi Jan,
> 
> On 09/10/17 12:42, Jan Beulich wrote:
>>>>> On 05.10.17 at 19:42, <julien.grall@linaro.org> wrote:
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -50,8 +50,6 @@ struct map_range_data
>>>   /* Override macros from asm/page.h to make them work with mfn_t */
>>>   #undef virt_to_mfn
>>>   #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>>> -#undef page_to_mfn
>>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>> 
>> With the patch dropping (I assume) all overrides of this kind, what
>> is the difference between the double-underscore-prefixed versions
>> of the two constructs you convert here and the plain ones? If
>> there's none (which I think is what the result here is meant to be),
>> then ideally the patch would drop the former altogether. In case
>> this means touching a lot more code, then at least I'd expect you
>> to convert all instances you touch anyway, and that you in
>> particular don't introduce any new ones.
>> 
>> But wait - the patch even introduces new overrides (doing the
>> inverse). What's the deal here? If that's again to limit patch size,
>> then I'd still prefer the global aliases to go away, and local (per
>> file) aliases to be retained as needed.
> 
> It introduces new overrides because some of the code is not trivial to
> convert to use mfn_t. More effort is needed to see whether it is worth
> converting them, and I think that is out of scope for this series.
> 
> This series is meant to reduce the number of places where we override
> page_to_mfn to a handful, whilst still enforcing the use of
> mfn_t by default.
> 
> But I am not entirely sure what you are suggesting here. Are you saying
> we could define page_to_mfn/mfn_to_page in every file?

Actually the other way around: Globally only page_to_mfn() and
mfn_to_page() should remain. In files that need them
__page_to_mfn() and __mfn_to_page() could be added to limit
the scope of the change / the size of the patch.
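
In other words, something like this (a sketch of the idea only, using the 
x86 definitions from the patch for illustration):

    /* Globally, only the typesafe names would remain: */
    #define mfn_to_page(mfn)  (frame_table + pfn_to_pdx(mfn_x(mfn)))
    #define page_to_mfn(pg)   _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table)))

    /* A file still wanting raw frame numbers would then add, locally: */
    #define __mfn_to_page(mfn) mfn_to_page(_mfn(mfn))
    #define __page_to_mfn(pg)  mfn_x(page_to_mfn(pg))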

But first and foremost I would have wished that other than for
defining these overrides, the patch wouldn't leave around
__mfn_to_page() uses (which it does in a few x86 headers). But
then again maybe it's unavoidable to leave those in place until
full conversion has happened?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-09 13:40       ` Jan Beulich
@ 2017-10-09 13:48         ` Julien Grall
  2017-10-09 14:07           ` Jan Beulich
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-10-09 13:48 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel,
	Paul Durrant, Tamas K Lengyel, Suravee Suthikulpanit, Shane Wang,
	Boris Ostrovsky, nd, Gang Wei

Hi Jan,

On 09/10/17 14:40, Jan Beulich wrote:
>>>> On 09.10.17 at 14:20, <julien.grall@arm.com> wrote:
>> Hi Jan,
>>
>> On 09/10/17 12:42, Jan Beulich wrote:
>>>>>> On 05.10.17 at 19:42, <julien.grall@linaro.org> wrote:
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -50,8 +50,6 @@ struct map_range_data
>>>>    /* Override macros from asm/page.h to make them work with mfn_t */
>>>>    #undef virt_to_mfn
>>>>    #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>>>> -#undef page_to_mfn
>>>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>>>
>>> With the patch dropping (I assume) all overrides of this kind, what
>>> is the difference between the double-underscore-prefixed versions
>>> of the two constructs you convert here and the plain ones? If
>>> there's none (which I think is what the result here is meant to be),
>>> then ideally the patch would drop the former altogether. In case
>>> this means touching a lot more code, then at least I'd expect you
>>> to convert all instances you touch anyway, and that you in
>>> particular don't introduce any new ones.
>>>
>>> But wait - the patch even introduces new overrides (doing the
>>> inverse). What's the deal here? If that's again to limit patch size,
>>> then I'd still prefer the global aliases to go away, and local (per
>>> file) aliases to be retained as needed.
>>
>> It introduces new overrides because some of the code is not trivial to
>> convert to use mfn_t. More effort is needed to see whether it is worth
>> converting them, and I think that is out of scope for this series.
>>
>> This series is meant to reduce the number of places where we override
>> page_to_mfn to a handful, whilst still enforcing the use of
>> mfn_t by default.
>>
>> But I am not entirely sure what you are suggesting here. Are you saying
>> we could define page_to_mfn/mfn_to_page in every file?
> 
> Actually the other way around: Globally only page_to_mfn() and
> mfn_to_page() should remain. In files that need them
> __page_to_mfn() and __mfn_to_page() could be added to limit
> the scope of the change / the size of the patch.

I am still not sure I follow your suggestion here. If you define 
__page_to_mfn() in the code, then you would have to do the renaming in 
the files not converted to use mfn_t, therefore increasing the size of the patch...

> 
> But first and foremost I would have wished that other than for
> defining these overrides, the patch wouldn't leave around
> __mfn_to_page() uses (which it does in a few x86 headers). But
> then again maybe it's unavoidable to leave those in place until
> full conversion has happened?

You have to keep __mfn_to_page() in x86 headers because some files may 
override mfn_to_page(). So it is not possible to use the latter safely.

We could get rid of them once the hypervisor has fully switched to mfn_t.
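
The hazard, sketched: macros expand at their point of use, so a header 
macro written in terms of the plain name would silently pick up whatever 
per-file override happens to be in effect there:

    /* In a not-yet-converted .c file: */
    #undef mfn_to_page
    #define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))  /* takes unsigned long */

    /* Header macros therefore spell out the __ version directly, e.g.: */
    #define is_xen_heap_mfn(mfn) \
        (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(_mfn(mfn))))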

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-09 13:48         ` Julien Grall
@ 2017-10-09 14:07           ` Jan Beulich
  0 siblings, 0 replies; 27+ messages in thread
From: Jan Beulich @ 2017-10-09 14:07 UTC (permalink / raw)
  To: Julien Grall
  Cc: Tim Deegan, Kevin Tian, Stefano Stabellini, Wei Liu,
	Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel,
	Paul Durrant, Tamas K Lengyel, Suravee Suthikulpanit, Shane Wang,
	Boris Ostrovsky, nd, Gang Wei

>>> On 09.10.17 at 15:48, <julien.grall@arm.com> wrote:
> On 09/10/17 14:40, Jan Beulich wrote:
>>>>> On 09.10.17 at 14:20, <julien.grall@arm.com> wrote:
>>> On 09/10/17 12:42, Jan Beulich wrote:
>>>>>>> On 05.10.17 at 19:42, <julien.grall@linaro.org> wrote:
>>>>> --- a/xen/arch/arm/domain_build.c
>>>>> +++ b/xen/arch/arm/domain_build.c
>>>>> @@ -50,8 +50,6 @@ struct map_range_data
>>>>>    /* Override macros from asm/page.h to make them work with mfn_t */
>>>>>    #undef virt_to_mfn
>>>>>    #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>>>>> -#undef page_to_mfn
>>>>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>>>>
>>>> With the patch dropping (I assume) all overrides of this kind, what
>>>> is the difference between the double-underscore-prefixed versions
>>>> of the two constructs you convert here and the plain ones? If
>>>> there's none (which I think is what the result here is meant to be),
>>>> then ideally the patch would drop the former altogether. In case
>>>> this means touching a lot more code, then at least I'd expect you
>>>> to convert all instances you touch anyway, and that you in
>>>> particular don't introduce any new ones.
>>>>
>>>> But wait - the patch even introduces new overrides (doing the
>>>> inverse). What's the deal here? If that's again to limit patch size,
>>>> then I'd still prefer the global aliases to go away, and local (per
>>>> file) aliases to be retained as needed.
>>>
>>> It introduces new overrides because some of the code is not trivial to
>>> convert to use mfn_t. More effort is needed to see whether it is worth
>>> converting them, and I think that is out of scope for this series.
>>>
>>> This series is meant to reduce the number of places where we override
>>> page_to_mfn to a handful, whilst still enforcing the use of
>>> mfn_t by default.
>>>
>>> But I am not entirely sure what you are suggesting here. Are you saying
>>> we could define page_to_mfn/mfn_to_page in every file?
>> 
>> Actually the other way around: Globally only page_to_mfn() and
>> mfn_to_page() should remain. In files that need them
>> __page_to_mfn() and __mfn_to_page() could be added to limit
>> the scope of the change / the size of the patch.
> 
>> I am still not sure I follow your suggestion here. If you define
>> __page_to_mfn() in the code, then you would have to do the renaming in
>> the files not converted to use mfn_t, therefore increasing the size of the patch...
> 
>> 
>> But first and foremost I would have wished that other than for
>> defining these overrides, the patch wouldn't leave around
>> __mfn_to_page() uses (which it does in a few x86 headers). But
>> then again maybe it's unavoidable to leave those in place until
>> full conversion has happened?
> 
> You have to keep __mfn_to_page() in x86 headers because some files may 
> override mfn_to_page(). So it is not possible to use the latter safely.
> 
> We could get rid of them once the hypervisor has fully switched to mfn_t.

Okay, never mind then.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
  2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
                     ` (4 preceding siblings ...)
  2017-10-09 11:42   ` Jan Beulich
@ 2017-10-10  5:18   ` Tian, Kevin
  5 siblings, 0 replies; 27+ messages in thread
From: Tian, Kevin @ 2017-10-10  5:18 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Nakajima, Jun, Stefano Stabellini, Wei Liu,
	Suravee Suthikulpanit, Razvan Cojocaru, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Tamas K Lengyel, Jan Beulich, Wang, Shane,
	Boris Ostrovsky, Wei, Gang, Paul Durrant

> From: Julien Grall [mailto:julien.grall@linaro.org]
> Sent: Friday, October 6, 2017 1:42 AM
> 
> Most of the users of page_to_mfn and mfn_to_page are either overriding
> the macros to make them work with mfn_t or using mfn_x/_mfn because the
> rest of the function uses mfn_t.
> 
> So make __page_to_mfn and __mfn_to_page return mfn_t by default.
> 
> Only reasonable clean-ups are done in this patch because it is
> already quite big. So some of the files now override page_to_mfn and
> mfn_to_page to avoid using mfn_t.
> 
> Lastly, domain_page_to_mfn is also converted to use mfn_t given that
> most of the callers are now switched to _mfn(domain_page_to_mfn(...)).
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2017-10-10  5:18 UTC | newest]

Thread overview: 27+ messages
2017-10-05 17:42 [PATCH v2 0/9] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
2017-10-05 17:42 ` [PATCH v2 1/9] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
2017-10-05 17:42 ` [PATCH v2 2/9] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash Julien Grall
2017-10-05 17:42 ` [PATCH v2 3/9] xen/x86: mem_sharing: Use copy_domain_page in __mem_sharing_unshare_page Julien Grall
2017-10-05 17:49   ` Andrew Cooper
2017-10-05 17:52   ` Tamas K Lengyel
2017-10-05 17:42 ` [PATCH v2 4/9] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT Julien Grall
2017-10-05 17:42 ` [PATCH v2 5/9] xen/kimage: Remove defined but unused variables Julien Grall
2017-10-05 17:42 ` [PATCH v2 6/9] xen/kexec, kimage: Convert kexec and kimage to use typesafe mfn_t Julien Grall
2017-10-05 17:51   ` Andrew Cooper
2017-10-05 17:42 ` [PATCH v2 7/9] xen/xenoprof: Convert the file to use typesafe MFN Julien Grall
2017-10-05 17:42 ` [PATCH v2 8/9] xen/tmem: Convert the file common/tmem_xen.c " Julien Grall
2017-10-05 17:42 ` [PATCH v2 9/9] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
2017-10-05 17:59   ` Andrew Cooper
2017-10-05 18:31   ` Razvan Cojocaru
2017-10-06  9:11   ` Paul Durrant
2017-10-06 10:49     ` Julien Grall
2017-10-06 10:55       ` Paul Durrant
2017-10-06 11:00         ` Julien Grall
2017-10-06 11:44           ` Paul Durrant
2017-10-06 13:02   ` Boris Ostrovsky
2017-10-09 11:42   ` Jan Beulich
2017-10-09 12:20     ` Julien Grall
2017-10-09 13:40       ` Jan Beulich
2017-10-09 13:48         ` Julien Grall
2017-10-09 14:07           ` Jan Beulich
2017-10-10  5:18   ` Tian, Kevin
