xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion
@ 2020-03-22 16:14 julien
  2020-03-22 16:14 ` [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN julien
                   ` (16 more replies)
  0 siblings, 17 replies; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, julien, Jun Nakajima, Wei Liu,
	Konrad Rzeszutek Wilk, Andrew Cooper, Julien Grall, Paul Durrant,
	Ian Jackson, George Dunlap, Tim Deegan, Ross Lagerwall,
	Tamas K Lengyel, Lukasz Hawrylko, Jan Beulich, Volodymyr Babchuk,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Hi all,

This is a collection of patches I sent over the last year but never
took the opportunity to respin. There are a few new ones as well.

I have a couple of patches that also rename fields in the public interface
to reflect what they are supposed to contain (e.g. storing a GFN in a GFN
field rather than an MFN). I will send them separately once I have done
more build testing.

Cheers,

Julien Grall (17):
  xen/x86: Introduce helpers to generate/convert the CR3 from/to a
    MFN/GFN
  xen/x86_64: Convert do_page_walk() to use typesafe MFN
  xen/mm: Move the MM types in a separate header
  xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN
  xen/x86: Remove the non-typesafe version of pagetable_* helpers
  xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn'
  xen/x86: traps: Convert __page_fault_type() to use typesafe MFN
  xen/x86: traps: Convert show_page_walk() to use typesafe MFN
  xen/x86: Reduce the number of use of l*e_{from, get}_pfn()
  xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version
  xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga()
  xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m()
  xen/x86: p2m: Reflow P2M_PRINTK()s in p2m_pt_audit_p2m()
  xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline
    function
  xen/x86: p2m: Rework printk format in audit_p2m()
  xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN
  xen: Switch parameter in get_page_from_gfn to use typesafe gfn

 xen/arch/arm/acpi/domain_build.c     |   4 -
 xen/arch/arm/alternative.c           |   4 -
 xen/arch/arm/cpuerrata.c             |   4 -
 xen/arch/arm/domain_build.c          |   4 -
 xen/arch/arm/guestcopy.c             |   2 +-
 xen/arch/arm/livepatch.c             |   4 -
 xen/arch/arm/mm.c                    |  10 +-
 xen/arch/x86/cpu/mcheck/mcaction.c   |   2 +-
 xen/arch/x86/cpu/vpmu.c              |   2 +-
 xen/arch/x86/domain.c                |  22 ++--
 xen/arch/x86/domain_page.c           |  10 +-
 xen/arch/x86/domctl.c                |  12 +--
 xen/arch/x86/hvm/dm.c                |   2 +-
 xen/arch/x86/hvm/dom0_build.c        |  20 ++--
 xen/arch/x86/hvm/domain.c            |   6 +-
 xen/arch/x86/hvm/hvm.c               |   9 +-
 xen/arch/x86/hvm/svm/svm.c           |   8 +-
 xen/arch/x86/hvm/viridian/viridian.c |  16 +--
 xen/arch/x86/hvm/vmx/vmcs.c          |   2 +-
 xen/arch/x86/hvm/vmx/vmx.c           |   6 +-
 xen/arch/x86/hvm/vmx/vvmx.c          |  14 +--
 xen/arch/x86/machine_kexec.c         |   2 +-
 xen/arch/x86/mm.c                    | 142 ++++++++++++------------
 xen/arch/x86/mm/hap/hap.c            |   2 +-
 xen/arch/x86/mm/hap/nested_ept.c     |   2 +-
 xen/arch/x86/mm/mem_sharing.c        |  20 ++--
 xen/arch/x86/mm/p2m-ept.c            |   2 +-
 xen/arch/x86/mm/p2m-pod.c            |   4 +-
 xen/arch/x86/mm/p2m-pt.c             |  39 ++++---
 xen/arch/x86/mm/p2m.c                |  71 ++++++------
 xen/arch/x86/mm/paging.c             |   4 +-
 xen/arch/x86/mm/shadow/hvm.c         |   6 +-
 xen/arch/x86/mm/shadow/multi.c       |  24 ++---
 xen/arch/x86/numa.c                  |   8 +-
 xen/arch/x86/physdev.c               |   3 +-
 xen/arch/x86/pv/descriptor-tables.c  |   6 +-
 xen/arch/x86/pv/dom0_build.c         |  20 ++--
 xen/arch/x86/pv/emul-priv-op.c       |   6 +-
 xen/arch/x86/pv/grant_table.c        |   4 +-
 xen/arch/x86/pv/mm.c                 |   2 +-
 xen/arch/x86/pv/shim.c               |   3 -
 xen/arch/x86/setup.c                 |  12 +--
 xen/arch/x86/smpboot.c               |   4 +-
 xen/arch/x86/srat.c                  |   2 +-
 xen/arch/x86/tboot.c                 |   4 +-
 xen/arch/x86/traps.c                 |  42 ++++----
 xen/arch/x86/x86_64/mm.c             |  39 +++----
 xen/arch/x86/x86_64/traps.c          |  42 ++++----
 xen/common/domain.c                  |   2 +-
 xen/common/domctl.c                  |   3 +-
 xen/common/efi/boot.c                |   7 +-
 xen/common/event_fifo.c              |  12 +--
 xen/common/grant_table.c             |   8 +-
 xen/common/memory.c                  |   4 +-
 xen/common/page_alloc.c              |  20 ++--
 xen/common/trace.c                   |  19 ++--
 xen/common/xenoprof.c                |   4 -
 xen/drivers/acpi/osl.c               |   2 +-
 xen/include/asm-arm/mm.h             |  16 +--
 xen/include/asm-arm/p2m.h            |   6 +-
 xen/include/asm-x86/grant_table.h    |   6 +-
 xen/include/asm-x86/mm.h             |  55 +++++++---
 xen/include/asm-x86/p2m.h            |  14 ++-
 xen/include/asm-x86/page.h           |  27 +++--
 xen/include/xen/domain_page.h        |   6 +-
 xen/include/xen/mm.h                 | 134 +----------------------
 xen/include/xen/mm_types.h           | 155 +++++++++++++++++++++++++++
 67 files changed, 598 insertions(+), 580 deletions(-)
 create mode 100644 xen/include/xen/mm_types.h

-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
@ 2020-03-22 16:14 ` julien
  2020-03-25 14:46   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 02/17] xen/x86_64: Convert do_page_walk() to use typesafe MFN julien
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Introduce handy helpers to generate/convert the CR3 from/to an MFN/GFN.

Note that we use cr3_pa() rather than xen_cr3_to_pfn() because the
latter does not ignore the top 12 bits.

Take the opportunity to use the new helpers when possible.
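
To make the note above concrete, here is a minimal caller-side sketch
(illustration only, assuming xen_cr3_to_pfn() is a plain shift that
keeps the top 12 bits, while cr3_pa() masks CR3 down to its address
field before the conversion):

    unsigned long cr3 = c.nat->ctrlreg[3];  /* any raw CR3 value */

    /* Old style: a shift only, so set top bits leak into the MFN. */
    mfn_t a = _mfn(xen_cr3_to_pfn(cr3));

    /* New helper: equivalent to maddr_to_mfn(cr3_pa(cr3)). */
    mfn_t b = cr3_to_mfn(cr3);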

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/domain.c    |  4 ++--
 xen/arch/x86/mm.c        |  2 +-
 xen/include/asm-x86/mm.h | 20 ++++++++++++++++++++
 3 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index caf2ecad7e..15750ce210 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1096,7 +1096,7 @@ int arch_set_info_guest(
     set_bit(_VPF_in_reset, &v->pause_flags);
 
     if ( !compat )
-        cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[3]));
+        cr3_mfn = cr3_to_mfn(c.nat->ctrlreg[3]);
     else
         cr3_mfn = _mfn(compat_cr3_to_pfn(c.cmp->ctrlreg[3]));
     cr3_page = get_page_from_mfn(cr3_mfn, d);
@@ -1142,7 +1142,7 @@ int arch_set_info_guest(
         v->arch.guest_table = pagetable_from_page(cr3_page);
         if ( c.nat->ctrlreg[1] )
         {
-            cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[1]));
+            cr3_mfn = cr3_to_mfn(c.nat->ctrlreg[1]);
             cr3_page = get_page_from_mfn(cr3_mfn, d);
 
             if ( !cr3_page )
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 62507ca651..069a61deb8 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -509,7 +509,7 @@ void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
 
-    v->arch.cr3 = mfn_x(mfn) << PAGE_SHIFT;
+    v->arch.cr3 = mfn_to_cr3(mfn);
     if ( is_pv_domain(d) && d->arch.pv.pcid )
         v->arch.cr3 |= get_pcid_bits(v, false);
 }
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index a06b2fb81f..9764362a38 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -524,6 +524,26 @@ extern struct rangeset *mmio_ro_ranges;
 #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
 #define compat_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
 
+static inline unsigned long mfn_to_cr3(mfn_t mfn)
+{
+    return xen_pfn_to_cr3(mfn_x(mfn));
+}
+
+static inline mfn_t cr3_to_mfn(unsigned long cr3)
+{
+    return maddr_to_mfn(cr3_pa(cr3));
+}
+
+static inline unsigned long gfn_to_cr3(gfn_t gfn)
+{
+    return xen_pfn_to_cr3(gfn_x(gfn));
+}
+
+static inline gfn_t cr3_to_gfn(unsigned long cr3)
+{
+    return gaddr_to_gfn(cr3_pa(cr3));
+}
+
 #ifdef MEMORY_GUARD
 void memguard_guard_range(void *p, unsigned long l);
 void memguard_unguard_range(void *p, unsigned long l);
-- 
2.17.1



* [Xen-devel] [PATCH 02/17] xen/x86_64: Convert do_page_walk() to use typesafe MFN
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
  2020-03-22 16:14 ` [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN julien
@ 2020-03-22 16:14 ` julien
  2020-03-25 14:51   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header julien
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

No functional changes intended.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/x86_64/mm.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index b7ce833ffc..3516423bb0 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -46,7 +46,7 @@ l2_pgentry_t *compat_idle_pg_table_l2;
 
 void *do_page_walk(struct vcpu *v, unsigned long addr)
 {
-    unsigned long mfn = pagetable_get_pfn(v->arch.guest_table);
+    mfn_t mfn = pagetable_get_mfn(v->arch.guest_table);
     l4_pgentry_t l4e, *l4t;
     l3_pgentry_t l3e, *l3t;
     l2_pgentry_t l2e, *l2t;
@@ -55,7 +55,7 @@ void *do_page_walk(struct vcpu *v, unsigned long addr)
     if ( !is_pv_vcpu(v) || !is_canonical_address(addr) )
         return NULL;
 
-    l4t = map_domain_page(_mfn(mfn));
+    l4t = map_domain_page(mfn);
     l4e = l4t[l4_table_offset(addr)];
     unmap_domain_page(l4t);
     if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
@@ -64,36 +64,36 @@ void *do_page_walk(struct vcpu *v, unsigned long addr)
     l3t = map_l3t_from_l4e(l4e);
     l3e = l3t[l3_table_offset(addr)];
     unmap_domain_page(l3t);
-    mfn = l3e_get_pfn(l3e);
-    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || !mfn_valid(_mfn(mfn)) )
+    mfn = l3e_get_mfn(l3e);
+    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || !mfn_valid(mfn) )
         return NULL;
     if ( (l3e_get_flags(l3e) & _PAGE_PSE) )
     {
-        mfn += PFN_DOWN(addr & ((1UL << L3_PAGETABLE_SHIFT) - 1));
+        mfn = mfn_add(mfn, PFN_DOWN(addr & ((1UL << L3_PAGETABLE_SHIFT) - 1)));
         goto ret;
     }
 
-    l2t = map_domain_page(_mfn(mfn));
+    l2t = map_domain_page(mfn);
     l2e = l2t[l2_table_offset(addr)];
     unmap_domain_page(l2t);
-    mfn = l2e_get_pfn(l2e);
-    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || !mfn_valid(_mfn(mfn)) )
+    mfn = l2e_get_mfn(l2e);
+    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || !mfn_valid(mfn) )
         return NULL;
     if ( (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
-        mfn += PFN_DOWN(addr & ((1UL << L2_PAGETABLE_SHIFT) - 1));
+        mfn = mfn_add(mfn, PFN_DOWN(addr & ((1UL << L2_PAGETABLE_SHIFT) - 1)));
         goto ret;
     }
 
-    l1t = map_domain_page(_mfn(mfn));
+    l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(addr)];
     unmap_domain_page(l1t);
-    mfn = l1e_get_pfn(l1e);
-    if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || !mfn_valid(_mfn(mfn)) )
+    mfn = l1e_get_mfn(l1e);
+    if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || !mfn_valid(mfn) )
         return NULL;
 
  ret:
-    return map_domain_page(_mfn(mfn)) + (addr & ~PAGE_MASK);
+    return map_domain_page(mfn) + (addr & ~PAGE_MASK);
 }
 
 /*
-- 
2.17.1



* [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
  2020-03-22 16:14 ` [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN julien
  2020-03-22 16:14 ` [Xen-devel] [PATCH 02/17] xen/x86_64: Convert do_page_walk() to use typesafe MFN julien
@ 2020-03-22 16:14 ` julien
  2020-03-25 15:00   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN julien
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien, Wei Liu, Andrew Cooper, Julien Grall,
	Ian Jackson, George Dunlap, Jan Beulich

From: Julien Grall <jgrall@amazon.com>

It is getting incredibly difficult to use the typesafe GFN/MFN/PFN
types in the headers because of circular dependencies. For instance,
asm-x86/page.h cannot include xen/mm.h.

In order to convert more code to the typesafe types, they are now moved
into a separate header that requires only a few dependencies.
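
For context on what is being moved: TYPE_SAFE() is the macro that
generates mfn_t/gfn_t/pfn_t and their wrappers. A rough sketch (from
memory, not the verbatim xen/typesafe.h definition) of what
TYPE_SAFE(unsigned long, mfn) provides in debug builds:

    typedef struct { unsigned long mfn; } mfn_t;
    static inline mfn_t _mfn(unsigned long n) { return (mfn_t) { n }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

In non-debug builds mfn_t degrades to a plain unsigned long, so the
wrappers cost nothing; the struct form is what turns mixing an mfn_t
with a gfn_t into a compile-time error.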

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/mm.h       | 134 +-------------------------------
 xen/include/xen/mm_types.h | 155 +++++++++++++++++++++++++++++++++++++
 2 files changed, 156 insertions(+), 133 deletions(-)
 create mode 100644 xen/include/xen/mm_types.h

diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index d0d095d9c7..4337303f99 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -1,50 +1,7 @@
 /******************************************************************************
  * include/xen/mm.h
  *
- * Definitions for memory pages, frame numbers, addresses, allocations, etc.
- *
  * Copyright (c) 2002-2006, K A Fraser <keir@xensource.com>
- *
- *                         +---------------------+
- *                          Xen Memory Management
- *                         +---------------------+
- *
- * Xen has to handle many different address spaces.  It is important not to
- * get these spaces mixed up.  The following is a consistent terminology which
- * should be adhered to.
- *
- * mfn: Machine Frame Number
- *   The values Xen puts into its own pagetables.  This is the host physical
- *   memory address space with RAM, MMIO etc.
- *
- * gfn: Guest Frame Number
- *   The values a guest puts in its own pagetables.  For an auto-translated
- *   guest (hardware assisted with 2nd stage translation, or shadowed), gfn !=
- *   mfn.  For a non-translated guest which is aware of Xen, gfn == mfn.
- *
- * pfn: Pseudophysical Frame Number
- *   A linear idea of a guest physical address space. For an auto-translated
- *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
- *
- * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
- *   The linear frame numbers of device DMA address space. All initiators for
- *   (i.e. all devices assigned to) a guest share a single DMA address space
- *   and, by default, Xen will ensure dfn == pfn.
- *
- * WARNING: Some of these terms have changed over time while others have been
- * used inconsistently, meaning that a lot of existing code does not match the
- * definitions above.  New code should use these terms as described here, and
- * over time older code should be corrected to be consistent.
- *
- * An incomplete list of larger work area:
- * - Phase out the use of 'pfn' from the x86 pagetable code.  Callers should
- *   know explicitly whether they are talking about mfns or gfns.
- * - Phase out the use of 'pfn' from the ARM mm code.  A cursory glance
- *   suggests that 'mfn' and 'pfn' are currently used interchangeably, where
- *   'mfn' is the appropriate term to use.
- * - Phase out the use of gpfn/gmfn where pfn/mfn are meant.  This excludes
- *   the x86 shadow code, which uses gmfn/smfn pairs with different,
- *   documented, meanings.
  */
 
 #ifndef __XEN_MM_H__
@@ -54,100 +11,11 @@
 #include <xen/types.h>
 #include <xen/list.h>
 #include <xen/spinlock.h>
-#include <xen/typesafe.h>
 #include <xen/kernel.h>
+#include <xen/mm_types.h>
 #include <xen/perfc.h>
 #include <public/memory.h>
 
-TYPE_SAFE(unsigned long, mfn);
-#define PRI_mfn          "05lx"
-#define INVALID_MFN      _mfn(~0UL)
-/*
- * To be used for global variable initialization. This workaround a bug
- * in GCC < 5.0.
- */
-#define INVALID_MFN_INITIALIZER { ~0UL }
-
-#ifndef mfn_t
-#define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
-#define _mfn
-#define mfn_x
-#undef mfn_t
-#undef _mfn
-#undef mfn_x
-#endif
-
-static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
-{
-    return _mfn(mfn_x(mfn) + i);
-}
-
-static inline mfn_t mfn_max(mfn_t x, mfn_t y)
-{
-    return _mfn(max(mfn_x(x), mfn_x(y)));
-}
-
-static inline mfn_t mfn_min(mfn_t x, mfn_t y)
-{
-    return _mfn(min(mfn_x(x), mfn_x(y)));
-}
-
-static inline bool_t mfn_eq(mfn_t x, mfn_t y)
-{
-    return mfn_x(x) == mfn_x(y);
-}
-
-TYPE_SAFE(unsigned long, gfn);
-#define PRI_gfn          "05lx"
-#define INVALID_GFN      _gfn(~0UL)
-/*
- * To be used for global variable initialization. This workaround a bug
- * in GCC < 5.0 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64856
- */
-#define INVALID_GFN_INITIALIZER { ~0UL }
-
-#ifndef gfn_t
-#define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
-#define _gfn
-#define gfn_x
-#undef gfn_t
-#undef _gfn
-#undef gfn_x
-#endif
-
-static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
-{
-    return _gfn(gfn_x(gfn) + i);
-}
-
-static inline gfn_t gfn_max(gfn_t x, gfn_t y)
-{
-    return _gfn(max(gfn_x(x), gfn_x(y)));
-}
-
-static inline gfn_t gfn_min(gfn_t x, gfn_t y)
-{
-    return _gfn(min(gfn_x(x), gfn_x(y)));
-}
-
-static inline bool_t gfn_eq(gfn_t x, gfn_t y)
-{
-    return gfn_x(x) == gfn_x(y);
-}
-
-TYPE_SAFE(unsigned long, pfn);
-#define PRI_pfn          "05lx"
-#define INVALID_PFN      (~0UL)
-
-#ifndef pfn_t
-#define pfn_t /* Grep fodder: pfn_t, _pfn() and pfn_x() are defined above */
-#define _pfn
-#define pfn_x
-#undef pfn_t
-#undef _pfn
-#undef pfn_x
-#endif
-
 struct page_info;
 
 void put_page(struct page_info *);
diff --git a/xen/include/xen/mm_types.h b/xen/include/xen/mm_types.h
new file mode 100644
index 0000000000..f14359f571
--- /dev/null
+++ b/xen/include/xen/mm_types.h
@@ -0,0 +1,155 @@
+/******************************************************************************
+ * include/xen/mm_types.h
+ *
+ * Definitions for memory pages, frame numbers, addresses, allocations, etc.
+ *
+ * Copyright (c) 2002-2006, K A Fraser <keir@xensource.com>
+ *
+ *                         +---------------------+
+ *                          Xen Memory Management
+ *                         +---------------------+
+ *
+ * Xen has to handle many different address spaces.  It is important not to
+ * get these spaces mixed up.  The following is a consistent terminology which
+ * should be adhered to.
+ *
+ * mfn: Machine Frame Number
+ *   The values Xen puts into its own pagetables.  This is the host physical
+ *   memory address space with RAM, MMIO etc.
+ *
+ * gfn: Guest Frame Number
+ *   The values a guest puts in its own pagetables.  For an auto-translated
+ *   guest (hardware assisted with 2nd stage translation, or shadowed), gfn !=
+ *   mfn.  For a non-translated guest which is aware of Xen, gfn == mfn.
+ *
+ * pfn: Pseudophysical Frame Number
+ *   A linear idea of a guest physical address space. For an auto-translated
+ *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
+ *
+ * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
+ *   The linear frame numbers of device DMA address space. All initiators for
+ *   (i.e. all devices assigned to) a guest share a single DMA address space
+ *   and, by default, Xen will ensure dfn == pfn.
+ *
+ * WARNING: Some of these terms have changed over time while others have been
+ * used inconsistently, meaning that a lot of existing code does not match the
+ * definitions above.  New code should use these terms as described here, and
+ * over time older code should be corrected to be consistent.
+ *
+ * An incomplete list of larger work area:
+ * - Phase out the use of 'pfn' from the x86 pagetable code.  Callers should
+ *   know explicitly whether they are talking about mfns or gfns.
+ * - Phase out the use of 'pfn' from the ARM mm code.  A cursory glance
+ *   suggests that 'mfn' and 'pfn' are currently used interchangeably, where
+ *   'mfn' is the appropriate term to use.
+ * - Phase out the use of gpfn/gmfn where pfn/mfn are meant.  This excludes
+ *   the x86 shadow code, which uses gmfn/smfn pairs with different,
+ *   documented, meanings.
+ */
+
+#ifndef __XEN_MM_TYPES_H__
+#define __XEN_MM_TYPES_H__
+
+#include <xen/typesafe.h>
+#include <xen/kernel.h>
+
+TYPE_SAFE(unsigned long, mfn);
+#define PRI_mfn          "05lx"
+#define INVALID_MFN      _mfn(~0UL)
+/*
+ * To be used for global variable initialization. This workaround a bug
+ * in GCC < 5.0.
+ */
+#define INVALID_MFN_INITIALIZER { ~0UL }
+
+#ifndef mfn_t
+#define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
+#define _mfn
+#define mfn_x
+#undef mfn_t
+#undef _mfn
+#undef mfn_x
+#endif
+
+static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
+{
+    return _mfn(mfn_x(mfn) + i);
+}
+
+static inline mfn_t mfn_max(mfn_t x, mfn_t y)
+{
+    return _mfn(max(mfn_x(x), mfn_x(y)));
+}
+
+static inline mfn_t mfn_min(mfn_t x, mfn_t y)
+{
+    return _mfn(min(mfn_x(x), mfn_x(y)));
+}
+
+static inline bool_t mfn_eq(mfn_t x, mfn_t y)
+{
+    return mfn_x(x) == mfn_x(y);
+}
+
+TYPE_SAFE(unsigned long, gfn);
+#define PRI_gfn          "05lx"
+#define INVALID_GFN      _gfn(~0UL)
+/*
+ * To be used for global variable initialization. This workaround a bug
+ * in GCC < 5.0 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64856
+ */
+#define INVALID_GFN_INITIALIZER { ~0UL }
+
+#ifndef gfn_t
+#define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
+#define _gfn
+#define gfn_x
+#undef gfn_t
+#undef _gfn
+#undef gfn_x
+#endif
+
+static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
+{
+    return _gfn(gfn_x(gfn) + i);
+}
+
+static inline gfn_t gfn_max(gfn_t x, gfn_t y)
+{
+    return _gfn(max(gfn_x(x), gfn_x(y)));
+}
+
+static inline gfn_t gfn_min(gfn_t x, gfn_t y)
+{
+    return _gfn(min(gfn_x(x), gfn_x(y)));
+}
+
+static inline bool_t gfn_eq(gfn_t x, gfn_t y)
+{
+    return gfn_x(x) == gfn_x(y);
+}
+
+TYPE_SAFE(unsigned long, pfn);
+#define PRI_pfn          "05lx"
+#define INVALID_PFN      (~0UL)
+
+#ifndef pfn_t
+#define pfn_t /* Grep fodder: pfn_t, _pfn() and pfn_x() are defined above */
+#define _pfn
+#define pfn_x
+#undef pfn_t
+#undef _pfn
+#undef pfn_x
+#endif
+
+#endif /* __XEN_MM_TYPES_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



* [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (2 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header julien
@ 2020-03-22 16:14 ` julien
  2020-03-25 15:27   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers julien
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Julien Grall, Ian Jackson, George Dunlap,
	Ross Lagerwall, Lukasz Hawrylko, Jan Beulich, Volodymyr Babchuk,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Most of Xen now either overrides the helpers virt_to_mfn() and
mfn_to_virt() to use typesafe MFN or uses mfn_x() to remove the
typesafety when calling the helpers.

Therefore it is time to switch the two helpers to use typesafe MFN and
to completely remove the possibility of making them unsafe, by dropping
the double-underscore versions.

Places that were still using non-typesafe MFN have either been
converted to the typesafe version (where the changes are simple) or now
use _mfn()/mfn_x() until the rest of the code is converted.

There are a couple of noteworthy changes in the code:
    - pvh_populate_p2m() was storing the MFN in a variable called
      'addr'. This has now been renamed to 'mfn'.
    - allocate_cachealigned_memnodemap() was storing an address in a
      variable called 'mfn'. The code has been reworked to avoid
      repurposing the variable.

No functional changes intended.
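
For reviewers, the resulting caller-side pattern looks roughly like
this (minimal sketch; 'ptr' is just a placeholder for a Xen-heap
virtual address):

    mfn_t mfn = virt_to_mfn(ptr);   /* the helper now returns mfn_t */
    void *va  = mfn_to_virt(mfn);   /* ... and now takes mfn_t      */

    /* Callers still needing a raw frame number unwrap it explicitly: */
    unsigned long raw = mfn_x(virt_to_mfn(ptr));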

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/acpi/domain_build.c    |  4 ----
 xen/arch/arm/alternative.c          |  4 ----
 xen/arch/arm/cpuerrata.c            |  4 ----
 xen/arch/arm/domain_build.c         |  4 ----
 xen/arch/arm/livepatch.c            |  4 ----
 xen/arch/arm/mm.c                   |  8 +-------
 xen/arch/x86/domain_page.c          | 10 +++++-----
 xen/arch/x86/hvm/dom0_build.c       | 20 ++++++++++---------
 xen/arch/x86/mm.c                   | 30 +++++++++++++----------------
 xen/arch/x86/numa.c                 |  8 +++-----
 xen/arch/x86/pv/descriptor-tables.c |  2 +-
 xen/arch/x86/pv/dom0_build.c        |  4 ++--
 xen/arch/x86/pv/shim.c              |  3 ---
 xen/arch/x86/setup.c                | 10 +++++-----
 xen/arch/x86/smpboot.c              |  4 ++--
 xen/arch/x86/srat.c                 |  2 +-
 xen/arch/x86/tboot.c                |  4 ++--
 xen/arch/x86/traps.c                |  4 ++--
 xen/arch/x86/x86_64/mm.c            | 13 +++++++------
 xen/common/domctl.c                 |  3 ++-
 xen/common/efi/boot.c               |  7 ++++---
 xen/common/grant_table.c            |  8 ++++----
 xen/common/page_alloc.c             | 18 ++++++++---------
 xen/common/trace.c                  | 19 +++++++++---------
 xen/common/xenoprof.c               |  4 ----
 xen/drivers/acpi/osl.c              |  2 +-
 xen/include/asm-arm/mm.h            | 14 +++-----------
 xen/include/asm-x86/grant_table.h   |  4 ++--
 xen/include/asm-x86/mm.h            |  2 +-
 xen/include/asm-x86/page.h          |  6 ++----
 xen/include/xen/domain_page.h       |  6 +++---
 31 files changed, 96 insertions(+), 139 deletions(-)

diff --git a/xen/arch/arm/acpi/domain_build.c b/xen/arch/arm/acpi/domain_build.c
index 1b1cfabb00..b3ac32f601 100644
--- a/xen/arch/arm/acpi/domain_build.c
+++ b/xen/arch/arm/acpi/domain_build.c
@@ -20,10 +20,6 @@
 #include <asm/kernel.h>
 #include <asm/domain_build.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-
 #define ACPI_DOM0_FDT_MIN_SIZE 4096
 
 static int __init acpi_iomem_deny_access(struct domain *d)
diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index 237c4e5642..724b0b187e 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -32,10 +32,6 @@
 #include <asm/insn.h>
 #include <asm/page.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-
 extern const struct alt_instr __alt_instructions[], __alt_instructions_end[];
 
 struct alt_region {
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 0248893de0..68105fe91f 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -14,10 +14,6 @@
 #include <asm/insn.h>
 #include <asm/psci.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-
 /* Hardening Branch predictor code for Arm64 */
 #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4307087536..5c9a55f084 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -52,10 +52,6 @@ struct map_range_data
     p2m_type_t p2mt;
 };
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-
 //#define DEBUG_11_ALLOCATION
 #ifdef DEBUG_11_ALLOCATION
 # define D11PRINT(fmt, args...) printk(XENLOG_DEBUG fmt, ##args)
diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index 915e9d926a..0ffdda6005 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -12,10 +12,6 @@
 #include <asm/livepatch.h>
 #include <asm/mm.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-
 void *vmap_of_xen_text;
 
 int arch_livepatch_safety_check(void)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 727107eefa..1075e5fcaf 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -43,12 +43,6 @@
 
 #include <asm/setup.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-#undef mfn_to_virt
-#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn))
-
 #ifdef NDEBUG
 static inline void
 __attribute__ ((__format__ (__printf__, 1, 2)))
@@ -835,7 +829,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
      * Virtual address aligned to previous 1GB to match physical
      * address alignment done above.
      */
-    vaddr = (vaddr_t)__mfn_to_virt(base_mfn) & FIRST_MASK;
+    vaddr = (vaddr_t)mfn_to_virt(_mfn(base_mfn)) & FIRST_MASK;
 
     while ( mfn < end_mfn )
     {
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index dd32712d2f..8b8bf4cbe8 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -78,17 +78,17 @@ void *map_domain_page(mfn_t mfn)
 
 #ifdef NDEBUG
     if ( mfn_x(mfn) <= PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) )
-        return mfn_to_virt(mfn_x(mfn));
+        return mfn_to_virt(mfn);
 #endif
 
     v = mapcache_current_vcpu();
     if ( !v || !is_pv_vcpu(v) )
-        return mfn_to_virt(mfn_x(mfn));
+        return mfn_to_virt(mfn);
 
     dcache = &v->domain->arch.pv.mapcache;
     vcache = &v->arch.pv.mapcache;
     if ( !dcache->inuse )
-        return mfn_to_virt(mfn_x(mfn));
+        return mfn_to_virt(mfn);
 
     perfc_incr(map_domain_page_count);
 
@@ -311,7 +311,7 @@ void *map_domain_page_global(mfn_t mfn)
 
 #ifdef NDEBUG
     if ( mfn_x(mfn) <= PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) )
-        return mfn_to_virt(mfn_x(mfn));
+        return mfn_to_virt(mfn);
 #endif
 
     return vmap(&mfn, 1);
@@ -336,7 +336,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
     const l1_pgentry_t *pl1e;
 
     if ( va >= DIRECTMAP_VIRT_START )
-        return _mfn(virt_to_mfn(ptr));
+        return virt_to_mfn(ptr);
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
     {
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 2afd44c8a4..143b7e0a3c 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -444,31 +444,32 @@ static int __init pvh_populate_p2m(struct domain *d)
     /* Populate memory map. */
     for ( i = 0; i < d->arch.nr_e820; i++ )
     {
-        unsigned long addr, size;
+        mfn_t mfn;
+        unsigned long size;
 
         if ( d->arch.e820[i].type != E820_RAM )
             continue;
 
-        addr = PFN_DOWN(d->arch.e820[i].addr);
+        mfn = maddr_to_mfn(d->arch.e820[i].addr);
         size = PFN_DOWN(d->arch.e820[i].size);
 
-        rc = pvh_populate_memory_range(d, addr, size);
+        rc = pvh_populate_memory_range(d, mfn_x(mfn), size);
         if ( rc )
             return rc;
 
-        if ( addr < MB1_PAGES )
+        if ( mfn_x(mfn) < MB1_PAGES )
         {
             uint64_t end = min_t(uint64_t, MB(1),
                                  d->arch.e820[i].addr + d->arch.e820[i].size);
             enum hvm_translation_result res =
-                 hvm_copy_to_guest_phys(mfn_to_maddr(_mfn(addr)),
-                                        mfn_to_virt(addr),
+                 hvm_copy_to_guest_phys(mfn_to_maddr(mfn),
+                                        mfn_to_virt(mfn),
                                         d->arch.e820[i].addr - end,
                                         v);
 
             if ( res != HVMTRANS_okay )
-                printk("Failed to copy [%#lx, %#lx): %d\n",
-                       addr, addr + size, res);
+                printk("Failed to copy [%"PRI_mfn", %"PRI_mfn"): %d\n",
+                       mfn_x(mfn), mfn_x(mfn_add(mfn, size)), res);
         }
     }
 
@@ -607,7 +608,8 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image,
 
     if ( initrd != NULL )
     {
-        rc = hvm_copy_to_guest_phys(last_addr, mfn_to_virt(initrd->mod_start),
+        rc = hvm_copy_to_guest_phys(last_addr,
+                                    mfn_to_virt(_mfn(initrd->mod_start)),
                                     initrd->mod_end, v);
         if ( rc )
         {
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 069a61deb8..7c0f81759a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -152,10 +152,6 @@
 #include "pv/mm.h"
 #endif
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(v) _mfn(__virt_to_mfn(v))
-
 /* Mapping of the fixmap space needed early. */
 l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
     l1_fixmap[L1_PAGETABLE_ENTRIES];
@@ -323,8 +319,8 @@ void __init arch_init_memory(void)
         iostart_pfn = max_t(unsigned long, pfn, 1UL << (20 - PAGE_SHIFT));
         ioend_pfn = min(rstart_pfn, 16UL << (20 - PAGE_SHIFT));
         if ( iostart_pfn < ioend_pfn )
-            destroy_xen_mappings((unsigned long)mfn_to_virt(iostart_pfn),
-                                 (unsigned long)mfn_to_virt(ioend_pfn));
+            destroy_xen_mappings((unsigned long)mfn_to_virt(_mfn(iostart_pfn)),
+                                 (unsigned long)mfn_to_virt(_mfn(ioend_pfn)));
 
         /* Mark as I/O up to next RAM region. */
         for ( ; pfn < rstart_pfn; pfn++ )
@@ -785,21 +781,21 @@ bool is_iomem_page(mfn_t mfn)
     return (page_get_owner(page) == dom_io);
 }
 
-static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
+static int update_xen_mappings(mfn_t mfn, unsigned int cacheattr)
 {
     int err = 0;
-    bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
-         mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
+    bool alias = mfn_x(mfn) >= PFN_DOWN(xen_phys_start) &&
+         mfn_x(mfn) < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
     unsigned long xen_va =
-        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
+        XEN_VIRT_START + mfn_to_maddr(mfn_add(mfn, -PFN_DOWN(xen_phys_start)));
 
     if ( unlikely(alias) && cacheattr )
-        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0);
+        err = map_pages_to_xen(xen_va, mfn, 1, 0);
     if ( !err )
-        err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
+        err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), mfn, 1,
                      PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
     if ( unlikely(alias) && !cacheattr && !err )
-        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR);
+        err = map_pages_to_xen(xen_va, mfn, 1, PAGE_HYPERVISOR);
     return err;
 }
 
@@ -1029,7 +1025,7 @@ get_page_from_l1e(
             nx = (x & ~PGC_cacheattr_mask) | (cacheattr << PGC_cacheattr_base);
         } while ( (y = cmpxchg(&page->count_info, x, nx)) != x );
 
-        err = update_xen_mappings(mfn, cacheattr);
+        err = update_xen_mappings(_mfn(mfn), cacheattr);
         if ( unlikely(err) )
         {
             cacheattr = y & PGC_cacheattr_mask;
@@ -2449,7 +2445,7 @@ static int cleanup_page_mappings(struct page_info *page)
 
         BUG_ON(is_xen_heap_page(page));
 
-        rc = update_xen_mappings(mfn, 0);
+        rc = update_xen_mappings(_mfn(mfn), 0);
     }
 
     /*
@@ -4950,7 +4946,7 @@ void *alloc_xen_pagetable(void)
 {
     mfn_t mfn = alloc_xen_pagetable_new();
 
-    return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn_x(mfn));
+    return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn);
 }
 
 void free_xen_pagetable(void *v)
@@ -4983,7 +4979,7 @@ mfn_t alloc_xen_pagetable_new(void)
 void free_xen_pagetable_new(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
-        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
+        free_xenheap_page(mfn_to_virt(mfn));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index f1066c59c7..87f7365304 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -100,14 +100,12 @@ static int __init populate_memnodemap(const struct node *nodes,
 static int __init allocate_cachealigned_memnodemap(void)
 {
     unsigned long size = PFN_UP(memnodemapsize * sizeof(*memnodemap));
-    unsigned long mfn = mfn_x(alloc_boot_pages(size, 1));
+    mfn_t mfn = alloc_boot_pages(size, 1);
 
     memnodemap = mfn_to_virt(mfn);
-    mfn <<= PAGE_SHIFT;
-    size <<= PAGE_SHIFT;
     printk(KERN_DEBUG "NUMA: Allocated memnodemap from %lx - %lx\n",
-           mfn, mfn + size);
-    memnodemapsize = size / sizeof(*memnodemap);
+           mfn_to_maddr(mfn), mfn_to_maddr(mfn_add(mfn, size)));
+    memnodemapsize = (size << PAGE_SHIFT) / sizeof(*memnodemap);
 
     return 0;
 }
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index 940804b18a..f22beb1f3c 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -76,7 +76,7 @@ bool pv_destroy_ldt(struct vcpu *v)
 void pv_destroy_gdt(struct vcpu *v)
 {
     l1_pgentry_t *pl1e = pv_gdt_ptes(v);
-    mfn_t zero_mfn = _mfn(virt_to_mfn(zero_page));
+    mfn_t zero_mfn = virt_to_mfn(zero_page);
     l1_pgentry_t zero_l1e = l1e_from_mfn(zero_mfn, __PAGE_HYPERVISOR_RO);
     unsigned int i;
 
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 5678da782d..30846b5f97 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -523,7 +523,7 @@ int __init dom0_construct_pv(struct domain *d,
                     free_domheap_pages(page, order);
                     page += 1UL << order;
                 }
-            memcpy(page_to_virt(page), mfn_to_virt(initrd->mod_start),
+            memcpy(page_to_virt(page), mfn_to_virt(_mfn(initrd->mod_start)),
                    initrd_len);
             mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT;
             init_domheap_pages(mpt_alloc,
@@ -601,7 +601,7 @@ int __init dom0_construct_pv(struct domain *d,
         maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l4_page_table;
         l4start = l4tab = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
         clear_page(l4tab);
-        init_xen_l4_slots(l4tab, _mfn(virt_to_mfn(l4start)),
+        init_xen_l4_slots(l4tab, virt_to_mfn(l4start),
                           d, INVALID_MFN, true);
         v->arch.guest_table = pagetable_from_paddr(__pa(l4start));
     }
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index ed2ece8a8a..b849c60699 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -39,9 +39,6 @@
 
 #include <compat/grant_table.h>
 
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-
 #ifdef CONFIG_PV_SHIM_EXCLUSIVE
 /* Tolerate "pv-shim" being passed to a CONFIG_PV_SHIM_EXCLUSIVE hypervisor. */
 ignore_param("pv-shim");
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 885919d5c3..cfe95c5dac 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -340,7 +340,7 @@ void *__init bootstrap_map(const module_t *mod)
     void *ret;
 
     if ( system_state != SYS_STATE_early_boot )
-        return mod ? mfn_to_virt(mod->mod_start) : NULL;
+        return mod ? mfn_to_virt(_mfn(mod->mod_start)) : NULL;
 
     if ( !mod )
     {
@@ -1005,7 +1005,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
          * This needs to remain in sync with xen_in_range() and the
          * respective reserve_e820_ram() invocation below.
          */
-        mod[mbi->mods_count].mod_start = virt_to_mfn(_stext);
+        mod[mbi->mods_count].mod_start = mfn_x(virt_to_mfn(_stext));
         mod[mbi->mods_count].mod_end = __2M_rwdata_end - _stext;
     }
 
@@ -1404,7 +1404,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     {
         set_pdx_range(mod[i].mod_start,
                       mod[i].mod_start + PFN_UP(mod[i].mod_end));
-        map_pages_to_xen((unsigned long)mfn_to_virt(mod[i].mod_start),
+        map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(mod[i].mod_start)),
                          _mfn(mod[i].mod_start),
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
@@ -1494,9 +1494,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     numa_initmem_init(0, raw_max_page);
 
-    if ( max_page - 1 > virt_to_mfn(HYPERVISOR_VIRT_END - 1) )
+    if ( max_page - 1 > mfn_x(virt_to_mfn(HYPERVISOR_VIRT_END - 1)) )
     {
-        unsigned long limit = virt_to_mfn(HYPERVISOR_VIRT_END - 1);
+        unsigned long limit = mfn_x(virt_to_mfn(HYPERVISOR_VIRT_END - 1));
         uint64_t mask = PAGE_SIZE - 1;
 
         if ( !highmem_start )
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 09264b02d1..31b4366ab2 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -996,7 +996,7 @@ static int cpu_smpboot_alloc(unsigned int cpu)
         goto out;
     per_cpu(gdt, cpu) = gdt;
     per_cpu(gdt_l1e, cpu) =
-        l1e_from_pfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW);
+        l1e_from_mfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW);
     memcpy(gdt, boot_gdt, NR_RESERVED_GDT_PAGES * PAGE_SIZE);
     BUILD_BUG_ON(NR_CPUS > 0x10000);
     gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
@@ -1005,7 +1005,7 @@ static int cpu_smpboot_alloc(unsigned int cpu)
     if ( gdt == NULL )
         goto out;
     per_cpu(compat_gdt_l1e, cpu) =
-        l1e_from_pfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW);
+        l1e_from_mfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW);
     memcpy(gdt, boot_compat_gdt, NR_RESERVED_GDT_PAGES * PAGE_SIZE);
     gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
 
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 506a56d66b..0baf8b97ce 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -196,7 +196,7 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
 		return;
 	}
 	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
-	acpi_slit = mfn_to_virt(mfn_x(mfn));
+	acpi_slit = mfn_to_virt(mfn);
 	memcpy(acpi_slit, slit, slit->header.length);
 }
 
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 8c232270b4..19ea69f7c1 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -260,7 +260,7 @@ static int mfn_in_guarded_stack(unsigned long mfn)
             continue;
         p = (void *)((unsigned long)stack_base[i] + STACK_SIZE -
                      PRIMARY_STACK_SIZE - PAGE_SIZE);
-        if ( mfn == virt_to_mfn(p) )
+        if ( mfn_eq(_mfn(mfn), virt_to_mfn(p)) )
             return -1;
     }
 
@@ -296,7 +296,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
             if ( mfn_in_guarded_stack(mfn) )
                 continue; /* skip guard stack, see memguard_guard_stack() in mm.c */
 
-            pg = mfn_to_virt(mfn);
+            pg = mfn_to_virt(_mfn(mfn));
             vmac_update((uint8_t *)pg, PAGE_SIZE, &ctx);
         }
     }
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index e838846c6b..4aa7c35be4 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2029,9 +2029,9 @@ void __init trap_init(void)
 
     /* Cache {,compat_}gdt_l1e now that physically relocation is done. */
     this_cpu(gdt_l1e) =
-        l1e_from_pfn(virt_to_mfn(boot_gdt), __PAGE_HYPERVISOR_RW);
+        l1e_from_mfn(virt_to_mfn(boot_gdt), __PAGE_HYPERVISOR_RW);
     this_cpu(compat_gdt_l1e) =
-        l1e_from_pfn(virt_to_mfn(boot_compat_gdt), __PAGE_HYPERVISOR_RW);
+        l1e_from_mfn(virt_to_mfn(boot_compat_gdt), __PAGE_HYPERVISOR_RW);
 
     percpu_traps_init();
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 3516423bb0..ddd5f1ddc4 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1369,11 +1369,12 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
         return -EINVAL;
     }
 
-    i = virt_to_mfn(HYPERVISOR_VIRT_END - 1) + 1;
+    i = mfn_x(virt_to_mfn(HYPERVISOR_VIRT_END - 1)) + 1;
     if ( spfn < i )
     {
-        ret = map_pages_to_xen((unsigned long)mfn_to_virt(spfn), _mfn(spfn),
-                               min(epfn, i) - spfn, PAGE_HYPERVISOR);
+        ret = map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(spfn)),
+                               _mfn(spfn), min(epfn, i) - spfn,
+                               PAGE_HYPERVISOR);
         if ( ret )
             goto destroy_directmap;
     }
@@ -1381,7 +1382,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
     {
         if ( i < spfn )
             i = spfn;
-        ret = map_pages_to_xen((unsigned long)mfn_to_virt(i), _mfn(i),
+        ret = map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(i)), _mfn(i),
                                epfn - i, __PAGE_HYPERVISOR_RW);
         if ( ret )
             goto destroy_directmap;
@@ -1473,8 +1474,8 @@ destroy_frametable:
     NODE_DATA(node)->node_start_pfn = old_node_start;
     NODE_DATA(node)->node_spanned_pages = old_node_span;
  destroy_directmap:
-    destroy_xen_mappings((unsigned long)mfn_to_virt(spfn),
-                         (unsigned long)mfn_to_virt(epfn));
+    destroy_xen_mappings((unsigned long)mfn_to_virt(_mfn(spfn)),
+                         (unsigned long)mfn_to_virt(_mfn(epfn)));
 
     return ret;
 }
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a69b3b59a8..e4a055dc67 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -196,7 +196,8 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
     info->outstanding_pages = d->outstanding_pages;
     info->shr_pages         = atomic_read(&d->shr_pages);
     info->paged_pages       = atomic_read(&d->paged_pages);
-    info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info));
+    info->shared_info_frame = mfn_to_gmfn(d,
+                                          mfn_x(virt_to_mfn(d->shared_info)));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
 
     info->cpupool = cpupool_get_id(d);
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index a6f84c945a..4f944fb3e8 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1447,7 +1447,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
     {
         l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
         l3_pgentry_t *l3src, *l3dst;
-        unsigned long va = (unsigned long)mfn_to_virt(mfn);
+        unsigned long va = (unsigned long)mfn_to_virt(_mfn(mfn));
 
         next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
         if ( !is_valid(mfn, min(next, end)) )
@@ -1562,9 +1562,10 @@ void __init efi_init_memory(void)
              !(smfn & pfn_hole_mask) &&
              !((smfn ^ (emfn - 1)) & ~pfn_pdx_bottom_mask) )
         {
-            if ( (unsigned long)mfn_to_virt(emfn - 1) >= HYPERVISOR_VIRT_END )
+            if ( (unsigned long)mfn_to_virt(_mfn(emfn - 1)) >=
+                 HYPERVISOR_VIRT_END )
                 prot &= ~_PAGE_GLOBAL;
-            if ( map_pages_to_xen((unsigned long)mfn_to_virt(smfn),
+            if ( map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(smfn)),
                                   _mfn(smfn), emfn - smfn, prot) == 0 )
                 desc->VirtualStart =
                     (unsigned long)maddr_to_virt(desc->PhysicalStart);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9fd6e60416..407fdf08ff 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3935,8 +3935,8 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
     }
 
     /* Make sure idx is bounded wrt nr_status_frames */
-    *mfn = _mfn(virt_to_mfn(
-                gt->status[array_index_nospec(idx, nr_status_frames(gt))]));
+    *mfn = virt_to_mfn(
+                gt->status[array_index_nospec(idx, nr_status_frames(gt))]);
     return 0;
 }
 
@@ -3966,8 +3966,8 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
     }
 
     /* Make sure idx is bounded wrt nr_status_frames */
-    *mfn = _mfn(virt_to_mfn(
-                gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]));
+    *mfn = virt_to_mfn(
+                gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]);
     return 0;
 }
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 76d37226df..41e4fa899d 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -565,7 +565,7 @@ static unsigned int __read_mostly xenheap_bits;
 #define xenheap_bits 0
 #endif
 
-static unsigned long init_node_heap(int node, unsigned long mfn,
+static unsigned long init_node_heap(int node, mfn_t mfn,
                                     unsigned long nr, bool *use_tail)
 {
     /* First node to be discovered has its heap metadata statically alloced. */
@@ -584,21 +584,21 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
         needed = 0;
     }
     else if ( *use_tail && nr >= needed &&
-              arch_mfn_in_directmap(mfn + nr) &&
+              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, nr))) &&
               (!xenheap_bits ||
-               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
+               !((mfn_x(mfn) + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
-        _heap[node] = mfn_to_virt(mfn + nr - needed);
-        avail[node] = mfn_to_virt(mfn + nr - 1) +
+        _heap[node] = mfn_to_virt(mfn_add(mfn, nr - needed));
+        avail[node] = mfn_to_virt(mfn_add(mfn, nr - 1)) +
                       PAGE_SIZE - sizeof(**avail) * NR_ZONES;
     }
     else if ( nr >= needed &&
-              arch_mfn_in_directmap(mfn + needed) &&
+              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, needed))) &&
               (!xenheap_bits ||
-               !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
+               !((mfn_x(mfn) + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
         _heap[node] = mfn_to_virt(mfn);
-        avail[node] = mfn_to_virt(mfn + needed - 1) +
+        avail[node] = mfn_to_virt(mfn_add(mfn, needed - 1)) +
                       PAGE_SIZE - sizeof(**avail) * NR_ZONES;
         *use_tail = false;
     }
@@ -1809,7 +1809,7 @@ static void init_heap_pages(
                             (find_first_set_bit(e) <= find_first_set_bit(s));
             unsigned long n;
 
-            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
+            n = init_node_heap(nid, page_to_mfn(pg + i), nr_pages - i,
                                &use_tail);
             BUG_ON(i + n > nr_pages);
             if ( n && !use_tail )
diff --git a/xen/common/trace.c b/xen/common/trace.c
index a2a389a1c7..8dbbcd31de 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -218,7 +218,7 @@ static int alloc_trace_bufs(unsigned int pages)
                 t_info_mfn_list[offset + i] = 0;
                 goto out_dealloc;
             }
-            t_info_mfn_list[offset + i] = virt_to_mfn(p);
+            t_info_mfn_list[offset + i] = mfn_x(virt_to_mfn(p));
         }
     }
 
@@ -234,7 +234,8 @@ static int alloc_trace_bufs(unsigned int pages)
         offset = t_info->mfn_offset[cpu];
 
         /* Initialize the buffer metadata */
-        per_cpu(t_bufs, cpu) = buf = mfn_to_virt(t_info_mfn_list[offset]);
+        buf = mfn_to_virt(_mfn(t_info_mfn_list[offset]));
+        per_cpu(t_bufs, cpu) = buf;
         buf->cons = buf->prod = 0;
 
         printk(XENLOG_INFO "xentrace: p%d mfn %x offset %u\n",
@@ -269,10 +270,10 @@ out_dealloc:
             continue;
         for ( i = 0; i < pages; i++ )
         {
-            uint32_t mfn = t_info_mfn_list[offset + i];
-            if ( !mfn )
+            mfn_t mfn = _mfn(t_info_mfn_list[offset + i]);
+            if ( mfn_eq(mfn, _mfn(0)) )
                 break;
-            ASSERT(!(mfn_to_page(_mfn(mfn))->count_info & PGC_allocated));
+            ASSERT(!(mfn_to_page(mfn)->count_info & PGC_allocated));
             free_xenheap_pages(mfn_to_virt(mfn), 0);
         }
     }
@@ -378,7 +379,7 @@ int tb_control(struct xen_sysctl_tbuf_op *tbc)
     {
     case XEN_SYSCTL_TBUFOP_get_info:
         tbc->evt_mask   = tb_event_mask;
-        tbc->buffer_mfn = t_info ? virt_to_mfn(t_info) : 0;
+        tbc->buffer_mfn = t_info ? mfn_x(virt_to_mfn(t_info)) : 0;
         tbc->size = t_info_pages * PAGE_SIZE;
         break;
     case XEN_SYSCTL_TBUFOP_set_cpu_mask:
@@ -512,7 +513,7 @@ static unsigned char *next_record(const struct t_buf *buf, uint32_t *next,
     uint16_t per_cpu_mfn_offset;
     uint32_t per_cpu_mfn_nr;
     uint32_t *mfn_list;
-    uint32_t mfn;
+    mfn_t mfn;
     unsigned char *this_page;
 
     barrier(); /* must read buf->prod and buf->cons only once */
@@ -533,7 +534,7 @@ static unsigned char *next_record(const struct t_buf *buf, uint32_t *next,
     per_cpu_mfn_nr = x >> PAGE_SHIFT;
     per_cpu_mfn_offset = t_info->mfn_offset[smp_processor_id()];
     mfn_list = (uint32_t *)t_info;
-    mfn = mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr];
+    mfn = _mfn(mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr]);
     this_page = mfn_to_virt(mfn);
     if (per_cpu_mfn_nr + 1 >= opt_tbuf_size)
     {
@@ -542,7 +543,7 @@ static unsigned char *next_record(const struct t_buf *buf, uint32_t *next,
     }
     else
     {
-        mfn = mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr + 1];
+        mfn = _mfn(mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr + 1]);
         *next_page = mfn_to_virt(mfn);
     }
     return this_page;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index 4f3e799ebb..2721e99da7 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -19,10 +19,6 @@
 #include <xsm/xsm.h>
 #include <xen/hypercall.h>
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-
 /* Limit amount of pages used for shared buffer (per domain) */
 #define MAX_OPROF_SHARED_PAGES 32
 
diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
index 4c8bb7839e..ca38565507 100644
--- a/xen/drivers/acpi/osl.c
+++ b/xen/drivers/acpi/osl.c
@@ -219,7 +219,7 @@ void *__init acpi_os_alloc_memory(size_t sz)
 	void *ptr;
 
 	if (system_state == SYS_STATE_early_boot)
-		return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
+		return mfn_to_virt(alloc_boot_pages(PFN_UP(sz), 1));
 
 	ptr = xmalloc_bytes(sz);
 	ASSERT(!ptr || is_xmalloc_memory(ptr));
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 7df91280bc..abf4cc23e4 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -285,16 +285,8 @@ static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa,
 #define __va(x)             (maddr_to_virt(x))
 
 /* Convert between Xen-heap virtual addresses and machine frame numbers. */
-#define __virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT)
-#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
-
-/*
- * We define non-underscored wrappers for above conversion functions.
- * These are overriden in various source files while underscored version
- * remain intact.
- */
-#define virt_to_mfn(va)     __virt_to_mfn(va)
-#define mfn_to_virt(mfn)    __mfn_to_virt(mfn)
+#define virt_to_mfn(va)     maddr_to_mfn(virt_to_maddr(va))
+#define mfn_to_virt(mfn)    maddr_to_virt(mfn_to_maddr(mfn))
 
 /* Convert between Xen-heap virtual addresses and page-info structures. */
 static inline struct page_info *virt_to_page(const void *v)
@@ -312,7 +304,7 @@ static inline struct page_info *virt_to_page(const void *v)
 
 static inline void *page_to_virt(const struct page_info *pg)
 {
-    return mfn_to_virt(mfn_x(page_to_mfn(pg)));
+    return mfn_to_virt(page_to_mfn(pg));
 }
 
 struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 84e32960c0..5871238f6d 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -45,11 +45,11 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame,
     VALID_M2P(gpfn_) ? _gfn(gpfn_) : INVALID_GFN;                        \
 })
 
-#define gnttab_shared_mfn(t, i) _mfn(__virt_to_mfn((t)->shared_raw[i]))
+#define gnttab_shared_mfn(t, i) virt_to_mfn((t)->shared_raw[i])
 
 #define gnttab_shared_gfn(d, t, i) mfn_to_gfn(d, gnttab_shared_mfn(t, i))
 
-#define gnttab_status_mfn(t, i) _mfn(__virt_to_mfn((t)->status[i]))
+#define gnttab_status_mfn(t, i) virt_to_mfn((t)->status[i])
 
 #define gnttab_status_gfn(d, t, i) mfn_to_gfn(d, gnttab_status_mfn(t, i))
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 9764362a38..83058fb8d1 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -667,7 +667,7 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
 {
     unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
 
-    return mfn <= (virt_to_mfn(eva - 1) + 1);
+    return mfn <= mfn_x(mfn_add(virt_to_mfn(eva - 1),  1));
 }
 
 int arch_acquire_resource(struct domain *d, unsigned int type,
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index c98d8f5ede..624dbbb949 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -236,8 +236,8 @@ void copy_page_sse2(void *, const void *);
 #define __va(x)             (maddr_to_virt(x))
 
 /* Convert between Xen-heap virtual addresses and machine frame numbers. */
-#define __virt_to_mfn(va)   (virt_to_maddr(va) >> PAGE_SHIFT)
-#define __mfn_to_virt(mfn)  (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
+#define virt_to_mfn(va)     maddr_to_mfn(virt_to_maddr(va))
+#define mfn_to_virt(mfn)    maddr_to_virt(mfn_to_maddr(mfn))
 
 /* Convert between machine frame numbers and page-info structures. */
 #define mfn_to_page(mfn)    (frame_table + mfn_to_pdx(mfn))
@@ -260,8 +260,6 @@ void copy_page_sse2(void *, const void *);
  * overridden in various source files while underscored versions remain intact.
  */
 #define mfn_valid(mfn)      __mfn_valid(mfn_x(mfn))
-#define virt_to_mfn(va)     __virt_to_mfn(va)
-#define mfn_to_virt(mfn)    __mfn_to_virt(mfn)
 #define virt_to_maddr(va)   __virt_to_maddr((unsigned long)(va))
 #define maddr_to_virt(ma)   __maddr_to_virt((unsigned long)(ma))
 #define maddr_to_page(ma)   __maddr_to_page(ma)
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index ab2be7b719..0314845921 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -53,14 +53,14 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
 
 #else /* !CONFIG_DOMAIN_PAGE */
 
-#define map_domain_page(mfn)                __mfn_to_virt(mfn_x(mfn))
+#define map_domain_page(mfn)                mfn_to_virt(mfn)
 #define __map_domain_page(pg)               page_to_virt(pg)
 #define unmap_domain_page(va)               ((void)(va))
-#define domain_page_map_to_mfn(va)          _mfn(virt_to_mfn((unsigned long)(va)))
+#define domain_page_map_to_mfn(va)          virt_to_mfn((unsigned long)(va))
 
 static inline void *map_domain_page_global(mfn_t mfn)
 {
-    return mfn_to_virt(mfn_x(mfn));
+    return mfn_to_virt(mfn);
 }
 
 static inline void *__map_domain_page_global(const struct page_info *pg)
-- 
2.17.1



* [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (3 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN julien
@ 2020-03-22 16:14 ` julien
  2020-03-26 15:39   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn' julien
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, julien, Jun Nakajima, Wei Liu, Andrew Cooper,
	Julien Grall, Tim Deegan, George Dunlap, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Most users of the pagetable_* helpers can use the typesafe version.
Therefore, it is time to convert the callers still using the
non-typesafe version over to the typesafe one.

Some parts of the code assume that a pagetable is NULL when the MFN is 0.
Where possible, this is replaced with the helper pagetable_is_null().

There are still some places that test against MFN 0, and it is not clear
whether other, unconverted parts of the code rely on that value. So, for
now, the NULL value is not changed to INVALID_MFN.
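
As an illustration only, here is a minimal before/after sketch of a
typical caller, using the same helpers as the hunks below (not a quote
from the tree):

    /* Before: a raw frame number; nothing stops it being mixed up
     * with a GFN or an arbitrary unsigned long. */
    unsigned long pfn = pagetable_get_pfn(v->arch.guest_table);
    if ( pfn == 0 )
        return;

    /* After: typesafe mfn_t, plus the dedicated NULL check. */
    mfn_t mfn = pagetable_get_mfn(v->arch.guest_table);
    if ( pagetable_is_null(v->arch.guest_table) )
        return;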

No functional changes intended.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/domain.c          | 18 ++++++++-------
 xen/arch/x86/domctl.c          |  6 ++---
 xen/arch/x86/hvm/vmx/vmcs.c    |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c     |  2 +-
 xen/arch/x86/hvm/vmx/vvmx.c    |  2 +-
 xen/arch/x86/mm.c              | 40 +++++++++++++++++-----------------
 xen/arch/x86/mm/hap/hap.c      |  2 +-
 xen/arch/x86/mm/p2m-ept.c      |  2 +-
 xen/arch/x86/mm/p2m-pt.c       |  4 ++--
 xen/arch/x86/mm/p2m.c          |  2 +-
 xen/arch/x86/mm/shadow/multi.c | 24 ++++++++++----------
 xen/arch/x86/pv/dom0_build.c   | 10 ++++-----
 xen/arch/x86/traps.c           |  6 ++---
 xen/include/asm-x86/page.h     | 19 ++++++++--------
 14 files changed, 70 insertions(+), 69 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 15750ce210..18d8fda9bd 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -952,25 +952,27 @@ int arch_set_info_guest(
     }
     else
     {
-        unsigned long pfn = pagetable_get_pfn(v->arch.guest_table);
+        mfn_t mfn = pagetable_get_mfn(v->arch.guest_table);
         bool fail;
 
         if ( !compat )
         {
-            fail = xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[3];
+            fail = mfn_to_cr3(mfn) != c.nat->ctrlreg[3];
             if ( pagetable_is_null(v->arch.guest_table_user) )
                 fail |= c.nat->ctrlreg[1] || !(flags & VGCF_in_kernel);
             else
             {
-                pfn = pagetable_get_pfn(v->arch.guest_table_user);
-                fail |= xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[1];
+                mfn = pagetable_get_mfn(v->arch.guest_table_user);
+                fail |= mfn_to_cr3(mfn) != c.nat->ctrlreg[1];
             }
-        } else {
-            l4_pgentry_t *l4tab = map_domain_page(_mfn(pfn));
+        }
+        else
+        {
+            l4_pgentry_t *l4tab = map_domain_page(mfn);
 
-            pfn = l4e_get_pfn(*l4tab);
+            mfn = l4e_get_mfn(*l4tab);
             unmap_domain_page(l4tab);
-            fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3];
+            fail = compat_pfn_to_cr3(mfn_x(mfn)) != c.cmp->ctrlreg[3];
         }
 
         for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ed86762fa6..02596c3810 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1611,11 +1611,11 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
 
         if ( !compat )
         {
-            c.nat->ctrlreg[3] = xen_pfn_to_cr3(
-                pagetable_get_pfn(v->arch.guest_table));
+            c.nat->ctrlreg[3] = mfn_to_cr3(
+                pagetable_get_mfn(v->arch.guest_table));
             c.nat->ctrlreg[1] =
                 pagetable_is_null(v->arch.guest_table_user) ? 0
-                : xen_pfn_to_cr3(pagetable_get_pfn(v->arch.guest_table_user));
+                : mfn_to_cr3(pagetable_get_mfn(v->arch.guest_table_user));
         }
         else
         {
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 4c23645454..1f39367253 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1290,7 +1290,7 @@ static int construct_vmcs(struct vcpu *v)
         struct p2m_domain *p2m = p2m_get_hostp2m(d);
         struct ept_data *ept = &p2m->ept;
 
-        ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
+        ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m)));
         __vmwrite(EPT_POINTER, ept->eptp);
 
         __vmwrite(HOST_PAT, XEN_MSR_PAT);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d265ed46ad..a1e3a19c0a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2110,7 +2110,7 @@ static void vmx_vcpu_update_eptp(struct vcpu *v)
         p2m = p2m_get_hostp2m(d);
 
     ept = &p2m->ept;
-    ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m)));
 
     vmx_vmcs_enter(v);
 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f049920196..84b47ef277 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1149,7 +1149,7 @@ static uint64_t get_shadow_eptp(struct vcpu *v)
     struct p2m_domain *p2m = p2m_get_nestedp2m(v);
     struct ept_data *ept = &p2m->ept;
 
-    ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m)));
     return ept->eptp;
 }
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7c0f81759a..aa0bf3d0ee 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3085,7 +3085,7 @@ int put_old_guest_table(struct vcpu *v)
 
 int vcpu_destroy_pagetables(struct vcpu *v)
 {
-    unsigned long mfn = pagetable_get_pfn(v->arch.guest_table);
+    mfn_t mfn = pagetable_get_mfn(v->arch.guest_table);
     struct page_info *page = NULL;
     int rc = put_old_guest_table(v);
     bool put_guest_table_user = false;
@@ -3102,9 +3102,9 @@ int vcpu_destroy_pagetables(struct vcpu *v)
      */
     if ( is_pv_32bit_vcpu(v) )
     {
-        l4_pgentry_t *l4tab = map_domain_page(_mfn(mfn));
+        l4_pgentry_t *l4tab = map_domain_page(mfn);
 
-        mfn = l4e_get_pfn(*l4tab);
+        mfn = l4e_get_mfn(*l4tab);
         l4e_write(l4tab, l4e_empty());
         unmap_domain_page(l4tab);
     }
@@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
 
     /* Free that page if non-zero */
     do {
-        if ( mfn )
+        if ( !mfn_eq(mfn, _mfn(0)) )
         {
-            page = mfn_to_page(_mfn(mfn));
+            page = mfn_to_page(mfn);
             if ( paging_mode_refcounts(v->domain) )
                 put_page(page);
             else
                 rc = put_page_and_type_preemptible(page);
-            mfn = 0;
+            mfn = _mfn(0);
         }
 
         if ( !rc && put_guest_table_user )
         {
             /* Drop ref to guest_table_user (from MMUEXT_NEW_USER_BASEPTR) */
-            mfn = pagetable_get_pfn(v->arch.guest_table_user);
+            mfn = pagetable_get_mfn(v->arch.guest_table_user);
             v->arch.guest_table_user = pagetable_null();
             put_guest_table_user = false;
         }
-    } while ( mfn );
+    } while ( !mfn_eq(mfn, _mfn(0)) );
 
     /*
      * If a "put" operation was interrupted, finish things off in
@@ -3551,7 +3551,8 @@ long do_mmuext_op(
             break;
 
         case MMUEXT_NEW_USER_BASEPTR: {
-            unsigned long old_mfn;
+            mfn_t old_mfn;
+            mfn_t new_mfn = _mfn(op.arg1.mfn);
 
             if ( unlikely(currd != pg_owner) )
                 rc = -EPERM;
@@ -3560,19 +3561,18 @@ long do_mmuext_op(
             if ( unlikely(rc) )
                 break;
 
-            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
+            old_mfn = pagetable_get_mfn(curr->arch.guest_table_user);
             /*
              * This is particularly important when getting restarted after the
              * previous attempt got preempted in the put-old-MFN phase.
              */
-            if ( old_mfn == op.arg1.mfn )
+            if ( mfn_eq(old_mfn, new_mfn) )
                 break;
 
-            if ( op.arg1.mfn != 0 )
+            if ( !mfn_eq(new_mfn, _mfn(0)) )
             {
-                rc = get_page_and_type_from_mfn(
-                    _mfn(op.arg1.mfn), PGT_root_page_table, currd, PTF_preemptible);
-
+                rc = get_page_and_type_from_mfn(new_mfn, PGT_root_page_table,
+                                                currd, PTF_preemptible);
                 if ( unlikely(rc) )
                 {
                     if ( rc == -EINTR )
@@ -3580,19 +3580,19 @@ long do_mmuext_op(
                     else if ( rc != -ERESTART )
                         gdprintk(XENLOG_WARNING,
                                  "Error %d installing new mfn %" PRI_mfn "\n",
-                                 rc, op.arg1.mfn);
+                                 rc, mfn_x(new_mfn));
                     break;
                 }
 
                 if ( VM_ASSIST(currd, m2p_strict) )
-                    zap_ro_mpt(_mfn(op.arg1.mfn));
+                    zap_ro_mpt(new_mfn);
             }
 
-            curr->arch.guest_table_user = pagetable_from_pfn(op.arg1.mfn);
+            curr->arch.guest_table_user = pagetable_from_mfn(new_mfn);
 
-            if ( old_mfn != 0 )
+            if ( !mfn_eq(old_mfn, _mfn(0)) )
             {
-                page = mfn_to_page(_mfn(old_mfn));
+                page = mfn_to_page(old_mfn);
 
                 switch ( rc = put_page_and_type_preemptible(page) )
                 {
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a6d5e39b02..051e92169a 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -394,7 +394,7 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     l4_pgentry_t *l4e;
     mfn_t m4mfn;
 
-    ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0);
+    ASSERT(pagetable_is_null(v->arch.monitor_table));
 
     if ( (pg = hap_alloc(d)) == NULL )
         goto oom;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index eb0f0edfef..346696e469 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1366,7 +1366,7 @@ void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
 
     p2m->ept.ad = hostp2m->ept.ad;
     ept = &p2m->ept;
-    ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m)));
     d->arch.altp2m_eptp[array_index_nospec(i, MAX_EPTP)] = ept->eptp;
 }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index eb66077496..cccb06c26e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -867,7 +867,7 @@ static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
     unsigned long gfn = 0;
     unsigned int i, changed;
 
-    if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) == 0 )
+    if ( pagetable_is_null(p2m_get_pagetable(p2m)) )
         return;
 
     ASSERT(hap_enabled(p2m->domain));
@@ -950,7 +950,7 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
     ASSERT(pod_locked_by_me(p2m));
 
     /* Audit part one: walk the domain's p2m table, checking the entries. */
-    if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) != 0 )
+    if ( !pagetable_is_null(p2m_get_pagetable(p2m)) )
     {
         l2_pgentry_t *l2e;
         l1_pgentry_t *l1e;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9f51370327..45b4b784d3 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -702,7 +702,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
         return -EINVAL;
     }
 
-    if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) != 0 )
+    if ( !pagetable_is_null(p2m_get_pagetable(p2m)) )
     {
         P2M_ERROR("p2m already allocated for this domain\n");
         p2m_unlock(p2m);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index b6afc0fba4..5751dae344 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1520,7 +1520,7 @@ sh_make_monitor_table(struct vcpu *v)
 {
     struct domain *d = v->domain;
 
-    ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0);
+    ASSERT(pagetable_is_null(v->arch.monitor_table));
 
     /* Guarantee we can get the memory we need */
     shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
@@ -2351,11 +2351,11 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
     ASSERT(mfn_valid(smfn));
 #endif
 
-    if ( pagetable_get_pfn(v->arch.shadow_table[0]) == mfn_x(smfn)
+    if ( mfn_eq(pagetable_get_mfn(v->arch.shadow_table[0]), smfn)
 #if (SHADOW_PAGING_LEVELS == 3)
-         || pagetable_get_pfn(v->arch.shadow_table[1]) == mfn_x(smfn)
-         || pagetable_get_pfn(v->arch.shadow_table[2]) == mfn_x(smfn)
-         || pagetable_get_pfn(v->arch.shadow_table[3]) == mfn_x(smfn)
+         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[1]), smfn)
+         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[2]), smfn)
+         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[3]), smfn)
 #endif
         )
         return 0;
@@ -3707,7 +3707,7 @@ sh_update_linear_entries(struct vcpu *v)
 
     /* Don't try to update the monitor table if it doesn't exist */
     if ( shadow_mode_external(d)
-         && pagetable_get_pfn(v->arch.monitor_table) == 0 )
+         && pagetable_is_null(v->arch.monitor_table) )
         return;
 
 #if SHADOW_PAGING_LEVELS == 4
@@ -3722,7 +3722,7 @@ sh_update_linear_entries(struct vcpu *v)
         if ( v == current )
         {
             __linear_l4_table[l4_linear_offset(SH_LINEAR_PT_VIRT_START)] =
-                l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
+                l4e_from_mfn(pagetable_get_mfn(v->arch.shadow_table[0]),
                              __PAGE_HYPERVISOR_RW);
         }
         else
@@ -3730,7 +3730,7 @@ sh_update_linear_entries(struct vcpu *v)
             l4_pgentry_t *ml4e;
             ml4e = map_domain_page(pagetable_get_mfn(v->arch.monitor_table));
             ml4e[l4_table_offset(SH_LINEAR_PT_VIRT_START)] =
-                l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
+                l4e_from_mfn(pagetable_get_mfn(v->arch.shadow_table[0]),
                              __PAGE_HYPERVISOR_RW);
             unmap_domain_page(ml4e);
         }
@@ -3964,15 +3964,15 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     {
         ASSERT(shadow_mode_external(d));
         if ( hvm_paging_enabled(v) )
-            ASSERT(pagetable_get_pfn(v->arch.guest_table));
+            ASSERT(!pagetable_is_null(v->arch.guest_table));
         else
-            ASSERT(v->arch.guest_table.pfn
-                   == d->arch.paging.shadow.unpaged_pagetable.pfn);
+            ASSERT(mfn_eq(pagetable_get_mfn(v->arch.guest_table),
+                          pagetable_get_mfn(d->arch.paging.shadow.unpaged_pagetable)));
     }
 #endif
 
     SHADOW_PRINTK("%pv guest_table=%"PRI_mfn"\n",
-                  v, (unsigned long)pagetable_get_pfn(v->arch.guest_table));
+                  v, mfn_x(pagetable_get_mfn(v->arch.guest_table)));
 
 #if GUEST_PAGING_LEVELS == 4
     if ( !(v->arch.flags & TF_kernel_mode) )
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 30846b5f97..8abd5d255c 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -93,14 +93,14 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
     }
 }
 
-static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
+static __init void setup_pv_physmap(struct domain *d, mfn_t pgtbl_mfn,
                                     unsigned long v_start, unsigned long v_end,
                                     unsigned long vphysmap_start,
                                     unsigned long vphysmap_end,
                                     unsigned long nr_pages)
 {
     struct page_info *page = NULL;
-    l4_pgentry_t *pl4e, *l4start = map_domain_page(_mfn(pgtbl_pfn));
+    l4_pgentry_t *pl4e, *l4start = map_domain_page(pgtbl_mfn);
     l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e = NULL;
     l1_pgentry_t *pl1e = NULL;
@@ -760,11 +760,9 @@ int __init dom0_construct_pv(struct domain *d,
 
     /* Set up the phys->machine table if not part of the initial mapping. */
     if ( parms.p2m_base != UNSET_ADDR )
-    {
-        pfn = pagetable_get_pfn(v->arch.guest_table);
-        setup_pv_physmap(d, pfn, v_start, v_end, vphysmap_start, vphysmap_end,
+        setup_pv_physmap(d, pagetable_get_mfn(v->arch.guest_table),
+                         v_start, v_end, vphysmap_start, vphysmap_end,
                          nr_pages);
-    }
 
     /* Write the phys->machine and machine->phys table entries. */
     for ( pfn = 0; pfn < count; pfn++ )
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 4aa7c35be4..04a3ebc0a2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -247,12 +247,12 @@ static void compat_show_guest_stack(struct vcpu *v,
     if ( v != current )
     {
         struct vcpu *vcpu;
-        unsigned long mfn;
+        mfn_t mfn;
 
         ASSERT(guest_kernel_mode(v, regs));
-        mfn = read_cr3() >> PAGE_SHIFT;
+        mfn = cr3_to_mfn(read_cr3());
         for_each_vcpu( v->domain, vcpu )
-            if ( pagetable_get_pfn(vcpu->arch.guest_table) == mfn )
+            if ( mfn_eq(pagetable_get_mfn(vcpu->arch.guest_table), mfn) )
                 break;
         if ( !vcpu )
         {
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 624dbbb949..377ba14f6e 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -18,6 +18,7 @@
 #ifndef __ASSEMBLY__
 # include <asm/types.h>
 # include <xen/lib.h>
+# include <xen/mm_types.h>
 #endif
 
 #include <asm/x86_64/page.h>
@@ -213,17 +214,17 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #ifndef __ASSEMBLY__
 
 /* Page-table type. */
-typedef struct { u64 pfn; } pagetable_t;
-#define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
+typedef struct { mfn_t mfn; } pagetable_t;
+#define PAGETABLE_NULL_MFN      _mfn(0)
+
+#define pagetable_get_paddr(x)  mfn_to_maddr((x).mfn)
 #define pagetable_get_page(x)   mfn_to_page(pagetable_get_mfn(x))
-#define pagetable_get_pfn(x)    ((x).pfn)
-#define pagetable_get_mfn(x)    _mfn(((x).pfn))
-#define pagetable_is_null(x)    ((x).pfn == 0)
-#define pagetable_from_pfn(pfn) ((pagetable_t) { (pfn) })
-#define pagetable_from_mfn(mfn) ((pagetable_t) { mfn_x(mfn) })
+#define pagetable_get_mfn(x)    ((x).mfn)
+#define pagetable_is_null(x)    mfn_eq((x).mfn, PAGETABLE_NULL_MFN)
+#define pagetable_from_mfn(mfn) ((pagetable_t) { mfn })
 #define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
-#define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT)
-#define pagetable_null()        pagetable_from_pfn(0)
+#define pagetable_from_paddr(p) pagetable_from_mfn(maddr_to_mfn(p))
+#define pagetable_null()        pagetable_from_mfn(PAGETABLE_NULL_MFN)
 
 void clear_page_sse2(void *);
 void copy_page_sse2(void *, const void *);
-- 
2.17.1



* [Xen-devel] [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn'
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (4 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers julien
@ 2020-03-22 16:14 ` julien
  2020-03-26 15:51   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN julien
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

We use 'mfn' to refer to a machine frame. As this function deals with an
'mfn', replace 'pfn' with 'mfn' in the comment.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

I am not entirely sure I understand the comment on top of the
function, so this change may be wrong.
---
 xen/arch/x86/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index aa0bf3d0ee..65bc03984d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1321,7 +1321,7 @@ static int put_data_pages(struct page_info *page, bool writeable, int pt_shift)
 }
 
 /*
- * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
+ * NB. Virtual address 'l2e' maps to a machine address within frame 'mfn'.
  * Note also that this automatically deals correctly with linear p.t.'s.
  */
 static int put_page_from_l2e(l2_pgentry_t l2e, mfn_t l2mfn, unsigned int flags)
-- 
2.17.1



* [Xen-devel] [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (5 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn' julien
@ 2020-03-22 16:14 ` julien
  2020-03-26 15:54   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 08/17] xen/x86: traps: Convert show_page_walk() " julien
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Note that the code is now using cr3_to_mfn() to get the MFN. This is
slightly different, as the top 12 bits will now be masked off.
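
To make the difference concrete, a rough sketch of the masking (this
illustrates the behaviour described above; it is not the actual
definition of cr3_to_mfn(), which is introduced earlier in the series):

    /* Before: plain shift of the whole register, so any of the top
     * 12 bits of CR3 (if set) would leak into the frame number. */
    unsigned long old_mfn = cr3 >> PAGE_SHIFT;

    /* After (sketch only): bits 63:52 are masked off first. */
    mfn_t new_mfn = _mfn((cr3 & ((1UL << 52) - 1)) >> PAGE_SHIFT);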

No functional changes intended.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/traps.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 04a3ebc0a2..4f524dc71e 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1232,7 +1232,8 @@ enum pf_type {
 static enum pf_type __page_fault_type(unsigned long addr,
                                       const struct cpu_user_regs *regs)
 {
-    unsigned long mfn, cr3 = read_cr3();
+    mfn_t mfn;
+    unsigned long cr3 = read_cr3();
     l4_pgentry_t l4e, *l4t;
     l3_pgentry_t l3e, *l3t;
     l2_pgentry_t l2e, *l2t;
@@ -1264,20 +1265,20 @@ static enum pf_type __page_fault_type(unsigned long addr,
 
     page_user = _PAGE_USER;
 
-    mfn = cr3 >> PAGE_SHIFT;
+    mfn = cr3_to_mfn(cr3);
 
-    l4t = map_domain_page(_mfn(mfn));
+    l4t = map_domain_page(mfn);
     l4e = l4e_read_atomic(&l4t[l4_table_offset(addr)]);
-    mfn = l4e_get_pfn(l4e);
+    mfn = l4e_get_mfn(l4e);
     unmap_domain_page(l4t);
     if ( ((l4e_get_flags(l4e) & required_flags) != required_flags) ||
          (l4e_get_flags(l4e) & disallowed_flags) )
         return real_fault;
     page_user &= l4e_get_flags(l4e);
 
-    l3t  = map_domain_page(_mfn(mfn));
+    l3t  = map_domain_page(mfn);
     l3e = l3e_read_atomic(&l3t[l3_table_offset(addr)]);
-    mfn = l3e_get_pfn(l3e);
+    mfn = l3e_get_mfn(l3e);
     unmap_domain_page(l3t);
     if ( ((l3e_get_flags(l3e) & required_flags) != required_flags) ||
          (l3e_get_flags(l3e) & disallowed_flags) )
@@ -1286,9 +1287,9 @@ static enum pf_type __page_fault_type(unsigned long addr,
     if ( l3e_get_flags(l3e) & _PAGE_PSE )
         goto leaf;
 
-    l2t = map_domain_page(_mfn(mfn));
+    l2t = map_domain_page(mfn);
     l2e = l2e_read_atomic(&l2t[l2_table_offset(addr)]);
-    mfn = l2e_get_pfn(l2e);
+    mfn = l2e_get_mfn(l2e);
     unmap_domain_page(l2t);
     if ( ((l2e_get_flags(l2e) & required_flags) != required_flags) ||
          (l2e_get_flags(l2e) & disallowed_flags) )
@@ -1297,9 +1298,9 @@ static enum pf_type __page_fault_type(unsigned long addr,
     if ( l2e_get_flags(l2e) & _PAGE_PSE )
         goto leaf;
 
-    l1t = map_domain_page(_mfn(mfn));
+    l1t = map_domain_page(mfn);
     l1e = l1e_read_atomic(&l1t[l1_table_offset(addr)]);
-    mfn = l1e_get_pfn(l1e);
+    mfn = l1e_get_mfn(l1e);
     unmap_domain_page(l1t);
     if ( ((l1e_get_flags(l1e) & required_flags) != required_flags) ||
          (l1e_get_flags(l1e) & disallowed_flags) )
-- 
2.17.1



* [Xen-devel] [PATCH 08/17] xen/x86: traps: Convert show_page_walk() to use typesafe MFN
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (6 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN julien
@ 2020-03-22 16:14 ` julien
  2020-03-22 16:14 ` [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn() julien
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Note that the code is now using cr3_to_mfn() to get the MFN. This is
slightly different, as the top 12 bits will now be masked off.

No functional changes intended.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/x86_64/traps.c | 42 ++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index c3d4faea6b..811c2cb37b 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -184,7 +184,8 @@ void vcpu_show_registers(const struct vcpu *v)
 
 void show_page_walk(unsigned long addr)
 {
-    unsigned long pfn, mfn = read_cr3() >> PAGE_SHIFT;
+    unsigned long pfn;
+    mfn_t mfn = cr3_to_mfn(read_cr3());
     l4_pgentry_t l4e, *l4t;
     l3_pgentry_t l3e, *l3t;
     l2_pgentry_t l2e, *l2t;
@@ -194,52 +195,51 @@ void show_page_walk(unsigned long addr)
     if ( !is_canonical_address(addr) )
         return;
 
-    l4t = map_domain_page(_mfn(mfn));
+    l4t = map_domain_page(mfn);
     l4e = l4t[l4_table_offset(addr)];
     unmap_domain_page(l4t);
-    mfn = l4e_get_pfn(l4e);
-    pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
+    mfn = l4e_get_mfn(l4e);
+    pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
+          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
     printk(" L4[0x%03lx] = %"PRIpte" %016lx\n",
            l4_table_offset(addr), l4e_get_intpte(l4e), pfn);
-    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) ||
-         !mfn_valid(_mfn(mfn)) )
+    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) || !mfn_valid(mfn) )
         return;
 
-    l3t = map_domain_page(_mfn(mfn));
+    l3t = map_domain_page(mfn);
     l3e = l3t[l3_table_offset(addr)];
     unmap_domain_page(l3t);
-    mfn = l3e_get_pfn(l3e);
-    pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
+    mfn = l3e_get_mfn(l3e);
+    pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
+          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
     printk(" L3[0x%03lx] = %"PRIpte" %016lx%s\n",
            l3_table_offset(addr), l3e_get_intpte(l3e), pfn,
            (l3e_get_flags(l3e) & _PAGE_PSE) ? " (PSE)" : "");
     if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
          (l3e_get_flags(l3e) & _PAGE_PSE) ||
-         !mfn_valid(_mfn(mfn)) )
+         !mfn_valid(mfn) )
         return;
 
-    l2t = map_domain_page(_mfn(mfn));
+    l2t = map_domain_page(mfn);
     l2e = l2t[l2_table_offset(addr)];
     unmap_domain_page(l2t);
-    mfn = l2e_get_pfn(l2e);
-    pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
+    mfn = l2e_get_mfn(l2e);
+    pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
+          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
     printk(" L2[0x%03lx] = %"PRIpte" %016lx%s\n",
            l2_table_offset(addr), l2e_get_intpte(l2e), pfn,
            (l2e_get_flags(l2e) & _PAGE_PSE) ? " (PSE)" : "");
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) ||
-         !mfn_valid(_mfn(mfn)) )
+         !mfn_valid(mfn) )
         return;
 
-    l1t = map_domain_page(_mfn(mfn));
+    l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(addr)];
     unmap_domain_page(l1t);
-    mfn = l1e_get_pfn(l1e);
-    pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
+    mfn = l1e_get_mfn(l1e);
+    pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
+          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
     printk(" L1[0x%03lx] = %"PRIpte" %016lx\n",
            l1_table_offset(addr), l1e_get_intpte(l1e), pfn);
 }
-- 
2.17.1



* [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn()
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (7 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 08/17] xen/x86: traps: Convert show_page_walk() " julien
@ 2020-03-22 16:14 ` julien
  2020-03-27 10:52   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 10/17] xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version julien
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

It is preferable to use the typesafe l*e_{from, get}_mfn() helpers.
Sadly, they can't be used everywhere easily, so for now only the simple
cases are replaced.
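
For reference, the typesafe accessors are thin wrappers over the
pfn-based ones; the usual shape, shown here for L1 as an illustration
(not quoted verbatim from the headers):

    #define l1e_get_mfn(x)           _mfn(l1e_get_pfn(x))
    #define l1e_from_mfn(mfn, flags) l1e_from_pfn(mfn_x(mfn), flags)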

No functional changes intended.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/machine_kexec.c |  2 +-
 xen/arch/x86/mm.c            | 30 +++++++++++++++---------------
 xen/arch/x86/setup.c         |  2 +-
 xen/include/asm-x86/page.h   |  2 +-
 4 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/machine_kexec.c b/xen/arch/x86/machine_kexec.c
index b70d5a6a86..b69c2e5fad 100644
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -86,7 +86,7 @@ int machine_kexec_add_page(struct kexec_image *image, unsigned long vaddr,
 
     l1 = __map_domain_page(l1_page);
     l1 += l1_table_offset(vaddr);
-    l1e_write(l1, l1e_from_pfn(maddr >> PAGE_SHIFT, __PAGE_HYPERVISOR));
+    l1e_write(l1, l1e_from_mfn(maddr_to_mfn(maddr), __PAGE_HYPERVISOR));
 
     ret = 0;
 out:
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 65bc03984d..2516548e49 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1138,7 +1138,7 @@ static int
 get_page_from_l2e(
     l2_pgentry_t l2e, mfn_t l2mfn, struct domain *d, unsigned int flags)
 {
-    unsigned long mfn = l2e_get_pfn(l2e);
+    mfn_t mfn = l2e_get_mfn(l2e);
     int rc;
 
     if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
@@ -1150,7 +1150,7 @@ get_page_from_l2e(
 
     ASSERT(!(flags & PTF_preemptible));
 
-    rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, flags);
+    rc = get_page_and_type_from_mfn(mfn, PGT_l1_page_table, d, flags);
     if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, l2mfn, d) )
         rc = 0;
 
@@ -1209,14 +1209,14 @@ static int _put_page_type(struct page_info *page, unsigned int flags,
 
 void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 {
-    unsigned long     pfn = l1e_get_pfn(l1e);
+    mfn_t mfn = l1e_get_mfn(l1e);
     struct page_info *page;
     struct domain    *pg_owner;
 
-    if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || is_iomem_page(_mfn(pfn)) )
+    if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || is_iomem_page(mfn) )
         return;
 
-    page = mfn_to_page(_mfn(pfn));
+    page = mfn_to_page(mfn);
     pg_owner = page_get_owner(page);
 
     /*
@@ -5219,8 +5219,8 @@ int map_pages_to_xen(
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
-                          l2e_from_pfn(l3e_get_pfn(ol3e) +
-                                       (i << PAGETABLE_ORDER),
+                          l2e_from_mfn(mfn_add(l3e_get_mfn(ol3e),
+                                               (i << PAGETABLE_ORDER)),
                                        l3e_get_flags(ol3e)));
 
             if ( l3e_get_flags(ol3e) & _PAGE_GLOBAL )
@@ -5320,7 +5320,7 @@ int map_pages_to_xen(
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
-                              l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
+                              l1e_from_mfn(mfn_add(l2e_get_mfn(*pl2e), i),
                                            lNf_to_l1f(l2e_get_flags(*pl2e))));
 
                 if ( l2e_get_flags(*pl2e) & _PAGE_GLOBAL )
@@ -5391,7 +5391,7 @@ int map_pages_to_xen(
                 l1t = l2e_to_l1e(ol2e);
                 base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
+                    if ( !mfn_eq(l1e_get_mfn(l1t[i]), _mfn(base_mfn + i)) ||
                          (l1e_get_flags(l1t[i]) != flags) )
                         break;
                 if ( i == L1_PAGETABLE_ENTRIES )
@@ -5521,7 +5521,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             {
                 /* PAGE1GB: whole superpage is modified. */
                 l3_pgentry_t nl3e = !(nf & _PAGE_PRESENT) ? l3e_empty()
-                    : l3e_from_pfn(l3e_get_pfn(*pl3e),
+                    : l3e_from_mfn(l3e_get_mfn(*pl3e),
                                    (l3e_get_flags(*pl3e) & ~FLAGS_MASK) | nf);
 
                 l3e_write_atomic(pl3e, nl3e);
@@ -5535,8 +5535,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 return -ENOMEM;
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
-                          l2e_from_pfn(l3e_get_pfn(*pl3e) +
-                                       (i << PAGETABLE_ORDER),
+                          l2e_from_mfn(mfn_add(l3e_get_mfn(*pl3e),
+                                               (i << PAGETABLE_ORDER)),
                                        l3e_get_flags(*pl3e)));
             if ( locking )
                 spin_lock(&map_pgdir_lock);
@@ -5576,7 +5576,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             {
                 /* PSE: whole superpage is modified. */
                 l2_pgentry_t nl2e = !(nf & _PAGE_PRESENT) ? l2e_empty()
-                    : l2e_from_pfn(l2e_get_pfn(*pl2e),
+                    : l2e_from_mfn(l2e_get_mfn(*pl2e),
                                    (l2e_get_flags(*pl2e) & ~FLAGS_MASK) | nf);
 
                 l2e_write_atomic(pl2e, nl2e);
@@ -5592,7 +5592,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                     return -ENOMEM;
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
-                              l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
+                              l1e_from_mfn(mfn_add(l2e_get_mfn(*pl2e), i),
                                            l2e_get_flags(*pl2e) & ~_PAGE_PSE));
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
@@ -5625,7 +5625,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 ASSERT(!(nf & _PAGE_PRESENT));
 
             nl1e = !(nf & _PAGE_PRESENT) ? l1e_empty()
-                : l1e_from_pfn(l1e_get_pfn(*pl1e),
+                : l1e_from_mfn(l1e_get_mfn(*pl1e),
                                (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf);
 
             l1e_write_atomic(pl1e, nl1e);
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index cfe95c5dac..4d1d38dae3 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1147,7 +1147,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
             BUG_ON(using_2M_mapping() &&
                    l2_table_offset((unsigned long)_erodata) ==
                    l2_table_offset((unsigned long)_stext));
-            *pl2e++ = l2e_from_pfn(xen_phys_start >> PAGE_SHIFT,
+            *pl2e++ = l2e_from_mfn(maddr_to_mfn(xen_phys_start),
                                    PAGE_HYPERVISOR_RX | _PAGE_PSE);
             for ( i = 1; i < L2_PAGETABLE_ENTRIES; i++, pl2e++ )
             {
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 377ba14f6e..8d581cd1e7 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -270,7 +270,7 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
+#define vmap_to_mfn(va)     l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
-- 
2.17.1



* [Xen-devel] [PATCH 10/17] xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (8 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn() julien
@ 2020-03-22 16:14 ` julien
  2020-03-27 11:34   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 11/17] xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga() julien
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

_mfn(addr >> PAGE_SHIFT) is equivalent to maddr_to_mfn(addr).
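
In other words, the change below is a straight substitution (shown as an
illustration, using the same names as the hunks):

    gl1mfn = _mfn(addr >> PAGE_SHIFT);   /* before: open-coded shift */
    gl1mfn = maddr_to_mfn(addr);         /* after: typesafe helper   */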

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/pv/grant_table.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c
index 0325618c98..f80e233621 100644
--- a/xen/arch/x86/pv/grant_table.c
+++ b/xen/arch/x86/pv/grant_table.c
@@ -72,7 +72,7 @@ int create_grant_pv_mapping(uint64_t addr, mfn_t frame,
             goto out;
         }
 
-        gl1mfn = _mfn(addr >> PAGE_SHIFT);
+        gl1mfn = maddr_to_mfn(addr);
 
         page = get_page_from_mfn(gl1mfn, currd);
         if ( !page )
@@ -228,7 +228,7 @@ int replace_grant_pv_mapping(uint64_t addr, mfn_t frame,
             goto out;
         }
 
-        gl1mfn = _mfn(addr >> PAGE_SHIFT);
+        gl1mfn = maddr_to_mfn(addr);
 
         page = get_page_from_mfn(gl1mfn, currd);
         if ( !page )
-- 
2.17.1



* [Xen-devel] [PATCH 11/17] xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga()
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (9 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 10/17] xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version julien
@ 2020-03-22 16:14 ` julien
  2020-03-27 11:35   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 12/17] xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m() julien
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, George Dunlap,
	Jan Beulich, Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/mm/hap/nested_ept.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 1cb7fefc37..7bae71cc47 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -255,7 +255,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
         }
         else
         {
-            gdprintk(XENLOG_ERR, "Uncorrect l1 entry!\n");
+            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
             BUG();
         }
         if ( nept_permission_check(rwx_acc, rwx_bits) )
-- 
2.17.1



* [Xen-devel] [PATCH 12/17] xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m()
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (10 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 11/17] xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga() julien
@ 2020-03-22 16:14 ` julien
  2020-03-27 11:35   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 13/17] xen/x86: p2m: Reflow P2M_PRINTK()s " julien
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, George Dunlap, Julien Grall,
	Jan Beulich, Roger Pau Monné

From: Julien Grall <julien.grall@arm.com>

p2m_pt_audit_p2m() has one place where the same message may be printed
twice via printk and P2M_PRINTK.

Remove the one printed using printk to stay consistent with the rest of
the code.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---

    This was originally sent as part of "xen/arm: Properly disable M2P
    on Arm" [1].

    Changes since the original version:
        - Move the reflow in a separate patch.

    [1] <20190603160350.29806-1-julien.grall@arm.com>
---
 xen/arch/x86/mm/p2m-pt.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index cccb06c26e..77450a9484 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -1061,8 +1061,6 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
                              !p2m_is_shared(type) )
                         {
                             pmbad++;
-                            printk("mismatch: gfn %#lx -> mfn %#lx"
-                                   " -> gfn %#lx\n", gfn, mfn, m2pfn);
                             P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx"
                                        " -> gfn %#lx\n", gfn, mfn, m2pfn);
                             BUG();
-- 
2.17.1



* [Xen-devel] [PATCH 13/17] xen/x86: p2m: Reflow P2M_PRINTK()s in p2m_pt_audit_p2m()
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (11 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 12/17] xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m() julien
@ 2020-03-22 16:14 ` julien
  2020-03-27 11:36   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 14/17] xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline function julien
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, George Dunlap,
	Jan Beulich, Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

We tend to avoid splitting a message across multiple lines, so that it is
easier to grep for.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/mm/p2m-pt.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 77450a9484..e9da34d668 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -994,9 +994,8 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
                         if ( m2pfn != (gfn + i2) )
                         {
                             pmbad++;
-                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx"
-                                       " -> gfn %#lx\n", gfn+i2, mfn+i2,
-                                       m2pfn);
+                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
+                                       gfn + i2, mfn + i2, m2pfn);
                             BUG();
                         }
                         gfn += 1 << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
@@ -1029,9 +1028,8 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
                             if ( (m2pfn != (gfn + i1)) && !SHARED_M2P(m2pfn) )
                             {
                                 pmbad++;
-                                P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx"
-                                           " -> gfn %#lx\n", gfn+i1, mfn+i1,
-                                           m2pfn);
+                                P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
+                                           gfn + i1, mfn + i1, m2pfn);
                                 BUG();
                             }
                         }
@@ -1061,8 +1059,8 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
                              !p2m_is_shared(type) )
                         {
                             pmbad++;
-                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx"
-                                       " -> gfn %#lx\n", gfn, mfn, m2pfn);
+                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
+                                       gfn, mfn, m2pfn);
                             BUG();
                         }
                     }
-- 
2.17.1



* [Xen-devel] [PATCH 14/17] xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline function
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (12 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 13/17] xen/x86: p2m: Reflow P2M_PRINTK()s " julien
@ 2020-03-22 16:14 ` julien
  2020-03-27 12:44   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 15/17] xen/x86: p2m: Rework printk format in audit_p2m() julien
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
	Roger Pau Monné

From: Julien Grall <julien.grall@arm.com>

set_gpfn_from_mfn() is currently implemented as two macros. The second
macro is only called from within the first, so they can be folded
together.

Furthermore, the result is converted to a static inline function, making
the code more readable and safer.
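
One concrete example of "safer" (illustrative only, not taken from the
tree): the old macro expanded its first argument several times, so an
argument with side effects would be evaluated repeatedly, while the
static inline evaluates and type-checks it exactly once:

    /* With the old macro, 'mfn++' was expanded (and thus incremented)
     * more than once per call; with the static inline it is evaluated
     * exactly once. */
    set_gpfn_from_mfn(mfn++, pfn);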

Signed-off-by: Julien Grall <julien.grall@arm.com>

---

    This was originally sent as part of "xen/arm: Properly disable M2P
    on Arm" [1].

    Changes since the original version:
        - Remove the paragraph in the comment about dom_* as we don't
          need to move them anymore.
        - Constify 'd' as it is never modified within the function

    [1] <20190603160350.29806-1-julien.grall@arm.com>
---
 xen/include/asm-x86/mm.h | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 83058fb8d1..53f2ed7c7d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -493,24 +493,25 @@ extern paddr_t mem_hotplug;
 #define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
 
 #define compat_machine_to_phys_mapping ((unsigned int *)RDWR_COMPAT_MPT_VIRT_START)
-#define _set_gpfn_from_mfn(mfn, pfn) ({                        \
-    struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn))); \
-    unsigned long entry = (d && (d == dom_cow)) ?              \
-        SHARED_M2P_ENTRY : (pfn);                              \
-    ((void)((mfn) >= (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 || \
-            (compat_machine_to_phys_mapping[(mfn)] = (unsigned int)(entry))), \
-     machine_to_phys_mapping[(mfn)] = (entry));                \
-    })
 
 /*
  * Disable some users of set_gpfn_from_mfn() (e.g., free_heap_pages()) until
  * the machine_to_phys_mapping is actually set up.
  */
 extern bool machine_to_phys_mapping_valid;
-#define set_gpfn_from_mfn(mfn, pfn) do {        \
-    if ( machine_to_phys_mapping_valid )        \
-        _set_gpfn_from_mfn(mfn, pfn);           \
-} while (0)
+
+static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
+{
+    const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn)));
+    unsigned long entry = (d && (d == dom_cow)) ? SHARED_M2P_ENTRY : pfn;
+
+    if ( !machine_to_phys_mapping_valid )
+        return;
+
+    if ( mfn < (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 )
+        compat_machine_to_phys_mapping[mfn] = entry;
+    machine_to_phys_mapping[mfn] = entry;
+}
 
 extern struct rangeset *mmio_ro_ranges;
 
-- 
2.17.1



* [Xen-devel] [PATCH 15/17] xen/x86: p2m: Rework printk format in audit_p2m()
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (13 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 14/17] xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline function julien
@ 2020-03-22 16:14 ` julien
  2020-03-27 12:45   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN julien
  2020-03-22 16:14 ` [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn julien
  16 siblings, 1 reply; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Wei Liu, Andrew Cooper, George Dunlap, Julien Grall,
	Jan Beulich, Roger Pau Monné

From: Julien Grall <julien.grall@arm.com>

One of the printk formats in audit_p2m() may be difficult to read, as it
is not clear what the first number refers to.

Furthermore, the format can now take advantage of %pd.
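
Purely as an illustration (hypothetical values, and assuming %pd renders
a domain as d<domid>), the message changes roughly from:

    wrong owner 0x12345 -> ffff83003ffe5000(1) != ffff83003fff2000(2)

to:

    mfn 12345 owner d1 != d2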

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    This was originally sent as part of "xen/arm: Properly disable M2P
    on Arm" [1].

    [1] <20190603160350.29806-1-julien.grall@arm.com>
---
 xen/arch/x86/mm/p2m.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 45b4b784d3..b6b01a71c8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2851,8 +2851,7 @@ void audit_p2m(struct domain *d,
 
         if ( od != d )
         {
-            P2M_PRINTK("wrong owner %#lx -> %p(%u) != %p(%u)\n",
-                       mfn, od, (od?od->domain_id:-1), d, d->domain_id);
+            P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn, od, d);
             continue;
         }
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (14 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 15/17] xen/x86: p2m: Rework printk format in audit_p2m() julien
@ 2020-03-22 16:14 ` julien
  2020-03-23 12:11   ` Hongyan Xia
  2020-03-27 13:15   ` Jan Beulich
  2020-03-22 16:14 ` [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn julien
  16 siblings, 2 replies; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Volodymyr Babchuk, Roger Pau Monné

From: Julien Grall <julien.grall@arm.com>

The first parameter of {s,g}et_gpfn_from_mfn() is an MFN, so it can be
switched to use the typesafe mfn_t.

At the same time, replace gpfn with pfn in the helpers' names, as they
all deal with PFNs, and turn the macros into static inline functions.

Note that the return value of the getter and the second parameter of the
setter have not been converted to a typesafe PFN, because doing so would
require more changes than expected.
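
As a sketch of the resulting call-site pattern (the real changes are in
the hunks below), callers now pass an mfn_t directly instead of
unwrapping it first:

    /* Illustrative only; mirrors the pattern used in the hunks below. */
    set_pfn_from_mfn(page_to_mfn(page), INVALID_M2P_ENTRY);
    pfn = get_pfn_from_mfn(mfn);      /* was get_gpfn_from_mfn(mfn_x(mfn)) */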

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    This was originally sent as part of "xen/arm: Properly disable M2P
    on Arm" [1].

    Changes since the original version:
        - mfn_to_gmfn() is still present for now so update it
        - Remove stray +
        - Avoid churn in set_pfn_from_mfn() by inverting mfn and mfn_
        - Remove tags
        - Fix build in mem_sharing

    [1] <20190603160350.29806-1-julien.grall@arm.com>
---
 xen/arch/x86/cpu/mcheck/mcaction.c |  2 +-
 xen/arch/x86/mm.c                  | 14 +++----
 xen/arch/x86/mm/mem_sharing.c      | 20 ++++-----
 xen/arch/x86/mm/p2m-pod.c          |  4 +-
 xen/arch/x86/mm/p2m-pt.c           | 35 ++++++++--------
 xen/arch/x86/mm/p2m.c              | 66 +++++++++++++++---------------
 xen/arch/x86/mm/paging.c           |  4 +-
 xen/arch/x86/pv/dom0_build.c       |  6 +--
 xen/arch/x86/x86_64/traps.c        |  8 ++--
 xen/common/page_alloc.c            |  2 +-
 xen/include/asm-arm/mm.h           |  2 +-
 xen/include/asm-x86/grant_table.h  |  2 +-
 xen/include/asm-x86/mm.h           | 12 ++++--
 xen/include/asm-x86/p2m.h          |  2 +-
 14 files changed, 93 insertions(+), 86 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mcaction.c b/xen/arch/x86/cpu/mcheck/mcaction.c
index 69332fb84d..5e78fb7703 100644
--- a/xen/arch/x86/cpu/mcheck/mcaction.c
+++ b/xen/arch/x86/cpu/mcheck/mcaction.c
@@ -89,7 +89,7 @@ mc_memerr_dhandler(struct mca_binfo *binfo,
             {
                 d = get_domain_by_id(bank->mc_domid);
                 ASSERT(d);
-                gfn = get_gpfn_from_mfn((bank->mc_addr) >> PAGE_SHIFT);
+                gfn = get_pfn_from_mfn(maddr_to_mfn(bank->mc_addr));
 
                 if ( unmmap_broken_page(d, mfn, gfn) )
                 {
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2516548e49..2feb7a5993 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -476,7 +476,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     if ( page_get_owner(page) == d )
         return;
 
-    set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY);
+    set_pfn_from_mfn(page_to_mfn(page), INVALID_M2P_ENTRY);
 
     spin_lock(&d->page_alloc_lock);
 
@@ -1040,7 +1040,7 @@ get_page_from_l1e(
 
             gdprintk(XENLOG_WARNING, "Error updating mappings for mfn %" PRI_mfn
                      " (pfn %" PRI_pfn ", from L1 entry %" PRIpte ") for d%d\n",
-                     mfn, get_gpfn_from_mfn(mfn),
+                     mfn, get_pfn_from_mfn(_mfn(mfn)),
                      l1e_get_intpte(l1e), l1e_owner->domain_id);
             return err;
         }
@@ -1051,7 +1051,7 @@ get_page_from_l1e(
  could_not_pin:
     gdprintk(XENLOG_WARNING, "Error getting mfn %" PRI_mfn " (pfn %" PRI_pfn
              ") from L1 entry %" PRIpte " for l1e_owner d%d, pg_owner d%d\n",
-             mfn, get_gpfn_from_mfn(mfn),
+             mfn, get_pfn_from_mfn(_mfn(mfn)),
              l1e_get_intpte(l1e), l1e_owner->domain_id, pg_owner->domain_id);
     if ( real_pg_owner != NULL )
         put_page(page);
@@ -2636,7 +2636,7 @@ static int validate_page(struct page_info *page, unsigned long type,
                  " (pfn %" PRI_pfn ") for type %" PRtype_info
                  ": caf=%08lx taf=%" PRtype_info "\n",
                  mfn_x(page_to_mfn(page)),
-                 get_gpfn_from_mfn(mfn_x(page_to_mfn(page))),
+                 get_pfn_from_mfn(page_to_mfn(page)),
                  type, page->count_info, page->u.inuse.type_info);
         if ( page != current->arch.old_guest_table )
             page->u.inuse.type_info = 0;
@@ -2946,7 +2946,7 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                      "Bad type (saw %" PRtype_info " != exp %" PRtype_info ") "
                      "for mfn %" PRI_mfn " (pfn %" PRI_pfn ")\n",
                      x, type, mfn_x(page_to_mfn(page)),
-                     get_gpfn_from_mfn(mfn_x(page_to_mfn(page))));
+                     get_pfn_from_mfn(page_to_mfn(page)));
             return -EINVAL;
         }
         else if ( unlikely(!(x & PGT_validated)) )
@@ -4106,7 +4106,7 @@ long do_mmu_update(
                 break;
             }
 
-            set_gpfn_from_mfn(mfn_x(mfn), gpfn);
+            set_pfn_from_mfn(mfn, gpfn);
             paging_mark_pfn_dirty(pg_owner, _pfn(gpfn));
 
             put_page(page);
@@ -4590,7 +4590,7 @@ int xenmem_add_to_physmap_one(
         goto put_both;
 
     /* Unmap from old location, if any. */
-    old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
+    old_gpfn = get_pfn_from_mfn(mfn);
     ASSERT(!SHARED_M2P(old_gpfn));
     if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
     {
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3835bc928f..018beec10f 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -426,15 +426,15 @@ static void mem_sharing_gfn_destroy(struct page_info *page, struct domain *d,
     xfree(gfn_info);
 }
 
-static struct page_info *mem_sharing_lookup(unsigned long mfn)
+static struct page_info *mem_sharing_lookup(mfn_t mfn)
 {
     struct page_info *page;
     unsigned long t;
 
-    if ( !mfn_valid(_mfn(mfn)) )
+    if ( !mfn_valid(mfn) )
         return NULL;
 
-    page = mfn_to_page(_mfn(mfn));
+    page = mfn_to_page(mfn);
     if ( page_get_owner(page) != dom_cow )
         return NULL;
 
@@ -446,7 +446,7 @@ static struct page_info *mem_sharing_lookup(unsigned long mfn)
     t = read_atomic(&page->u.inuse.type_info);
     ASSERT((t & PGT_type_mask) == PGT_shared_page);
     ASSERT((t & PGT_count_mask) >= 2);
-    ASSERT(SHARED_M2P(get_gpfn_from_mfn(mfn)));
+    ASSERT(SHARED_M2P(get_pfn_from_mfn(mfn)));
 
     return page;
 }
@@ -505,10 +505,10 @@ static int audit(void)
         }
 
         /* Check the m2p entry */
-        if ( !SHARED_M2P(get_gpfn_from_mfn(mfn_x(mfn))) )
+        if ( !SHARED_M2P(get_pfn_from_mfn(mfn)) )
         {
-            gdprintk(XENLOG_ERR, "mfn %lx shared, but wrong m2p entry (%lx)!\n",
-                     mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
+            gdprintk(XENLOG_ERR, "mfn %"PRI_mfn" shared, but wrong m2p entry (%lx)!\n",
+                     mfn_x(mfn), get_pfn_from_mfn(mfn));
             errors++;
         }
 
@@ -736,7 +736,7 @@ static struct page_info *__grab_shared_page(mfn_t mfn)
     if ( !mem_sharing_page_lock(pg) )
         return NULL;
 
-    if ( mem_sharing_lookup(mfn_x(mfn)) == NULL )
+    if ( mem_sharing_lookup(mfn) == NULL )
     {
         mem_sharing_page_unlock(pg);
         return NULL;
@@ -918,7 +918,7 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     atomic_inc(&nr_shared_mfns);
 
     /* Update m2p entry to SHARED_M2P_ENTRY */
-    set_gpfn_from_mfn(mfn_x(mfn), SHARED_M2P_ENTRY);
+    set_pfn_from_mfn(mfn, SHARED_M2P_ENTRY);
 
     *phandle = page->sharing->handle;
     audit_add_list(page);
@@ -1306,7 +1306,7 @@ int __mem_sharing_unshare_page(struct domain *d,
     }
 
     /* Update m2p entry */
-    set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), gfn);
+    set_pfn_from_mfn(page_to_mfn(page), gfn);
 
     /*
      * Now that the gfn<->mfn map is properly established,
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 2a7b8c117b..a9ac44a65c 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -644,7 +644,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
             }
             p2m_tlb_flush_sync(p2m);
             for ( j = 0; j < n; ++j )
-                set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
+                set_pfn_from_mfn(mfn, INVALID_M2P_ENTRY);
             p2m_pod_cache_add(p2m, page, cur_order);
 
             steal_for_cache =  ( p2m->pod.entry_count > p2m->pod.count );
@@ -1194,7 +1194,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
 
     for( i = 0; i < (1UL << order); i++ )
     {
-        set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i);
+        set_pfn_from_mfn(mfn_add(mfn, i), gfn_x(gfn_aligned) + i);
         paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn_aligned) + i));
     }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index e9da34d668..1601e9e5e9 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -944,7 +944,8 @@ static int p2m_pt_change_entry_type_range(struct p2m_domain *p2m,
 long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 {
     unsigned long entry_count = 0, pmbad = 0;
-    unsigned long mfn, gfn, m2pfn;
+    unsigned long gfn, m2pfn;
+    mfn_t mfn;
 
     ASSERT(p2m_locked_by_me(p2m));
     ASSERT(pod_locked_by_me(p2m));
@@ -983,19 +984,20 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
                 /* check for 1GB super page */
                 if ( l3e_get_flags(l3e[i3]) & _PAGE_PSE )
                 {
-                    mfn = l3e_get_pfn(l3e[i3]);
-                    ASSERT(mfn_valid(_mfn(mfn)));
+                    mfn = l3e_get_mfn(l3e[i3]);
+                    ASSERT(mfn_valid(mfn));
                     /* we have to cover 512x512 4K pages */
                     for ( i2 = 0; 
                           i2 < (L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES);
                           i2++)
                     {
-                        m2pfn = get_gpfn_from_mfn(mfn+i2);
+                        m2pfn = get_pfn_from_mfn(mfn_add(mfn, i2));
                         if ( m2pfn != (gfn + i2) )
                         {
                             pmbad++;
-                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
-                                       gfn + i2, mfn + i2, m2pfn);
+                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" -> gfn %#lx\n",
+                                       gfn + i2, mfn_x(mfn_add(mfn, i2)),
+                                       m2pfn);
                             BUG();
                         }
                         gfn += 1 << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
@@ -1019,17 +1021,18 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
                     /* check for super page */
                     if ( l2e_get_flags(l2e[i2]) & _PAGE_PSE )
                     {
-                        mfn = l2e_get_pfn(l2e[i2]);
-                        ASSERT(mfn_valid(_mfn(mfn)));
+                        mfn = l2e_get_mfn(l2e[i2]);
+                        ASSERT(mfn_valid(mfn));
                         for ( i1 = 0; i1 < L1_PAGETABLE_ENTRIES; i1++)
                         {
-                            m2pfn = get_gpfn_from_mfn(mfn+i1);
+                            m2pfn = get_pfn_from_mfn(mfn_add(mfn, i1));
                             /* Allow shared M2Ps */
                             if ( (m2pfn != (gfn + i1)) && !SHARED_M2P(m2pfn) )
                             {
                                 pmbad++;
-                                P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
-                                           gfn + i1, mfn + i1, m2pfn);
+                                P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" -> gfn %#lx\n",
+                                           gfn + i1, mfn_x(mfn_add(mfn, i1)),
+                                           m2pfn);
                                 BUG();
                             }
                         }
@@ -1050,17 +1053,17 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
                                 entry_count++;
                             continue;
                         }
-                        mfn = l1e_get_pfn(l1e[i1]);
-                        ASSERT(mfn_valid(_mfn(mfn)));
-                        m2pfn = get_gpfn_from_mfn(mfn);
+                        mfn = l1e_get_mfn(l1e[i1]);
+                        ASSERT(mfn_valid(mfn));
+                        m2pfn = get_pfn_from_mfn(mfn);
                         if ( m2pfn != gfn &&
                              type != p2m_mmio_direct &&
                              !p2m_is_grant(type) &&
                              !p2m_is_shared(type) )
                         {
                             pmbad++;
-                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
-                                       gfn, mfn, m2pfn);
+                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" -> gfn %#lx\n",
+                                       gfn, mfn_x(mfn), m2pfn);
                             BUG();
                         }
                     }
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b6b01a71c8..587c062481 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -769,7 +769,7 @@ void p2m_final_teardown(struct domain *d)
 
 
 static int
-p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
+p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, mfn_t mfn,
                 unsigned int page_order)
 {
     unsigned long i;
@@ -783,17 +783,17 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
         return 0;
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
-    P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_l, mfn);
+    P2M_DEBUG("removing gfn=%#lx mfn=%"PRI_mfn"\n", gfn_l, mfn_x(mfn));
 
-    if ( mfn_valid(_mfn(mfn)) )
+    if ( mfn_valid(mfn) )
     {
         for ( i = 0; i < (1UL << page_order); i++ )
         {
             mfn_return = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0,
                                         NULL, NULL);
             if ( !p2m_is_grant(t) && !p2m_is_shared(t) && !p2m_is_foreign(t) )
-                set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
-            ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
+                set_pfn_from_mfn(mfn_add(mfn, i), INVALID_M2P_ENTRY);
+            ASSERT( !p2m_is_valid(t) || mfn_eq(mfn_add(mfn, i), mfn_return) );
         }
     }
     return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
@@ -807,7 +807,7 @@ guest_physmap_remove_page(struct domain *d, gfn_t gfn,
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
     gfn_lock(p2m, gfn, page_order);
-    rc = p2m_remove_page(p2m, gfn_x(gfn), mfn_x(mfn), page_order);
+    rc = p2m_remove_page(p2m, gfn_x(gfn), mfn, page_order);
     gfn_unlock(p2m, gfn, page_order);
     return rc;
 }
@@ -842,7 +842,7 @@ guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
             else
                 return -EINVAL;
 
-            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
+            set_pfn_from_mfn(mfn_add(mfn, i), gfn_x(gfn) + i);
         }
 
         return 0;
@@ -930,7 +930,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
         else if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
         {
             ASSERT(mfn_valid(omfn));
-            set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
+            set_pfn_from_mfn(omfn, INVALID_M2P_ENTRY);
         }
         else if ( ot == p2m_populate_on_demand )
         {
@@ -974,7 +974,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                 P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
                           gfn_x(ogfn) , mfn_x(omfn));
                 if ( mfn_eq(omfn, mfn_add(mfn, i)) )
-                    p2m_remove_page(p2m, gfn_x(ogfn), mfn_x(mfn_add(mfn, i)),
+                    p2m_remove_page(p2m, gfn_x(ogfn), mfn_add(mfn, i),
                                     0);
             }
         }
@@ -992,8 +992,8 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
-                set_gpfn_from_mfn(mfn_x(mfn_add(mfn, i)),
-                                  gfn_x(gfn_add(gfn, i)));
+                set_pfn_from_mfn(mfn_add(mfn, i),
+                                 gfn_x(gfn_add(gfn, i)));
         }
     }
 
@@ -1279,7 +1279,7 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
         for ( i = 0; i < (1UL << order); ++i )
         {
             ASSERT(mfn_valid(mfn_add(omfn, i)));
-            set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
+            set_pfn_from_mfn(mfn_add(omfn, i), INVALID_M2P_ENTRY);
         }
     }
 
@@ -1475,7 +1475,7 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn)
     pg_type = read_atomic(&(mfn_to_page(omfn)->u.inuse.type_info));
     if ( (pg_type & PGT_count_mask) == 0
          || (pg_type & PGT_type_mask) != PGT_shared_page )
-        set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
+        set_pfn_from_mfn(omfn, INVALID_M2P_ENTRY);
 
     P2M_DEBUG("set shared %lx %lx\n", gfn_l, mfn_x(mfn));
     rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_shared,
@@ -1829,7 +1829,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer)
     ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
                         paging_mode_log_dirty(d) ? p2m_ram_logdirty
                                                  : p2m_ram_rw, a);
-    set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
+    set_pfn_from_mfn(mfn, gfn_l);
 
     if ( !page_extant )
         atomic_dec(&d->paged_pages);
@@ -1880,7 +1880,7 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
                                    p2m_ram_rw, a);
 
             if ( !rc )
-                set_gpfn_from_mfn(mfn_x(mfn), gfn_x(gfn));
+                set_pfn_from_mfn(mfn, gfn_x(gfn));
         }
         gfn_unlock(p2m, gfn, 0);
     }
@@ -2706,7 +2706,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     {
         mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL);
         if ( mfn_valid(mfn) )
-            p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
+            p2m_remove_page(ap2m, gfn_x(old_gfn), mfn, PAGE_ORDER_4K);
         rc = 0;
         goto out;
     }
@@ -2820,8 +2820,8 @@ void audit_p2m(struct domain *d,
 {
     struct page_info *page;
     struct domain *od;
-    unsigned long mfn, gfn;
-    mfn_t p2mfn;
+    unsigned long gfn;
+    mfn_t p2mfn, mfn;
     unsigned long orphans_count = 0, mpbad = 0, pmbad = 0;
     p2m_access_t p2ma;
     p2m_type_t type;
@@ -2843,53 +2843,53 @@ void audit_p2m(struct domain *d,
     spin_lock(&d->page_alloc_lock);
     page_list_for_each ( page, &d->page_list )
     {
-        mfn = mfn_x(page_to_mfn(page));
+        mfn = page_to_mfn(page);
 
-        P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn);
+        P2M_PRINTK("auditing guest page, mfn=%"PRI_mfn"\n", mfn_x(mfn));
 
         od = page_get_owner(page);
 
         if ( od != d )
         {
-            P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn, od, d);
+            P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn_x(mfn), od, d);
             continue;
         }
 
-        gfn = get_gpfn_from_mfn(mfn);
+        gfn = get_pfn_from_mfn(mfn);
         if ( gfn == INVALID_M2P_ENTRY )
         {
             orphans_count++;
-            P2M_PRINTK("orphaned guest page: mfn=%#lx has invalid gfn\n",
-                           mfn);
+            P2M_PRINTK("orphaned guest page: mfn=%"PRI_mfn" has invalid gfn\n",
+                       mfn_x(mfn));
             continue;
         }
 
         if ( SHARED_M2P(gfn) )
         {
-            P2M_PRINTK("shared mfn (%lx) on domain page list!\n",
-                    mfn);
+            P2M_PRINTK("shared mfn (%"PRI_mfn") on domain page list!\n",
+                       mfn_x(mfn));
             continue;
         }
 
         p2mfn = get_gfn_type_access(p2m, gfn, &type, &p2ma, 0, NULL);
-        if ( mfn_x(p2mfn) != mfn )
+        if ( !mfn_eq(p2mfn, mfn) )
         {
             mpbad++;
-            P2M_PRINTK("map mismatch mfn %#lx -> gfn %#lx -> mfn %#lx"
+            P2M_PRINTK("map mismatch mfn %"PRI_mfn" -> gfn %#lx -> mfn %"PRI_mfn""
                        " (-> gfn %#lx)\n",
-                       mfn, gfn, mfn_x(p2mfn),
+                       mfn_x(mfn), gfn, mfn_x(p2mfn),
                        (mfn_valid(p2mfn)
-                        ? get_gpfn_from_mfn(mfn_x(p2mfn))
+                        ? get_pfn_from_mfn(p2mfn)
                         : -1u));
             /* This m2p entry is stale: the domain has another frame in
              * this physical slot.  No great disaster, but for neatness,
              * blow away the m2p entry. */
-            set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);
+            set_pfn_from_mfn(mfn, INVALID_M2P_ENTRY);
         }
         __put_gfn(p2m, gfn);
 
-        P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx\n",
-                       mfn, gfn, mfn_x(p2mfn));
+        P2M_PRINTK("OK: mfn=%"PRI_mfn", gfn=%#lx, p2mfn=%"PRI_mfn"\n",
+                   mfn_x(mfn), gfn, mfn_x(p2mfn));
     }
     spin_unlock(&d->page_alloc_lock);
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 469bb76429..2f6df74135 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -344,7 +344,7 @@ void paging_mark_dirty(struct domain *d, mfn_t gmfn)
         return;
 
     /* We /really/ mean PFN here, even for non-translated guests. */
-    pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
+    pfn = _pfn(get_pfn_from_mfn(gmfn));
 
     paging_mark_pfn_dirty(d, pfn);
 }
@@ -362,7 +362,7 @@ int paging_mfn_is_dirty(struct domain *d, mfn_t gmfn)
     ASSERT(paging_mode_log_dirty(d));
 
     /* We /really/ mean PFN here, even for non-translated guests. */
-    pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
+    pfn = _pfn(get_pfn_from_mfn(gmfn));
     /* Invalid pages can't be dirty. */
     if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
         return 0;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 8abd5d255c..9f558b2932 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -39,7 +39,7 @@ void __init dom0_update_physmap(struct domain *d, unsigned long pfn,
     else
         ((unsigned int *)vphysmap_s)[pfn] = mfn;
 
-    set_gpfn_from_mfn(mfn, pfn);
+    set_pfn_from_mfn(_mfn(mfn), pfn);
 }
 
 static __init void mark_pv_pt_pages_rdonly(struct domain *d,
@@ -789,8 +789,8 @@ int __init dom0_construct_pv(struct domain *d,
     page_list_for_each ( page, &d->page_list )
     {
         mfn = mfn_x(page_to_mfn(page));
-        BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn)));
-        if ( get_gpfn_from_mfn(mfn) >= count )
+        BUG_ON(SHARED_M2P(get_pfn_from_mfn(_mfn(mfn))));
+        if ( get_pfn_from_mfn(_mfn(mfn)) >= count )
         {
             BUG_ON(is_pv_32bit_domain(d));
             if ( !page->u.inuse.type_info &&
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 811c2cb37b..bf5c2060e7 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -200,7 +200,7 @@ void show_page_walk(unsigned long addr)
     unmap_domain_page(l4t);
     mfn = l4e_get_mfn(l4e);
     pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
+          get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
     printk(" L4[0x%03lx] = %"PRIpte" %016lx\n",
            l4_table_offset(addr), l4e_get_intpte(l4e), pfn);
     if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) || !mfn_valid(mfn) )
@@ -211,7 +211,7 @@ void show_page_walk(unsigned long addr)
     unmap_domain_page(l3t);
     mfn = l3e_get_mfn(l3e);
     pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
+          get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
     printk(" L3[0x%03lx] = %"PRIpte" %016lx%s\n",
            l3_table_offset(addr), l3e_get_intpte(l3e), pfn,
            (l3e_get_flags(l3e) & _PAGE_PSE) ? " (PSE)" : "");
@@ -225,7 +225,7 @@ void show_page_walk(unsigned long addr)
     unmap_domain_page(l2t);
     mfn = l2e_get_mfn(l2e);
     pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
+          get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
     printk(" L2[0x%03lx] = %"PRIpte" %016lx%s\n",
            l2_table_offset(addr), l2e_get_intpte(l2e), pfn,
            (l2e_get_flags(l2e) & _PAGE_PSE) ? " (PSE)" : "");
@@ -239,7 +239,7 @@ void show_page_walk(unsigned long addr)
     unmap_domain_page(l1t);
     mfn = l1e_get_mfn(l1e);
     pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ?
-          get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY;
+          get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY;
     printk(" L1[0x%03lx] = %"PRIpte" %016lx\n",
            l1_table_offset(addr), l1e_get_intpte(l1e), pfn);
 }
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 41e4fa899d..239aac18dd 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1430,7 +1430,7 @@ static void free_heap_pages(
 
         /* This page is not a guest frame any more. */
         page_set_owner(&pg[i], NULL); /* set_gpfn_from_mfn snoops pg owner */
-        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
+        set_pfn_from_mfn(mfn_add(mfn, i), INVALID_M2P_ENTRY);
 
         if ( need_scrub )
         {
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index abf4cc23e4..11614f9107 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -319,7 +319,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 #define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
 
 /* Xen always owns P2M on ARM */
-#define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0)
+static inline void set_pfn_from_mfn(mfn_t mfn, unsigned long pfn) {}
 #define mfn_to_gmfn(_d, mfn)  (mfn)
 
 
diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h
index 5871238f6d..b6a09c4c6c 100644
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ -41,7 +41,7 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame,
 #define gnttab_get_frame_gfn(gt, st, idx) ({                             \
     mfn_t mfn_ = (st) ? gnttab_status_mfn(gt, idx)                       \
                       : gnttab_shared_mfn(gt, idx);                      \
-    unsigned long gpfn_ = get_gpfn_from_mfn(mfn_x(mfn_));                \
+    unsigned long gpfn_ = get_pfn_from_mfn(mfn_);                        \
     VALID_M2P(gpfn_) ? _gfn(gpfn_) : INVALID_GFN;                        \
 })
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 53f2ed7c7d..2a4f42e78f 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -500,9 +500,10 @@ extern paddr_t mem_hotplug;
  */
 extern bool machine_to_phys_mapping_valid;
 
-static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
+static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn)
 {
-    const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn)));
+    const unsigned long mfn = mfn_x(mfn_);
+    const struct domain *d = page_get_owner(mfn_to_page(mfn_));
     unsigned long entry = (d && (d == dom_cow)) ? SHARED_M2P_ENTRY : pfn;
 
     if ( !machine_to_phys_mapping_valid )
@@ -515,11 +516,14 @@ static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
 
 extern struct rangeset *mmio_ro_ranges;
 
-#define get_gpfn_from_mfn(mfn)      (machine_to_phys_mapping[(mfn)])
+static inline unsigned long get_pfn_from_mfn(mfn_t mfn)
+{
+    return machine_to_phys_mapping[mfn_x(mfn)];
+}
 
 #define mfn_to_gmfn(_d, mfn)                            \
     ( (paging_mode_translate(_d))                       \
-      ? get_gpfn_from_mfn(mfn)                          \
+      ? get_pfn_from_mfn(_mfn(mfn))                     \
       : (mfn) )
 
 #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index a2c6049834..39dae242b0 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -505,7 +505,7 @@ static inline struct page_info *get_page_from_gfn(
 static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
 {
     if ( paging_mode_translate(d) )
-        return _gfn(get_gpfn_from_mfn(mfn_x(mfn)));
+        return _gfn(get_pfn_from_mfn(mfn));
     else
         return _gfn(mfn_x(mfn));
 }
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
  2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
                   ` (15 preceding siblings ...)
  2020-03-22 16:14 ` [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN julien
@ 2020-03-22 16:14 ` julien
  2020-03-23  8:37   ` Paul Durrant
  2020-03-27 13:50   ` Jan Beulich
  16 siblings, 2 replies; 61+ messages in thread
From: julien @ 2020-03-22 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, julien, Jun Nakajima, Wei Liu,
	Paul Durrant, Andrew Cooper, Ian Jackson, George Dunlap,
	Tim Deegan, Julien Grall, Jan Beulich, Volodymyr Babchuk,
	Roger Pau Monné

From: Julien Grall <julien.grall@arm.com>

No functional change intended.

Only reasonable clean-ups are done in this patch; the remaining call
sites simply wrap the raw value with _gfn() for the time being, as
illustrated below.
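
As a purely illustrative example of that interim pattern (the real call
sites are in the hunks below):

    /* Illustrative only: wrap a raw frame number at the call site. */
    page = get_page_from_gfn(d, _gfn(gfn), NULL, P2M_ALLOC);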

Signed-off-by: Julien Grall <julien.grall@arm.com>

---

get_page_from_gfn() is currently using an unsafe pattern, as an MFN
should be validated via mfn_valid() before it is passed to mfn_to_page().

At Jan's request, that fix was dropped from this patch as it is
unrelated. If we want to fix it properly, it should be done in a separate
patch along with the modifications of all the other callers relying on
this bad behavior.
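
A minimal sketch of the safer ordering described above, assuming the
usual mfn_valid()/mfn_to_page()/get_page() helpers (this is not part of
this patch):

    /* Sketch only: validate the MFN before translating it to a page. */
    if ( !mfn_valid(mfn) )
        return NULL;
    page = mfn_to_page(mfn);
    if ( !get_page(page, d) )
        return NULL;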

    This was originally sent as part of "More typesafe conversion of common
    interface." [1].

    Changes since the original patch:
        - Use cr3_to_gfn()
        - Remove the re-ordering of mfn_valid() and mfn_to_page() (see
          above).

    [1] <20190819142651.11058-1-julien.grall@arm.com>
---
 xen/arch/arm/guestcopy.c             |  2 +-
 xen/arch/arm/mm.c                    |  2 +-
 xen/arch/x86/cpu/vpmu.c              |  2 +-
 xen/arch/x86/domctl.c                |  6 +++---
 xen/arch/x86/hvm/dm.c                |  2 +-
 xen/arch/x86/hvm/domain.c            |  6 ++++--
 xen/arch/x86/hvm/hvm.c               |  9 +++++----
 xen/arch/x86/hvm/svm/svm.c           |  8 ++++----
 xen/arch/x86/hvm/viridian/viridian.c | 16 ++++++++--------
 xen/arch/x86/hvm/vmx/vmx.c           |  4 ++--
 xen/arch/x86/hvm/vmx/vvmx.c          | 12 ++++++------
 xen/arch/x86/mm.c                    | 24 ++++++++++++++----------
 xen/arch/x86/mm/p2m.c                |  2 +-
 xen/arch/x86/mm/shadow/hvm.c         |  6 +++---
 xen/arch/x86/physdev.c               |  3 ++-
 xen/arch/x86/pv/descriptor-tables.c  |  4 ++--
 xen/arch/x86/pv/emul-priv-op.c       |  6 +++---
 xen/arch/x86/pv/mm.c                 |  2 +-
 xen/arch/x86/traps.c                 | 11 ++++++-----
 xen/common/domain.c                  |  2 +-
 xen/common/event_fifo.c              | 12 ++++++------
 xen/common/memory.c                  |  4 ++--
 xen/include/asm-arm/p2m.h            |  6 +++---
 xen/include/asm-x86/p2m.h            | 12 ++++++++----
 24 files changed, 88 insertions(+), 75 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 7a0f3e9d5f..55892062bb 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -37,7 +37,7 @@ static struct page_info *translate_get_page(copy_info_t info, uint64_t addr,
         return get_page_from_gva(info.gva.v, addr,
                                  write ? GV2M_WRITE : GV2M_READ);
 
-    page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
+    page = get_page_from_gfn(info.gpa.d, gaddr_to_gfn(addr), &p2mt, P2M_ALLOC);
 
     if ( !page )
         return NULL;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 1075e5fcaf..d0ad06add4 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1446,7 +1446,7 @@ int xenmem_add_to_physmap_one(
 
         /* Take reference to the foreign domain page.
          * Reference will be released in XENMEM_remove_from_physmap */
-        page = get_page_from_gfn(od, idx, &p2mt, P2M_ALLOC);
+        page = get_page_from_gfn(od, _gfn(idx), &p2mt, P2M_ALLOC);
         if ( !page )
         {
             put_pg_owner(od);
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index e50d478d23..9777efa4fb 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -617,7 +617,7 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
     struct vcpu *v;
     struct vpmu_struct *vpmu;
     struct page_info *page;
-    uint64_t gfn = params->val;
+    gfn_t gfn = _gfn(params->val);
 
     if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
         return -EINVAL;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 02596c3810..8f5010fd58 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -391,7 +391,7 @@ long arch_do_domctl(
                 break;
             }
 
-            page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
+            page = get_page_from_gfn(d, _gfn(gfn), &t, P2M_ALLOC);
 
             if ( unlikely(!page) ||
                  unlikely(is_xen_heap_page(page)) )
@@ -461,11 +461,11 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_hypercall_init:
     {
-        unsigned long gmfn = domctl->u.hypercall_init.gmfn;
+        gfn_t gfn = _gfn(domctl->u.hypercall_init.gmfn);
         struct page_info *page;
         void *hypercall_page;
 
-        page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+        page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
 
         if ( !page || !get_page_type(page, PGT_writable_page) )
         {
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 96c5042b75..a09622007c 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -188,7 +188,7 @@ static int modified_memory(struct domain *d,
         {
             struct page_info *page;
 
-            page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
+            page = get_page_from_gfn(d, _gfn(pfn), NULL, P2M_UNSHARE);
             if ( page )
             {
                 paging_mark_pfn_dirty(d, _pfn(pfn));
diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
index 5d5a746a25..3c29ff86be 100644
--- a/xen/arch/x86/hvm/domain.c
+++ b/xen/arch/x86/hvm/domain.c
@@ -296,8 +296,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
     if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
     {
         /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
-        struct page_info *page = get_page_from_gfn(v->domain,
-                                 v->arch.hvm.guest_cr[3] >> PAGE_SHIFT,
+        struct page_info *page;
+
+        page = get_page_from_gfn(v->domain,
+                                 gaddr_to_gfn(v->arch.hvm.guest_cr[3]),
                                  NULL, P2M_ALLOC);
         if ( !page )
         {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a3d115b650..9f720e7aa1 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2216,7 +2216,7 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    unsigned long gfn, old_value = v->arch.hvm.guest_cr[0];
+    unsigned long old_value = v->arch.hvm.guest_cr[0];
     struct page_info *page;
 
     HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR0 value = %lx", value);
@@ -2271,7 +2271,8 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
         if ( !paging_mode_hap(d) )
         {
             /* The guest CR3 must be pointing to the guest physical. */
-            gfn = v->arch.hvm.guest_cr[3] >> PAGE_SHIFT;
+            gfn_t gfn = gaddr_to_gfn(v->arch.hvm.guest_cr[3]);
+
             page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
             if ( !page )
             {
@@ -2363,7 +2364,7 @@ int hvm_set_cr3(unsigned long value, bool may_defer)
     {
         /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
         HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR3 value = %lx", value);
-        page = get_page_from_gfn(v->domain, value >> PAGE_SHIFT,
+        page = get_page_from_gfn(v->domain, cr3_to_gfn(value),
                                  NULL, P2M_ALLOC);
         if ( !page )
             goto bad_cr3;
@@ -3191,7 +3192,7 @@ enum hvm_translation_result hvm_translate_get_page(
          && hvm_mmio_internal(gfn_to_gaddr(gfn)) )
         return HVMTRANS_bad_gfn_to_mfn;
 
-    page = get_page_from_gfn(v->domain, gfn_x(gfn), &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain, gfn, &p2mt, P2M_UNSHARE);
 
     if ( !page )
         return HVMTRANS_bad_gfn_to_mfn;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 32d8d847f2..a9abd6d3f1 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -299,7 +299,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
     {
         if ( c->cr0 & X86_CR0_PG )
         {
-            page = get_page_from_gfn(v->domain, c->cr3 >> PAGE_SHIFT,
+            page = get_page_from_gfn(v->domain, cr3_to_gfn(c->cr3),
                                      NULL, P2M_ALLOC);
             if ( !page )
             {
@@ -2230,9 +2230,9 @@ nsvm_get_nvmcb_page(struct vcpu *v, uint64_t vmcbaddr)
         return NULL;
 
     /* Need to translate L1-GPA to MPA */
-    page = get_page_from_gfn(v->domain, 
-                            nv->nv_vvmcxaddr >> PAGE_SHIFT, 
-                            &p2mt, P2M_ALLOC | P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain,
+                             gaddr_to_gfn(nv->nv_vvmcxaddr),
+                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
     if ( !page )
         return NULL;
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 977c1bc54f..3d75a0f133 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -242,16 +242,16 @@ static void dump_hypercall(const struct domain *d)
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.pfn;
-    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    gfn_t gfn = _gfn(d->arch.hvm.viridian->hypercall_gpa.pfn);
+    struct page_info *page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
     if ( !page || !get_page_type(page, PGT_writable_page) )
     {
         if ( page )
             put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
+        gdprintk(XENLOG_WARNING, "Bad GFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
+                 gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
         return;
     }
 
@@ -719,13 +719,13 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
 
 void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
 {
-    unsigned long gmfn = vp->msr.pfn;
+    gfn_t gfn = _gfn(vp->msr.pfn);
     struct page_info *page;
 
     if ( vp->ptr )
         return;
 
-    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
     if ( !page )
         goto fail;
 
@@ -746,8 +746,8 @@ void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
     return;
 
  fail:
-    gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-             gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
+    gdprintk(XENLOG_WARNING, "Bad GFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
+             gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
 }
 
 void viridian_unmap_guest_page(struct viridian_page *vp)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a1e3a19c0a..f1898c63c5 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -681,7 +681,7 @@ static int vmx_restore_cr0_cr3(
     {
         if ( cr0 & X86_CR0_PG )
         {
-            page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT,
+            page = get_page_from_gfn(v->domain, gaddr_to_gfn(cr3),
                                      NULL, P2M_ALLOC);
             if ( !page )
             {
@@ -1321,7 +1321,7 @@ static void vmx_load_pdptrs(struct vcpu *v)
     if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
         goto crash;
 
-    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain, gaddr_to_gfn(cr3), &p2mt, P2M_UNSHARE);
     if ( !page )
     {
         /* Ideally you don't want to crash but rather go into a wait 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 84b47ef277..eee4af3206 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -718,11 +718,11 @@ static void nvmx_update_apic_access_address(struct vcpu *v)
     if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
     {
         p2m_type_t p2mt;
-        unsigned long apic_gpfn;
+        gfn_t apic_gfn;
         struct page_info *apic_pg;
 
-        apic_gpfn = get_vvmcs(v, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
-        apic_pg = get_page_from_gfn(v->domain, apic_gpfn, &p2mt, P2M_ALLOC);
+        apic_gfn = gaddr_to_gfn(get_vvmcs(v, APIC_ACCESS_ADDR));
+        apic_pg = get_page_from_gfn(v->domain, apic_gfn, &p2mt, P2M_ALLOC);
         ASSERT(apic_pg && !p2m_is_paging(p2mt));
         __vmwrite(APIC_ACCESS_ADDR, page_to_maddr(apic_pg));
         put_page(apic_pg);
@@ -739,11 +739,11 @@ static void nvmx_update_virtual_apic_address(struct vcpu *v)
     if ( ctrl & CPU_BASED_TPR_SHADOW )
     {
         p2m_type_t p2mt;
-        unsigned long vapic_gpfn;
+        gfn_t vapic_gfn;
         struct page_info *vapic_pg;
 
-        vapic_gpfn = get_vvmcs(v, VIRTUAL_APIC_PAGE_ADDR) >> PAGE_SHIFT;
-        vapic_pg = get_page_from_gfn(v->domain, vapic_gpfn, &p2mt, P2M_ALLOC);
+        vapic_gfn = gaddr_to_gfn(get_vvmcs(v, VIRTUAL_APIC_PAGE_ADDR));
+        vapic_pg = get_page_from_gfn(v->domain, vapic_gfn, &p2mt, P2M_ALLOC);
         ASSERT(vapic_pg && !p2m_is_paging(p2mt));
         __vmwrite(VIRTUAL_APIC_PAGE_ADDR, page_to_maddr(vapic_pg));
         put_page(vapic_pg);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2feb7a5993..b9a656643b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2150,7 +2150,7 @@ static int mod_l1_entry(l1_pgentry_t *pl1e, l1_pgentry_t nl1e,
             p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
                             P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;
 
-            page = get_page_from_gfn(pg_dom, l1e_get_pfn(nl1e), &p2mt, q);
+            page = get_page_from_gfn(pg_dom, _gfn(l1e_get_pfn(nl1e)), &p2mt, q);
 
             if ( p2m_is_paged(p2mt) )
             {
@@ -3433,7 +3433,8 @@ long do_mmuext_op(
             if ( paging_mode_refcounts(pg_owner) )
                 break;
 
-            page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
+            page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), NULL,
+                                     P2M_ALLOC);
             if ( unlikely(!page) )
             {
                 rc = -EINVAL;
@@ -3499,7 +3500,8 @@ long do_mmuext_op(
             if ( paging_mode_refcounts(pg_owner) )
                 break;
 
-            page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
+            page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), NULL,
+                                     P2M_ALLOC);
             if ( unlikely(!page) )
             {
                 gdprintk(XENLOG_WARNING,
@@ -3724,7 +3726,8 @@ long do_mmuext_op(
         }
 
         case MMUEXT_CLEAR_PAGE:
-            page = get_page_from_gfn(pg_owner, op.arg1.mfn, &p2mt, P2M_ALLOC);
+            page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), &p2mt,
+                                     P2M_ALLOC);
             if ( unlikely(p2mt != p2m_ram_rw) && page )
             {
                 put_page(page);
@@ -3752,7 +3755,7 @@ long do_mmuext_op(
         {
             struct page_info *src_page, *dst_page;
 
-            src_page = get_page_from_gfn(pg_owner, op.arg2.src_mfn, &p2mt,
+            src_page = get_page_from_gfn(pg_owner, _gfn(op.arg2.src_mfn), &p2mt,
                                          P2M_ALLOC);
             if ( unlikely(p2mt != p2m_ram_rw) && src_page )
             {
@@ -3768,7 +3771,7 @@ long do_mmuext_op(
                 break;
             }
 
-            dst_page = get_page_from_gfn(pg_owner, op.arg1.mfn, &p2mt,
+            dst_page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), &p2mt,
                                          P2M_ALLOC);
             if ( unlikely(p2mt != p2m_ram_rw) && dst_page )
             {
@@ -3856,7 +3859,8 @@ long do_mmu_update(
 {
     struct mmu_update req;
     void *va = NULL;
-    unsigned long gpfn, gmfn;
+    unsigned long gpfn;
+    gfn_t gfn;
     struct page_info *page;
     unsigned int cmd, i = 0, done = 0, pt_dom;
     struct vcpu *curr = current, *v = curr;
@@ -3969,8 +3973,8 @@ long do_mmu_update(
             rc = -EINVAL;
 
             req.ptr -= cmd;
-            gmfn = req.ptr >> PAGE_SHIFT;
-            page = get_page_from_gfn(pt_owner, gmfn, &p2mt, P2M_ALLOC);
+            gfn = gaddr_to_gfn(req.ptr);
+            page = get_page_from_gfn(pt_owner, gfn, &p2mt, P2M_ALLOC);
 
             if ( unlikely(!page) || p2mt != p2m_ram_rw )
             {
@@ -3978,7 +3982,7 @@ long do_mmu_update(
                     put_page(page);
                 if ( p2m_is_paged(p2mt) )
                 {
-                    p2m_mem_paging_populate(pt_owner, gmfn);
+                    p2m_mem_paging_populate(pt_owner, gfn_x(gfn));
                     rc = -ENOENT;
                 }
                 else
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 587c062481..1ce012600c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2967,7 +2967,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * Take a refcnt on the mfn. NB: following supported for foreign mapping:
      *     ram_rw | ram_logdirty | ram_ro | paging_out.
      */
-    page = get_page_from_gfn(fdom, fgfn, &p2mt, P2M_ALLOC);
+    page = get_page_from_gfn(fdom, _gfn(fgfn), &p2mt, P2M_ALLOC);
     if ( !page ||
          !p2m_is_ram(p2mt) || p2m_is_shared(p2mt) || p2m_is_hole(p2mt) )
     {
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 1e6024c71f..bb11f28531 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -398,15 +398,15 @@ void shadow_continue_emulation(struct sh_emulate_ctxt *sh_ctxt,
 static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
                                 struct sh_emulate_ctxt *sh_ctxt)
 {
-    unsigned long gfn;
+    gfn_t gfn;
     struct page_info *page;
     mfn_t mfn;
     p2m_type_t p2mt;
     uint32_t pfec = PFEC_page_present | PFEC_write_access;
 
     /* Translate the VA to a GFN. */
-    gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
-    if ( gfn == gfn_x(INVALID_GFN) )
+    gfn = _gfn(paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec));
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         x86_emul_pagefault(pfec, vaddr, &sh_ctxt->ctxt);
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 3a3c15890b..4f3f438614 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -229,7 +229,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         ret = -EINVAL;
-        page = get_page_from_gfn(current->domain, info.gmfn, NULL, P2M_ALLOC);
+        page = get_page_from_gfn(current->domain, _gfn(info.gmfn),
+                                 NULL, P2M_ALLOC);
         if ( !page )
             break;
         if ( !get_page_type(page, PGT_writable_page) )
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index f22beb1f3c..899ed45c6a 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -112,7 +112,7 @@ long pv_set_gdt(struct vcpu *v, unsigned long *frames, unsigned int entries)
     {
         struct page_info *page;
 
-        page = get_page_from_gfn(d, frames[i], NULL, P2M_ALLOC);
+        page = get_page_from_gfn(d, _gfn(frames[i]), NULL, P2M_ALLOC);
         if ( !page )
             goto fail;
         if ( !get_page_type(page, PGT_seg_desc_page) )
@@ -219,7 +219,7 @@ long do_update_descriptor(uint64_t gaddr, seg_desc_t d)
     if ( !IS_ALIGNED(gaddr, sizeof(d)) || !check_descriptor(currd, &d) )
         return -EINVAL;
 
-    page = get_page_from_gfn(currd, gfn_x(gfn), NULL, P2M_ALLOC);
+    page = get_page_from_gfn(currd, gfn, NULL, P2M_ALLOC);
     if ( !page )
         return -EINVAL;
 
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index e24b84f46a..552b669623 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -756,12 +756,12 @@ static int write_cr(unsigned int reg, unsigned long val,
     case 3: /* Write CR3 */
     {
         struct domain *currd = curr->domain;
-        unsigned long gfn;
+        gfn_t gfn;
         struct page_info *page;
         int rc;
 
-        gfn = !is_pv_32bit_domain(currd)
-              ? xen_cr3_to_pfn(val) : compat_cr3_to_pfn(val);
+        gfn = _gfn(!is_pv_32bit_domain(currd)
+                   ? xen_cr3_to_pfn(val) : compat_cr3_to_pfn(val));
         page = get_page_from_gfn(currd, gfn, NULL, P2M_ALLOC);
         if ( !page )
             break;
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 2b0dadc8da..00df5edd6f 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -110,7 +110,7 @@ bool pv_map_ldt_shadow_page(unsigned int offset)
     if ( unlikely(!(l1e_get_flags(gl1e) & _PAGE_PRESENT)) )
         return false;
 
-    page = get_page_from_gfn(currd, l1e_get_pfn(gl1e), NULL, P2M_ALLOC);
+    page = get_page_from_gfn(currd, _gfn(l1e_get_pfn(gl1e)), NULL, P2M_ALLOC);
     if ( unlikely(!page) )
         return false;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 4f524dc71e..e5de86845f 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -826,7 +826,7 @@ int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
     case 0: /* Write hypercall page */
     {
         void *hypercall_page;
-        unsigned long gmfn = val >> PAGE_SHIFT;
+        gfn_t gfn = gaddr_to_gfn(val);
         unsigned int page_index = val & (PAGE_SIZE - 1);
         struct page_info *page;
         p2m_type_t t;
@@ -839,7 +839,7 @@ int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
             return X86EMUL_EXCEPTION;
         }
 
-        page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
+        page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
 
         if ( !page || !get_page_type(page, PGT_writable_page) )
         {
@@ -848,13 +848,14 @@ int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
 
             if ( p2m_is_paging(t) )
             {
-                p2m_mem_paging_populate(d, gmfn);
+                p2m_mem_paging_populate(d, gfn_x(gfn));
                 return X86EMUL_RETRY;
             }
 
             gdprintk(XENLOG_WARNING,
-                     "Bad GMFN %lx (MFN %#"PRI_mfn") to MSR %08x\n",
-                     gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);
+                     "Bad GFN %"PRI_gfn" (MFN %"PRI_mfn") to MSR %08x\n",
+                     gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN),
+                     base);
             return X86EMUL_EXCEPTION;
         }
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index b4eb476a9c..8435528383 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1237,7 +1237,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     if ( (v != current) && !(v->pause_flags & VPF_down) )
         return -EINVAL;
 
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, _gfn(gfn), NULL, P2M_ALLOC);
     if ( !page )
         return -EINVAL;
 
diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index 230f440f14..073981ab43 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -361,7 +361,7 @@ static const struct evtchn_port_ops evtchn_port_ops_fifo =
     .print_state   = evtchn_fifo_print_state,
 };
 
-static int map_guest_page(struct domain *d, uint64_t gfn, void **virt)
+static int map_guest_page(struct domain *d, gfn_t gfn, void **virt)
 {
     struct page_info *p;
 
@@ -422,7 +422,7 @@ static int setup_control_block(struct vcpu *v)
     return 0;
 }
 
-static int map_control_block(struct vcpu *v, uint64_t gfn, uint32_t offset)
+static int map_control_block(struct vcpu *v, gfn_t gfn, uint32_t offset)
 {
     void *virt;
     unsigned int i;
@@ -508,7 +508,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
 {
     struct domain *d = current->domain;
     uint32_t vcpu_id;
-    uint64_t gfn;
+    gfn_t gfn;
     uint32_t offset;
     struct vcpu *v;
     int rc;
@@ -516,7 +516,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
     init_control->link_bits = EVTCHN_FIFO_LINK_BITS;
 
     vcpu_id = init_control->vcpu;
-    gfn     = init_control->control_gfn;
+    gfn     = _gfn(init_control->control_gfn);
     offset  = init_control->offset;
 
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
@@ -578,7 +578,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
     return rc;
 }
 
-static int add_page_to_event_array(struct domain *d, unsigned long gfn)
+static int add_page_to_event_array(struct domain *d, gfn_t gfn)
 {
     void *virt;
     unsigned int slot;
@@ -628,7 +628,7 @@ int evtchn_fifo_expand_array(const struct evtchn_expand_array *expand_array)
         return -EOPNOTSUPP;
 
     spin_lock(&d->event_lock);
-    rc = add_page_to_event_array(d, expand_array->array_gfn);
+    rc = add_page_to_event_array(d, _gfn(expand_array->array_gfn));
     spin_unlock(&d->event_lock);
 
     return rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 6e4b85674d..7e3c3bb7af 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1388,7 +1388,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return rc;
         }
 
-        page = get_page_from_gfn(d, xrfp.gpfn, NULL, P2M_ALLOC);
+        page = get_page_from_gfn(d, _gfn(xrfp.gpfn), NULL, P2M_ALLOC);
         if ( page )
         {
             rc = guest_physmap_remove_page(d, _gfn(xrfp.gpfn),
@@ -1659,7 +1659,7 @@ int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
     p2m_type_t p2mt;
     struct page_info *page;
 
-    page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, q);
+    page = get_page_from_gfn(d, gfn, &p2mt, q);
 
 #ifdef CONFIG_HAS_MEM_PAGING
     if ( p2m_is_paging(p2mt) )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8183..f1d01ceb3f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -304,7 +304,7 @@ struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
                                         p2m_type_t *t);
 
 static inline struct page_info *get_page_from_gfn(
-    struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
+    struct domain *d, gfn_t gfn, p2m_type_t *t, p2m_query_t q)
 {
     mfn_t mfn;
     p2m_type_t _t;
@@ -315,7 +315,7 @@ static inline struct page_info *get_page_from_gfn(
      * not auto-translated.
      */
     if ( likely(d != dom_xen) )
-        return p2m_get_page_from_gfn(d, _gfn(gfn), t);
+        return p2m_get_page_from_gfn(d, gfn, t);
 
     if ( !t )
         t = &_t;
@@ -326,7 +326,7 @@ static inline struct page_info *get_page_from_gfn(
      * DOMID_XEN sees 1-1 RAM. The p2m_type is based on the type of the
      * page.
      */
-    mfn = _mfn(gfn);
+    mfn = _mfn(gfn_x(gfn));
     page = mfn_to_page(mfn);
 
     if ( !mfn_valid(mfn) || !get_page(page, d) )
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 39dae242b0..da842487bb 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -487,18 +487,22 @@ struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
                                         p2m_query_t q);
 
 static inline struct page_info *get_page_from_gfn(
-    struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
+    struct domain *d, gfn_t gfn, p2m_type_t *t, p2m_query_t q)
 {
     struct page_info *page;
+    mfn_t mfn;
 
     if ( paging_mode_translate(d) )
-        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t, NULL, q);
+        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), gfn, t, NULL, q);
 
     /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
     if ( t )
         *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct;
-    page = mfn_to_page(_mfn(gfn));
-    return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
+
+    mfn = _mfn(gfn_x(gfn));
+
+    page = mfn_to_page(mfn);
+    return mfn_valid(mfn) && get_page(page, d) ? page : NULL;
 }
 
 /* General conversion function from mfn to gfn */
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
  2020-03-22 16:14 ` [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn julien
@ 2020-03-23  8:37   ` Paul Durrant
  2020-03-23 10:26     ` Julien Grall
  2020-03-27 13:50   ` Jan Beulich
  1 sibling, 1 reply; 61+ messages in thread
From: Paul Durrant @ 2020-03-23  8:37 UTC (permalink / raw)
  To: julien, xen-devel
  Cc: 'Kevin Tian', 'Stefano Stabellini',
	'Jun Nakajima', 'Wei Liu',
	'Andrew Cooper', 'Ian Jackson',
	'George Dunlap', 'Tim Deegan',
	'Julien Grall', 'Jan Beulich',
	'Volodymyr Babchuk', 'Roger Pau Monné'

> -----Original Message-----
> From: julien@xen.org <julien@xen.org>
> Sent: 22 March 2020 16:14
> To: xen-devel@lists.xenproject.org
> Cc: julien@xen.org; Julien Grall <julien.grall@arm.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; Paul Durrant
> <paul@xen.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Tim Deegan
> <tim@xen.org>
> Subject: [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
> 
> From: Julien Grall <julien.grall@arm.com>
> 
> No functional change intended.
> 
> Only reasonable clean-ups are done in this patch. The rest will use _gfn
> for the time being.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Definitely an improvement so...

Reviewed-by: Paul Durrant <paul@xen.org>

But a couple of things I noticed...

[snip]
> diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
> index 5d5a746a25..3c29ff86be 100644
> --- a/xen/arch/x86/hvm/domain.c
> +++ b/xen/arch/x86/hvm/domain.c
> @@ -296,8 +296,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>      if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
>      {
>          /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
> -        struct page_info *page = get_page_from_gfn(v->domain,
> -                                 v->arch.hvm.guest_cr[3] >> PAGE_SHIFT,
> +        struct page_info *page;
> +
> +        page = get_page_from_gfn(v->domain,
> +                                 gaddr_to_gfn(v->arch.hvm.guest_cr[3]),

Should this be cr3_to_gfn?

>                                   NULL, P2M_ALLOC);
>          if ( !page )
>          {
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index a3d115b650..9f720e7aa1 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2216,7 +2216,7 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
>  {
>      struct vcpu *v = current;
>      struct domain *d = v->domain;
> -    unsigned long gfn, old_value = v->arch.hvm.guest_cr[0];
> +    unsigned long old_value = v->arch.hvm.guest_cr[0];
>      struct page_info *page;
> 
>      HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR0 value = %lx", value);
> @@ -2271,7 +2271,8 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
>          if ( !paging_mode_hap(d) )
>          {
>              /* The guest CR3 must be pointing to the guest physical. */
> -            gfn = v->arch.hvm.guest_cr[3] >> PAGE_SHIFT;
> +            gfn_t gfn = gaddr_to_gfn(v->arch.hvm.guest_cr[3]);
> +

Same here.

>              page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
>              if ( !page )
>              {
> @@ -2363,7 +2364,7 @@ int hvm_set_cr3(unsigned long value, bool may_defer)
>      {
>          /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
>          HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR3 value = %lx", value);
> -        page = get_page_from_gfn(v->domain, value >> PAGE_SHIFT,
> +        page = get_page_from_gfn(v->domain, cr3_to_gfn(value),
>                                   NULL, P2M_ALLOC);
>          if ( !page )
>              goto bad_cr3;
> @@ -3191,7 +3192,7 @@ enum hvm_translation_result hvm_translate_get_page(
>           && hvm_mmio_internal(gfn_to_gaddr(gfn)) )
>          return HVMTRANS_bad_gfn_to_mfn;
> 
> -    page = get_page_from_gfn(v->domain, gfn_x(gfn), &p2mt, P2M_UNSHARE);
> +    page = get_page_from_gfn(v->domain, gfn, &p2mt, P2M_UNSHARE);
> 
>      if ( !page )
>          return HVMTRANS_bad_gfn_to_mfn;
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 32d8d847f2..a9abd6d3f1 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -299,7 +299,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
>      {
>          if ( c->cr0 & X86_CR0_PG )
>          {
> -            page = get_page_from_gfn(v->domain, c->cr3 >> PAGE_SHIFT,
> +            page = get_page_from_gfn(v->domain, cr3_to_gfn(c->cr3),
>                                       NULL, P2M_ALLOC);
>              if ( !page )
>              {
> @@ -2230,9 +2230,9 @@ nsvm_get_nvmcb_page(struct vcpu *v, uint64_t vmcbaddr)
>          return NULL;
> 
>      /* Need to translate L1-GPA to MPA */
> -    page = get_page_from_gfn(v->domain,
> -                            nv->nv_vvmcxaddr >> PAGE_SHIFT,
> -                            &p2mt, P2M_ALLOC | P2M_UNSHARE);
> +    page = get_page_from_gfn(v->domain,
> +                             gaddr_to_gfn(nv->nv_vvmcxaddr),
> +                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
>      if ( !page )
>          return NULL;
> 
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index 977c1bc54f..3d75a0f133 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -242,16 +242,16 @@ static void dump_hypercall(const struct domain *d)
> 
>  static void enable_hypercall_page(struct domain *d)
>  {
> -    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.pfn;
> -    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
> +    gfn_t gfn = _gfn(d->arch.hvm.viridian->hypercall_gpa.pfn);
> +    struct page_info *page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
>      uint8_t *p;
> 
>      if ( !page || !get_page_type(page, PGT_writable_page) )
>      {
>          if ( page )
>              put_page(page);
> -        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> -                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
> +        gdprintk(XENLOG_WARNING, "Bad GFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> +                 gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
>          return;
>      }
> 
> @@ -719,13 +719,13 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
> 
>  void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
>  {
> -    unsigned long gmfn = vp->msr.pfn;
> +    gfn_t gfn = _gfn(vp->msr.pfn);
>      struct page_info *page;
> 
>      if ( vp->ptr )
>          return;
> 
> -    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
> +    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
>      if ( !page )
>          goto fail;
> 
> @@ -746,8 +746,8 @@ void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
>      return;
> 
>   fail:
> -    gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> -             gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
> +    gdprintk(XENLOG_WARNING, "Bad GFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
> +             gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
>  }
> 
>  void viridian_unmap_guest_page(struct viridian_page *vp)
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index a1e3a19c0a..f1898c63c5 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -681,7 +681,7 @@ static int vmx_restore_cr0_cr3(
>      {
>          if ( cr0 & X86_CR0_PG )
>          {
> -            page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT,
> +            page = get_page_from_gfn(v->domain, gaddr_to_gfn(cr3),

And here.

>                                       NULL, P2M_ALLOC);
>              if ( !page )
>              {
> @@ -1321,7 +1321,7 @@ static void vmx_load_pdptrs(struct vcpu *v)
>      if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
>          goto crash;
> 
> -    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
> +    page = get_page_from_gfn(v->domain, gaddr_to_gfn(cr3), &p2mt, P2M_UNSHARE);

And here.

  Paul



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
  2020-03-23  8:37   ` Paul Durrant
@ 2020-03-23 10:26     ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-03-23 10:26 UTC (permalink / raw)
  To: paul, xen-devel
  Cc: 'Kevin Tian', 'Stefano Stabellini',
	'Jun Nakajima', 'Wei Liu',
	'Andrew Cooper', 'Ian Jackson',
	'George Dunlap', 'Tim Deegan',
	'Julien Grall', 'Jan Beulich',
	'Volodymyr Babchuk', 'Roger Pau Monné'

Hi Paul,

On 23/03/2020 08:37, Paul Durrant wrote:
>> -----Original Message-----
>> From: julien@xen.org <julien@xen.org>
>> Sent: 22 March 2020 16:14
>> To: xen-devel@lists.xenproject.org
>> Cc: julien@xen.org; Julien Grall <julien.grall@arm.com>; Stefano Stabellini <sstabellini@kernel.org>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George
>> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
>> <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; Paul Durrant
>> <paul@xen.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Tim Deegan
>> <tim@xen.org>
>> Subject: [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
>>
>> From: Julien Grall <julien.grall@arm.com>
>>
>> No functional change intended.
>>
>> Only reasonable clean-ups are done in this patch. The rest will use _gfn
>> for the time being.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> Definitely an improvement so...
> 
> Reviewed-by: Paul Durrant <paul@xen.org>
> 
> But a couple of things I noticed...
> 
> [snip]
>> diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
>> index 5d5a746a25..3c29ff86be 100644
>> --- a/xen/arch/x86/hvm/domain.c
>> +++ b/xen/arch/x86/hvm/domain.c
>> @@ -296,8 +296,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>>       if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
>>       {
>>           /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
>> -        struct page_info *page = get_page_from_gfn(v->domain,
>> -                                 v->arch.hvm.guest_cr[3] >> PAGE_SHIFT,
>> +        struct page_info *page;
>> +
>> +        page = get_page_from_gfn(v->domain,
>> +                                 gaddr_to_gfn(v->arch.hvm.guest_cr[3]),
> 
> Should this be cr3_to_gfn?

Definitely yes. I thought I had spotted all the uses when introducing
the new helper, but it looks like I missed some. I will update the
patch in the next version to use cr3_to_gfn() everywhere you suggested.

Thanks.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN
  2020-03-22 16:14 ` [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN julien
@ 2020-03-23 12:11   ` Hongyan Xia
  2020-03-23 12:26     ` Julien Grall
  2020-03-27 13:15   ` Jan Beulich
  1 sibling, 1 reply; 61+ messages in thread
From: Hongyan Xia @ 2020-03-23 12:11 UTC (permalink / raw)
  To: julien, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Volodymyr Babchuk, Roger Pau Monné

On Sun, 2020-03-22 at 16:14 +0000, julien@xen.org wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> The first parameter of {s,g}et_gpfn_from_mfn() is an MFN, so it can
> be
> switched to use the typesafe.
> 
> At the same time, replace gpfn with pfn in the helpers as they all
> deal
> with PFN and also turn the macros to static inline.
> 
> Note that the return of the getter and the 2nd parameter of the
> setter
> have not been converted to use typesafe PFN because it was requiring
> more changes than expected.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> ---
>     This was originally sent as part of "xen/arm: Properly disable
> M2P
>     on Arm" [1].
> 
>     Changes since the original version:
>         - mfn_to_gmfn() is still present for now so update it
>         - Remove stray +
>         - Avoid churn in set_pfn_from_mfn() by inverting mfn and mfn_
>         - Remove tags
>         - Fix build in mem_sharing
> 
>     [1] <20190603160350.29806-1-julien.grall@arm.com>
> ---
>  xen/arch/x86/cpu/mcheck/mcaction.c |  2 +-
>  xen/arch/x86/mm.c                  | 14 +++----
>  xen/arch/x86/mm/mem_sharing.c      | 20 ++++-----
>  xen/arch/x86/mm/p2m-pod.c          |  4 +-
>  xen/arch/x86/mm/p2m-pt.c           | 35 ++++++++--------
>  xen/arch/x86/mm/p2m.c              | 66 +++++++++++++++-------------
> --
>  xen/arch/x86/mm/paging.c           |  4 +-
>  xen/arch/x86/pv/dom0_build.c       |  6 +--
>  xen/arch/x86/x86_64/traps.c        |  8 ++--
>  xen/common/page_alloc.c            |  2 +-
>  xen/include/asm-arm/mm.h           |  2 +-
>  xen/include/asm-x86/grant_table.h  |  2 +-
>  xen/include/asm-x86/mm.h           | 12 ++++--
>  xen/include/asm-x86/p2m.h          |  2 +-
>  14 files changed, 93 insertions(+), 86 deletions(-)
> 
> 

[...]

> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index abf4cc23e4..11614f9107 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -319,7 +319,7 @@ struct page_info *get_page_from_gva(struct vcpu
> *v, vaddr_t va,
>  #define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
>  
>  /* Xen always owns P2M on ARM */
> -#define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn);
> } while (0)
> +static inline void set_pfn_from_mfn(mfn_t mfn, unsigned long pfn) {}
>  #define mfn_to_gmfn(_d, mfn)  (mfn) 

I do not have a setup to compile and test code for Arm, but wouldn't
the compiler complain about unused arguments here? The macro version
explicitly silenced compiler complaints.
 
> diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-
> x86/grant_table.h
> index 5871238f6d..b6a09c4c6c 100644
> --- a/xen/include/asm-x86/grant_table.h
> +++ b/xen/include/asm-x86/grant_table.h
> @@ -41,7 +41,7 @@ static inline int
> replace_grant_host_mapping(uint64_t addr, mfn_t frame,
>  #define gnttab_get_frame_gfn(gt, st, idx)
> ({                             \
>      mfn_t mfn_ = (st) ? gnttab_status_mfn(gt,
> idx)                       \
>                        : gnttab_shared_mfn(gt,
> idx);                      \
> -    unsigned long gpfn_ =
> get_gpfn_from_mfn(mfn_x(mfn_));                \
> +    unsigned long gpfn_ =
> get_pfn_from_mfn(mfn_);                        \
>      VALID_M2P(gpfn_) ? _gfn(gpfn_) :
> INVALID_GFN;                        \
>  })
>  
> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> index 53f2ed7c7d..2a4f42e78f 100644
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -500,9 +500,10 @@ extern paddr_t mem_hotplug;
>   */
>  extern bool machine_to_phys_mapping_valid;
>  
> -static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned
> long pfn)
> +static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn)
>  {
> -    const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn)));
> +    const unsigned long mfn = mfn_x(mfn_);
> +    const struct domain *d = page_get_owner(mfn_to_page(mfn_));
>      unsigned long entry = (d && (d == dom_cow)) ? SHARED_M2P_ENTRY :
> pfn;
>  
>      if ( !machine_to_phys_mapping_valid )
> @@ -515,11 +516,14 @@ static inline void set_gpfn_from_mfn(unsigned
> long mfn, unsigned long pfn)
>  
>  extern struct rangeset *mmio_ro_ranges;
>  
> -#define get_gpfn_from_mfn(mfn)      (machine_to_phys_mapping[(mfn)])
> +static inline unsigned long get_pfn_from_mfn(mfn_t mfn)
> +{
> +    return machine_to_phys_mapping[mfn_x(mfn)];
> +}

Any specific reason this (and some other macros) is turned into static
inline? I don't have a problem with them being inline functions but
just wondering if there is a reason to do so.
 
>  #define mfn_to_gmfn(_d, mfn)                            \
>      ( (paging_mode_translate(_d))                       \
> -      ? get_gpfn_from_mfn(mfn)                          \
> +      ? get_pfn_from_mfn(_mfn(mfn))                     \
>        : (mfn) )
>  
>  #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) |
> ((unsigned)(pfn) >> 20))
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index a2c6049834..39dae242b0 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -505,7 +505,7 @@ static inline struct page_info
> *get_page_from_gfn(
>  static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>  {
>      if ( paging_mode_translate(d) )
> -        return _gfn(get_gpfn_from_mfn(mfn_x(mfn)));
> +        return _gfn(get_pfn_from_mfn(mfn));
>      else
>          return _gfn(mfn_x(mfn));
>  }

Apart from the two comments above, looks good to me.

Reviewed-by: Hongyan Xia <hongyxia@amazon.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN
  2020-03-23 12:11   ` Hongyan Xia
@ 2020-03-23 12:26     ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-03-23 12:26 UTC (permalink / raw)
  To: Hongyan Xia, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Julien Grall, Tamas K Lengyel, Jan Beulich,
	Volodymyr Babchuk, Roger Pau Monné

Hi,

On 23/03/2020 12:11, Hongyan Xia wrote:
> On Sun, 2020-03-22 at 16:14 +0000, julien@xen.org wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> The first parameter of {s,g}et_gpfn_from_mfn() is an MFN, so it can
>> be
>> switched to use the typesafe.
>>
>> At the same time, replace gpfn with pfn in the helpers as they all
>> deal
>> with PFN and also turn the macros to static inline.
>>
>> Note that the return of the getter and the 2nd parameter of the
>> setter
>> have not been converted to use typesafe PFN because it was requiring
>> more changes than expected.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>
>> ---
>>      This was originally sent as part of "xen/arm: Properly disable
>> M2P
>>      on Arm" [1].
>>
>>      Changes since the original version:
>>          - mfn_to_gmfn() is still present for now so update it
>>          - Remove stray +
>>          - Avoid churn in set_pfn_from_mfn() by inverting mfn and mfn_
>>          - Remove tags
>>          - Fix build in mem_sharing
>>
>>      [1] <20190603160350.29806-1-julien.grall@arm.com>
>> ---
>>   xen/arch/x86/cpu/mcheck/mcaction.c |  2 +-
>>   xen/arch/x86/mm.c                  | 14 +++----
>>   xen/arch/x86/mm/mem_sharing.c      | 20 ++++-----
>>   xen/arch/x86/mm/p2m-pod.c          |  4 +-
>>   xen/arch/x86/mm/p2m-pt.c           | 35 ++++++++--------
>>   xen/arch/x86/mm/p2m.c              | 66 +++++++++++++++-------------
>> --
>>   xen/arch/x86/mm/paging.c           |  4 +-
>>   xen/arch/x86/pv/dom0_build.c       |  6 +--
>>   xen/arch/x86/x86_64/traps.c        |  8 ++--
>>   xen/common/page_alloc.c            |  2 +-
>>   xen/include/asm-arm/mm.h           |  2 +-
>>   xen/include/asm-x86/grant_table.h  |  2 +-
>>   xen/include/asm-x86/mm.h           | 12 ++++--
>>   xen/include/asm-x86/p2m.h          |  2 +-
>>   14 files changed, 93 insertions(+), 86 deletions(-)
>>
>>
> 
> [...]
> 
>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>> index abf4cc23e4..11614f9107 100644
>> --- a/xen/include/asm-arm/mm.h
>> +++ b/xen/include/asm-arm/mm.h
>> @@ -319,7 +319,7 @@ struct page_info *get_page_from_gva(struct vcpu
>> *v, vaddr_t va,
>>   #define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
>>   
>>   /* Xen always owns P2M on ARM */
>> -#define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn);
>> } while (0)
>> +static inline void set_pfn_from_mfn(mfn_t mfn, unsigned long pfn) {}
>>   #define mfn_to_gmfn(_d, mfn)  (mfn)
> 
> I do not have a setup to compile and test code for Arm, but wouldn't
> the compiler complain about unused arguments here? The macro version
> explicitly silenced compiler complaints.

The macro version does not use (void)(arg) to silence unused parameters.
It is there to evaluate (mfn) while ignoring the result. A compiler
would warn without the (void) cast because we build Xen with -Wall,
which includes -Wunused-value.

Xen is not built with -Wunused-parameter, so there is no concern about
unused parameters. If we ever decided to turn on -Wunused-parameter (or
-Wextra), then we would have quite a bit of code to modify (such as
callbacks not using all of their parameters) to make it compile.
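
To make this concrete, a minimal, self-contained sketch (stand-in mfn_t
and made-up helper names, not the real Xen definitions):

typedef struct { unsigned long m; } mfn_t; /* stand-in for the typesafe MFN */

/* Macro form: the arguments get evaluated and the results discarded.
 * Without the (void) casts, -Wall (which enables -Wunused-value) would
 * warn that the statement computes values which are never used. */
#define set_pfn_from_mfn_macro(mfn, pfn) \
    do { (void)(mfn), (void)(pfn); } while ( 0 )

/* Inline form: the parameters are simply never read. Only
 * -Wunused-parameter (part of -Wextra, not of -Wall) would flag that,
 * so a -Wall build stays silent. */
static inline void set_pfn_from_mfn_inline(mfn_t mfn, unsigned long pfn) {}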

>   
>> diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-
>> x86/grant_table.h
>> index 5871238f6d..b6a09c4c6c 100644
>> --- a/xen/include/asm-x86/grant_table.h
>> +++ b/xen/include/asm-x86/grant_table.h
>> @@ -41,7 +41,7 @@ static inline int
>> replace_grant_host_mapping(uint64_t addr, mfn_t frame,
>>   #define gnttab_get_frame_gfn(gt, st, idx)
>> ({                             \
>>       mfn_t mfn_ = (st) ? gnttab_status_mfn(gt,
>> idx)                       \
>>                         : gnttab_shared_mfn(gt,
>> idx);                      \
>> -    unsigned long gpfn_ =
>> get_gpfn_from_mfn(mfn_x(mfn_));                \
>> +    unsigned long gpfn_ =
>> get_pfn_from_mfn(mfn_);                        \
>>       VALID_M2P(gpfn_) ? _gfn(gpfn_) :
>> INVALID_GFN;                        \
>>   })
>>   
>> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
>> index 53f2ed7c7d..2a4f42e78f 100644
>> --- a/xen/include/asm-x86/mm.h
>> +++ b/xen/include/asm-x86/mm.h
>> @@ -500,9 +500,10 @@ extern paddr_t mem_hotplug;
>>    */
>>   extern bool machine_to_phys_mapping_valid;
>>   
>> -static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned
>> long pfn)
>> +static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn)
>>   {
>> -    const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn)));
>> +    const unsigned long mfn = mfn_x(mfn_);
>> +    const struct domain *d = page_get_owner(mfn_to_page(mfn_));
>>       unsigned long entry = (d && (d == dom_cow)) ? SHARED_M2P_ENTRY :
>> pfn;
>>   
>>       if ( !machine_to_phys_mapping_valid )
>> @@ -515,11 +516,14 @@ static inline void set_gpfn_from_mfn(unsigned
>> long mfn, unsigned long pfn)
>>   
>>   extern struct rangeset *mmio_ro_ranges;
>>   
>> -#define get_gpfn_from_mfn(mfn)      (machine_to_phys_mapping[(mfn)])
>> +static inline unsigned long get_pfn_from_mfn(mfn_t mfn)
>> +{
>> +    return machine_to_phys_mapping[mfn_x(mfn)];
>> +}
> 
> Any specific reason this (and some other macros) is turned into static
> inline? I don't have a problem with them being inline functions but
> just wondering if there is a reason to do so.

A static inline provides better type checking than a macro, so we tend
to switch to static inline whenever the header inter-dependency madness
does not get in the way.
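
As a quick illustration (again with stand-in types and a stand-in M2P
array rather than the real Xen definitions):

typedef struct { unsigned long m; } mfn_t;          /* stand-in typesafe MFN */
static unsigned long machine_to_phys_mapping[1024]; /* stand-in M2P table */

/* Macro: any integer-looking argument (a gfn, a pfn, ...) is silently
 * accepted as an index. */
#define get_pfn_from_mfn_macro(mfn) (machine_to_phys_mapping[(mfn)])

/* Static inline: passing anything other than an mfn_t fails to compile,
 * so a frame number of the wrong kind cannot slip through. */
static inline unsigned long get_pfn_from_mfn(mfn_t mfn)
{
    return machine_to_phys_mapping[mfn.m];
}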

>   
>>   #define mfn_to_gmfn(_d, mfn)                            \
>>       ( (paging_mode_translate(_d))                       \
>> -      ? get_gpfn_from_mfn(mfn)                          \
>> +      ? get_pfn_from_mfn(_mfn(mfn))                     \
>>         : (mfn) )
>>   
>>   #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) |
>> ((unsigned)(pfn) >> 20))
>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>> index a2c6049834..39dae242b0 100644
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -505,7 +505,7 @@ static inline struct page_info
>> *get_page_from_gfn(
>>   static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>>   {
>>       if ( paging_mode_translate(d) )
>> -        return _gfn(get_gpfn_from_mfn(mfn_x(mfn)));
>> +        return _gfn(get_pfn_from_mfn(mfn));
>>       else
>>           return _gfn(mfn_x(mfn));
>>   }
> 
> Apart from the two comments above, looks good to me.
> 
> Reviewed-by: Hongyan Xia <hongyxia@amazon.com>

Thank you!

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN
  2020-03-22 16:14 ` [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN julien
@ 2020-03-25 14:46   ` Jan Beulich
  2020-03-28 10:14     ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-25 14:46 UTC (permalink / raw)
  To: julien
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Introduce handy helpers to generate/convert the CR3 from/to a MFN/GFN.
> 
> Note that we are using cr3_pa() rather than xen_cr3_to_pfn() because the
> latter does not ignore the top 12-bits.

I'm afraid this remark of yours points at some issue here:
cr3_pa() is meant to act on (real or virtual) CR3 values, but
not (necessarily) on para-virtual ones. E.g. ...

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1096,7 +1096,7 @@ int arch_set_info_guest(
>      set_bit(_VPF_in_reset, &v->pause_flags);
>  
>      if ( !compat )
> -        cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[3]));
> +        cr3_mfn = cr3_to_mfn(c.nat->ctrlreg[3]);

... you're now losing the top 12 bits here, potentially
making ...

>      else
>          cr3_mfn = _mfn(compat_cr3_to_pfn(c.cmp->ctrlreg[3]));
>      cr3_page = get_page_from_mfn(cr3_mfn, d);

... this succeed when it shouldn't.
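
To make the concern concrete with a made-up value (assuming cr3_pa()
merely masks with X86_CR3_ADDR_MASK, i.e. keeps just the address bits):

/* Hypothetical guest-supplied PV "CR3" with junk in its top bits. */
unsigned long bogus = xen_pfn_to_cr3(0x1234) | (0xffUL << 56);

/* Old code: the junk survives the shift, the resulting frame number is
 * far out of range, and get_page_from_mfn() would reject it. */
mfn_t cr3_mfn_old = _mfn(xen_cr3_to_pfn(bogus)); /* huge, invalid MFN */

/* New code: cr3_pa() strips the junk first, so the malformed value is
 * silently accepted and the page lookup can succeed. */
mfn_t cr3_mfn_new = cr3_to_mfn(bogus);           /* == _mfn(0x1234) */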

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -524,6 +524,26 @@ extern struct rangeset *mmio_ro_ranges;
>  #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
>  #define compat_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
>  
> +static inline unsigned long mfn_to_cr3(mfn_t mfn)
> +{
> +    return xen_pfn_to_cr3(mfn_x(mfn));
> +}
> +
> +static inline mfn_t cr3_to_mfn(unsigned long cr3)
> +{
> +    return maddr_to_mfn(cr3_pa(cr3));
> +}
> +
> +static inline unsigned long gfn_to_cr3(gfn_t gfn)
> +{
> +    return xen_pfn_to_cr3(gfn_x(gfn));
> +}
> +
> +static inline gfn_t cr3_to_gfn(unsigned long cr3)
> +{
> +    return gaddr_to_gfn(cr3_pa(cr3));
> +}

Overall I think that when introducing such helpers we need to be
very clear about their intended uses: Bare underlying hardware,
PV guests, or HVM guests. From this perspective I also think that
having MFN and GFN conversions next to each other may be more
confusing than helpful, especially since there are no uses
introduced here for the latter. When applied to HVM guests,
xen_pfn_to_cr3() also shouldn't be used, as that's a PV construct
in the public headers. Yet I think conversions to/from GFNs
should first and foremost be applicable to HVM guests.

A possible route to go may be to e.g. accompany
{xen,compat}_pfn_to_cr3() with {xen,compat}_mfn_to_cr3(), and
leave the GFN aspect out until such patch that would actually
use them (which may then make clear that these actually want
to live in a header specifically applicable to translated
guests).
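
As a sketch of what that route could look like (hypothetical names,
simply wrapping the existing {xen,compat}_pfn_to_cr3() constructs):

static inline unsigned long xen_mfn_to_cr3(mfn_t mfn)
{
    return xen_pfn_to_cr3(mfn_x(mfn));
}

static inline unsigned long compat_mfn_to_cr3(mfn_t mfn)
{
    return compat_pfn_to_cr3(mfn_x(mfn));
}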

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 02/17] xen/x86_64: Convert do_page_walk() to use typesafe MFN
  2020-03-22 16:14 ` [Xen-devel] [PATCH 02/17] xen/x86_64: Convert do_page_walk() to use typesafe MFN julien
@ 2020-03-25 14:51   ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-25 14:51 UTC (permalink / raw)
  To: julien
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> No functional changes intended.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header
  2020-03-22 16:14 ` [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header julien
@ 2020-03-25 15:00   ` Jan Beulich
  2020-03-25 18:09     ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-25 15:00 UTC (permalink / raw)
  To: julien
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Julien Grall,
	Ian Jackson, George Dunlap, xen-devel

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> It is getting incredibly difficult to use typesafe GFN/MFN/PFN in the
> headers because of circular dependency. For instance, asm-x86/page.h
> cannot include xen/mm.h.
> 
> In order to convert more code to use typesafe, the types are now moved
> in a separate header that requires only a few dependencies.

We definitely need to do this, so thanks for investing the
time. I think though that we want to settle up front (and
perhaps record in a comment in the new header) what is or
is not suitable to go into the new header. After all you're
moving not just type definitions, but also simple helper
functions.

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -1,50 +1,7 @@
>  /******************************************************************************
>   * include/xen/mm.h
>   *
> - * Definitions for memory pages, frame numbers, addresses, allocations, etc.
> - *
>   * Copyright (c) 2002-2006, K A Fraser <keir@xensource.com>
> - *
> - *                         +---------------------+
> - *                          Xen Memory Management
> - *                         +---------------------+
> - *
> - * Xen has to handle many different address spaces.  It is important not to
> - * get these spaces mixed up.  The following is a consistent terminology which
> - * should be adhered to.
> - *
> - * mfn: Machine Frame Number
> - *   The values Xen puts into its own pagetables.  This is the host physical
> - *   memory address space with RAM, MMIO etc.
> - *
> - * gfn: Guest Frame Number
> - *   The values a guest puts in its own pagetables.  For an auto-translated
> - *   guest (hardware assisted with 2nd stage translation, or shadowed), gfn !=
> - *   mfn.  For a non-translated guest which is aware of Xen, gfn == mfn.
> - *
> - * pfn: Pseudophysical Frame Number
> - *   A linear idea of a guest physical address space. For an auto-translated
> - *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
> - *
> - * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
> - *   The linear frame numbers of device DMA address space. All initiators for
> - *   (i.e. all devices assigned to) a guest share a single DMA address space
> - *   and, by default, Xen will ensure dfn == pfn.
> - *
> - * WARNING: Some of these terms have changed over time while others have been
> - * used inconsistently, meaning that a lot of existing code does not match the
> - * definitions above.  New code should use these terms as described here, and
> - * over time older code should be corrected to be consistent.
> - *
> - * An incomplete list of larger work area:
> - * - Phase out the use of 'pfn' from the x86 pagetable code.  Callers should
> - *   know explicitly whether they are talking about mfns or gfns.
> - * - Phase out the use of 'pfn' from the ARM mm code.  A cursory glance
> - *   suggests that 'mfn' and 'pfn' are currently used interchangeably, where
> - *   'mfn' is the appropriate term to use.
> - * - Phase out the use of gpfn/gmfn where pfn/mfn are meant.  This excludes
> - *   the x86 shadow code, which uses gmfn/smfn pairs with different,
> - *   documented, meanings.
>   */
>  
>  #ifndef __XEN_MM_H__
> @@ -54,100 +11,11 @@
>  #include <xen/types.h>
>  #include <xen/list.h>
>  #include <xen/spinlock.h>
> -#include <xen/typesafe.h>
>  #include <xen/kernel.h>
> +#include <xen/mm_types.h>

Is there anything left in the header here which requires the
explicit inclusion of xen/kernel.h?

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN
  2020-03-22 16:14 ` [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN julien
@ 2020-03-25 15:27   ` Jan Beulich
  2020-03-25 18:21     ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-25 15:27 UTC (permalink / raw)
  To: julien
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Julien Grall, Ian Jackson, George Dunlap,
	Ross Lagerwall, Lukasz Hawrylko, xen-devel, Volodymyr Babchuk,
	Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> @@ -785,21 +781,21 @@ bool is_iomem_page(mfn_t mfn)
>      return (page_get_owner(page) == dom_io);
>  }
>  
> -static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
> +static int update_xen_mappings(mfn_t mfn, unsigned int cacheattr)
>  {
>      int err = 0;
> -    bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
> -         mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
> +    bool alias = mfn_x(mfn) >= PFN_DOWN(xen_phys_start) &&
> +         mfn_x(mfn) < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>      unsigned long xen_va =
> -        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
> +        XEN_VIRT_START + mfn_to_maddr(mfn_add(mfn, -PFN_DOWN(xen_phys_start)));

Depending on the types involved (e.g. in PFN_DOWN()) this may
or may not be safe, so I consider such a transformation at
least fragile. I think we either want to gain mfn_sub() or
keep this as a "real" subtraction.
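
For reference, a minimal sketch of such a helper, mirroring the existing
mfn_add() (mfn_sub() does not exist yet, so name and shape are only a
suggestion):

/* The subtrahend is widened to unsigned long as part of argument
 * passing, so an unsigned int coming out of PFN_DOWN() cannot end up
 * mis-negated by the caller. */
static inline mfn_t mfn_sub(mfn_t mfn, unsigned long i)
{
    return _mfn(mfn_x(mfn) - i);
}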

> @@ -584,21 +584,21 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>          needed = 0;
>      }
>      else if ( *use_tail && nr >= needed &&
> -              arch_mfn_in_directmap(mfn + nr) &&
> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, nr))) &&
>                (!xenheap_bits ||
> -               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
> +               !((mfn_x(mfn) + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )

May I suggest consistency here: This one uses +, while ...

>      {
> -        _heap[node] = mfn_to_virt(mfn + nr - needed);
> -        avail[node] = mfn_to_virt(mfn + nr - 1) +
> +        _heap[node] = mfn_to_virt(mfn_add(mfn, nr - needed));
> +        avail[node] = mfn_to_virt(mfn_add(mfn, nr - 1)) +
>                        PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>      }
>      else if ( nr >= needed &&
> -              arch_mfn_in_directmap(mfn + needed) &&
> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, needed))) &&

... this one uses mfn_add() despite the mfn_x() around it, and ...

>                (!xenheap_bits ||
> -               !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
> +               !((mfn_x(mfn) + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )

... here you use + again. My personal preference would be to avoid
constructs like mfn_x(mfn_add()).

> @@ -269,10 +270,10 @@ out_dealloc:
>              continue;
>          for ( i = 0; i < pages; i++ )
>          {
> -            uint32_t mfn = t_info_mfn_list[offset + i];
> -            if ( !mfn )
> +            mfn_t mfn = _mfn(t_info_mfn_list[offset + i]);
> +            if ( mfn_eq(mfn, _mfn(0)) )

Please could you take the opportunity and add the missing blank line
between these two?

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -667,7 +667,7 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
>  {
>      unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
>  
> -    return mfn <= (virt_to_mfn(eva - 1) + 1);
> +    return mfn <= mfn_x(mfn_add(virt_to_mfn(eva - 1),  1));

Even if you wanted to stick to using mfn_add() here, there's one
blank too many after the comma.

With these taken care of
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header
  2020-03-25 15:00   ` Jan Beulich
@ 2020-03-25 18:09     ` Julien Grall
  2020-03-26  9:02       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-03-25 18:09 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Julien Grall,
	Ian Jackson, George Dunlap, xen-devel

Hi Jan,

On 25/03/2020 15:00, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> It is getting incredibly difficult to use typesafe GFN/MFN/PFN in the
>> headers because of circular dependency. For instance, asm-x86/page.h
>> cannot include xen/mm.h.
>>
>> In order to convert more code to use typesafe, the types are now moved
>> in a separate header that requires only a few dependencies.
> 
> We definitely need to do this, so thanks for investing the
> time. I think though that we want to settle up front (and
> perhaps record in a comment in the new header) what is or
> is not suitable to go into the new header. After all you're
> moving not just type definitions, but also simple helper
> functions.

I am expecting headers to use the typesafe helpers (such as mfn_add())
in the long term. So I would like the new header to contain the type
definitions and any wrappers that make 'generic' operations typesafe.

I am not entirely sure yet how to formalize the rules in the header. Any 
ideas?

> 
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -1,50 +1,7 @@
>>   /******************************************************************************
>>    * include/xen/mm.h
>>    *
>> - * Definitions for memory pages, frame numbers, addresses, allocations, etc.
>> - *
>>    * Copyright (c) 2002-2006, K A Fraser <keir@xensource.com>
>> - *
>> - *                         +---------------------+
>> - *                          Xen Memory Management
>> - *                         +---------------------+
>> - *
>> - * Xen has to handle many different address spaces.  It is important not to
>> - * get these spaces mixed up.  The following is a consistent terminology which
>> - * should be adhered to.
>> - *
>> - * mfn: Machine Frame Number
>> - *   The values Xen puts into its own pagetables.  This is the host physical
>> - *   memory address space with RAM, MMIO etc.
>> - *
>> - * gfn: Guest Frame Number
>> - *   The values a guest puts in its own pagetables.  For an auto-translated
>> - *   guest (hardware assisted with 2nd stage translation, or shadowed), gfn !=
>> - *   mfn.  For a non-translated guest which is aware of Xen, gfn == mfn.
>> - *
>> - * pfn: Pseudophysical Frame Number
>> - *   A linear idea of a guest physical address space. For an auto-translated
>> - *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
>> - *
>> - * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
>> - *   The linear frame numbers of device DMA address space. All initiators for
>> - *   (i.e. all devices assigned to) a guest share a single DMA address space
>> - *   and, by default, Xen will ensure dfn == pfn.
>> - *
>> - * WARNING: Some of these terms have changed over time while others have been
>> - * used inconsistently, meaning that a lot of existing code does not match the
>> - * definitions above.  New code should use these terms as described here, and
>> - * over time older code should be corrected to be consistent.
>> - *
>> - * An incomplete list of larger work area:
>> - * - Phase out the use of 'pfn' from the x86 pagetable code.  Callers should
>> - *   know explicitly whether they are talking about mfns or gfns.
>> - * - Phase out the use of 'pfn' from the ARM mm code.  A cursory glance
>> - *   suggests that 'mfn' and 'pfn' are currently used interchangeably, where
>> - *   'mfn' is the appropriate term to use.
>> - * - Phase out the use of gpfn/gmfn where pfn/mfn are meant.  This excludes
>> - *   the x86 shadow code, which uses gmfn/smfn pairs with different,
>> - *   documented, meanings.
>>    */
>>   
>>   #ifndef __XEN_MM_H__
>> @@ -54,100 +11,11 @@
>>   #include <xen/types.h>
>>   #include <xen/list.h>
>>   #include <xen/spinlock.h>
>> -#include <xen/typesafe.h>
>>   #include <xen/kernel.h>
>> +#include <xen/mm_types.h>
> 
> Is there anything left in the header here which requires the
> explicit inclusion of xen/kernel.h?

The inclusion was only there for the typesafe version of the min/max
helpers, so it should be possible to drop it.

I will have a look and remove it if possible.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN
  2020-03-25 15:27   ` Jan Beulich
@ 2020-03-25 18:21     ` Julien Grall
  2020-03-26  9:09       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-03-25 18:21 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Julien Grall, Ian Jackson, George Dunlap,
	Ross Lagerwall, Lukasz Hawrylko, xen-devel, Volodymyr Babchuk,
	Roger Pau Monné

Hi Jan,

On 25/03/2020 15:27, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> @@ -785,21 +781,21 @@ bool is_iomem_page(mfn_t mfn)
>>       return (page_get_owner(page) == dom_io);
>>   }
>>   
>> -static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
>> +static int update_xen_mappings(mfn_t mfn, unsigned int cacheattr)
>>   {
>>       int err = 0;
>> -    bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
>> -         mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>> +    bool alias = mfn_x(mfn) >= PFN_DOWN(xen_phys_start) &&
>> +         mfn_x(mfn) < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>>       unsigned long xen_va =
>> -        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
>> +        XEN_VIRT_START + mfn_to_maddr(mfn_add(mfn, -PFN_DOWN(xen_phys_start)));
> 
> Depending on the types involved (e.g. in PFN_DOWN()) this may
> or may not be safe, so I consider such a transformation at
> least fragile. I think we either want to gain mfn_sub() or
> keep this as a "real" subtraction.
I want to avoid mfn_x() as much as possible when everything can be done
using typesafe operations. But I am not sure how mfn_sub() would solve
the problem. Would you mind providing more information?

> 
>> @@ -584,21 +584,21 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>>           needed = 0;
>>       }
>>       else if ( *use_tail && nr >= needed &&
>> -              arch_mfn_in_directmap(mfn + nr) &&
>> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, nr))) &&
>>                 (!xenheap_bits ||
>> -               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>> +               !((mfn_x(mfn) + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
> 
> May I suggest consistency here: This one uses +, while ...
> 
>>       {
>> -        _heap[node] = mfn_to_virt(mfn + nr - needed);
>> -        avail[node] = mfn_to_virt(mfn + nr - 1) +
>> +        _heap[node] = mfn_to_virt(mfn_add(mfn, nr - needed));
>> +        avail[node] = mfn_to_virt(mfn_add(mfn, nr - 1)) +
>>                         PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>>       }
>>       else if ( nr >= needed &&
>> -              arch_mfn_in_directmap(mfn + needed) &&
>> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, needed))) &&
> 
> ... this one uses mfn_add() despite the mfn_x() around it, and ...

So the reason I used mfn_x(mfn_add(mfn, needed)) here is that I plan to
convert arch_mfn_in_directmap() to use typesafe MFN soon. In the two
other cases...

>>                 (!xenheap_bits ||
>> -               !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>> +               !((mfn_x(mfn) + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
> 
> ... here you use + again. My personal preference would be to avoid
> constructs like mfn_x(mfn_add()).

... I am still unsure how to avoid mfn_x(). Do you have any ideas?
> 
>> @@ -269,10 +270,10 @@ out_dealloc:
>>               continue;
>>           for ( i = 0; i < pages; i++ )
>>           {
>> -            uint32_t mfn = t_info_mfn_list[offset + i];
>> -            if ( !mfn )
>> +            mfn_t mfn = _mfn(t_info_mfn_list[offset + i]);
>> +            if ( mfn_eq(mfn, _mfn(0)) )
> 
> Please could you take the opportunity and add the missing blank line
> between these two?

Sure.

> 
>> --- a/xen/include/asm-x86/mm.h
>> +++ b/xen/include/asm-x86/mm.h
>> @@ -667,7 +667,7 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
>>   {
>>       unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
>>   
>> -    return mfn <= (virt_to_mfn(eva - 1) + 1);
>> +    return mfn <= mfn_x(mfn_add(virt_to_mfn(eva - 1),  1));
> 
> Even if you wanted to stick to using mfn_add() here, there's one
> blank too many after the comma.

I will remove the extra blank. Regarding the construction, I have been 
wondering for a couple of years now whether we should introduce mfn_{lt, 
gt}. What do you think?


> 
> With these taken care of
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Thank you for the review.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header
  2020-03-25 18:09     ` Julien Grall
@ 2020-03-26  9:02       ` Jan Beulich
  2020-03-28 10:15         ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-26  9:02 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Julien Grall,
	Ian Jackson, George Dunlap, xen-devel

On 25.03.2020 19:09, Julien Grall wrote:
> Hi Jan,
> 
> On 25/03/2020 15:00, Jan Beulich wrote:
>> On 22.03.2020 17:14, julien@xen.org wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> It is getting incredibly difficult to use typesafe GFN/MFN/PFN in the
>>> headers because of circular dependency. For instance, asm-x86/page.h
>>> cannot include xen/mm.h.
>>>
>>> In order to convert more code to use typesafe, the types are now moved
>>> in a separate header that requires only a few dependencies.
>>
>> We definitely need to do this, so thanks for investing the
>> time. I think though that we want to settle up front (and
>> perhaps record in a comment in the new header) what is or
>> is not suitable to go into the new header. After all you're
>> moving not just type definitions, but also simple helper
>> functions.
> 
> I am expecting headers to use the typesafe helpers (such as mfn_add())
> in the long term. So I would like the new header to contain the
> type definitions and any wrappers that make 'generic' operations
> typesafe.
> 
> I am not entirely sure yet how to formalize the rules in the
> header. Any ideas?

Well, if the header was just for the typesafe types, it could be
renamed (to e.g. mm-typesafe.h) and be left without any respective
comment. The issue I've mentioned arises if, with its currently
suggested name, further types get added. In such a case perhaps it
could be "type definitions and their immediate accessors,
involving no other non-trivial types"?
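
In the latter case the new header could perhaps open with something
along these lines (merely a sketch of the wording):

/*
 * This header is only meant to hold the typesafe frame number types
 * (mfn_t, gfn_t, pfn_t, ...) together with their immediate accessors,
 * i.e. trivial helpers involving no other non-trivial types.
 */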

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN
  2020-03-25 18:21     ` Julien Grall
@ 2020-03-26  9:09       ` Jan Beulich
  2020-03-28 10:33         ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-26  9:09 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Julien Grall, Ian Jackson, George Dunlap,
	Ross Lagerwall, Lukasz Hawrylko, xen-devel, Volodymyr Babchuk,
	Roger Pau Monné

On 25.03.2020 19:21, Julien Grall wrote:
> On 25/03/2020 15:27, Jan Beulich wrote:
>> On 22.03.2020 17:14, julien@xen.org wrote:
>>> @@ -785,21 +781,21 @@ bool is_iomem_page(mfn_t mfn)
>>>       return (page_get_owner(page) == dom_io);
>>>   }
>>>   -static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
>>> +static int update_xen_mappings(mfn_t mfn, unsigned int cacheattr)
>>>   {
>>>       int err = 0;
>>> -    bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
>>> -         mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>>> +    bool alias = mfn_x(mfn) >= PFN_DOWN(xen_phys_start) &&
>>> +         mfn_x(mfn) < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>>>       unsigned long xen_va =
>>> -        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
>>> +        XEN_VIRT_START + mfn_to_maddr(mfn_add(mfn, -PFN_DOWN(xen_phys_start)));
>>
>> Depending on the types involved (e.g. in PFN_DOWN()) this may
>> or may not be safe, so I consider such a transformation at
>> least fragile. I think we either want to gain mfn_sub() or
>> keep this as a "real" subtraction.
> I want to avoid mfn_x() as much as possible when everything can
> be done using typesafe operations. But I am not sure how
> mfn_sub() would solve the problem. Would you mind providing more
> information?

Consider PFN_DOWN() potentially returning "unsigned int". The
negation of an unsigned int is still an unsigned int, and hence
e.g. -1U (which might result here) is really 0xFFFFFFFF rather
than -1L / -1UL as intended. Whereas with mfn_sub() the
conversion to unsigned long of the (positive) value to subtract
would occur as part of evaluating function arguments, and the
resulting subtraction would then be correct.
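
A standalone illustration of that promotion pitfall (assuming an LP64
build, with an unsigned int standing in for PFN_DOWN()'s result):

#include <stdio.h>

int main(void)
{
    unsigned int pfn_down = 1;   /* pretend PFN_DOWN() yields unsigned int */
    unsigned long base = 0x1000;

    /* Negating first keeps the value 32 bits wide: -1U == 0xFFFFFFFF,
     * which then widens to 0xFFFFFFFFUL instead of -1UL. */
    unsigned long wrong = base + -pfn_down;  /* 0x100000fff, not 0xfff */

    /* Subtracting the (positive) value after it has widened is fine. */
    unsigned long right = base - pfn_down;   /* 0xfff */

    printf("%#lx %#lx\n", wrong, right);
    return 0;
}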

>>> @@ -584,21 +584,21 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>>>           needed = 0;
>>>       }
>>>       else if ( *use_tail && nr >= needed &&
>>> -              arch_mfn_in_directmap(mfn + nr) &&
>>> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, nr))) &&
>>>                 (!xenheap_bits ||
>>> -               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>> +               !((mfn_x(mfn) + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>
>> May I suggest consistency here: This one uses +, while ...
>>
>>>       {
>>> -        _heap[node] = mfn_to_virt(mfn + nr - needed);
>>> -        avail[node] = mfn_to_virt(mfn + nr - 1) +
>>> +        _heap[node] = mfn_to_virt(mfn_add(mfn, nr - needed));
>>> +        avail[node] = mfn_to_virt(mfn_add(mfn, nr - 1)) +
>>>                         PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>>>       }
>>>       else if ( nr >= needed &&
>>> -              arch_mfn_in_directmap(mfn + needed) &&
>>> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, needed))) &&
>>
>> ... this one uses mfn_add() despite the mfn_x() around it, and ...
> 
> So the reason I used mfn_x(mfn_add(mfn, needed)) here is that I plan
> to convert arch_mfn_in_directmap() to use typesafe MFN soon. In the
> two other cases...
> 
>>>                 (!xenheap_bits ||
>>> -               !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>> +               !((mfn_x(mfn) + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>
>> ... here you use + again. My personal preference would be to avoid
>> constructs like mfn_x(mfn_add()).
> 
> ... I am still unsure how to avoid mfn_x(). Do you have any ideas?

I don't see how it can be avoided right now. But I also don't see
why - for consistency, as said - you couldn't use mfn_x() also in
the middle case. You could then still convert to mfn_add() with
that future change of yours.

>>> --- a/xen/include/asm-x86/mm.h
>>> +++ b/xen/include/asm-x86/mm.h
>>> @@ -667,7 +667,7 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
>>>   {
>>>       unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
>>>   -    return mfn <= (virt_to_mfn(eva - 1) + 1);
>>> +    return mfn <= mfn_x(mfn_add(virt_to_mfn(eva - 1),  1));
>>
>> Even if you wanted to stick to using mfn_add() here, there's one
>> blank too many after the comma.
> 
> I will remove the extra blank. Regarding the construction, I have
> been wondering for a couple of years now whether we should
> introduce mfn_{lt, gt}. What do you think?

I too have been wondering, and wouldn't mind their introduction
(plus mfn_le / mfn_ge perhaps). But it'll truly help you here
anyway only once the function parameter is also mfn_t.
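
A sketch of what those comparison helpers could look like (names as
discussed above; placement and final form obviously still to be decided):

    static inline bool mfn_lt(mfn_t x, mfn_t y) { return mfn_x(x) < mfn_x(y); }
    static inline bool mfn_gt(mfn_t x, mfn_t y) { return mfn_x(x) > mfn_x(y); }
    static inline bool mfn_le(mfn_t x, mfn_t y) { return mfn_x(x) <= mfn_x(y); }
    static inline bool mfn_ge(mfn_t x, mfn_t y) { return mfn_x(x) >= mfn_x(y); }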

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-03-22 16:14 ` [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers julien
@ 2020-03-26 15:39   ` Jan Beulich
  2020-03-28 10:52     ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-26 15:39 UTC (permalink / raw)
  To: julien
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Julien Grall, Tim Deegan,
	George Dunlap, Jun Nakajima, xen-devel, Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -952,25 +952,27 @@ int arch_set_info_guest(
>      }
>      else
>      {
> -        unsigned long pfn = pagetable_get_pfn(v->arch.guest_table);
> +        mfn_t mfn = pagetable_get_mfn(v->arch.guest_table);
>          bool fail;
>  
>          if ( !compat )
>          {
> -            fail = xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[3];
> +            fail = mfn_to_cr3(mfn) != c.nat->ctrlreg[3];

The patch, besides a few other comments further down, looks fine
on its own, but I don't think it can be acked without seeing the
effects of the adjustments pending to the patch introducing
mfn_to_cr3() and friends.

> @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
>  
>      /* Free that page if non-zero */
>      do {
> -        if ( mfn )
> +        if ( !mfn_eq(mfn, _mfn(0)) )

I admit I'm not fully certain either, but at the first glance

        if ( mfn_x(mfn) )

would seem more in line with the original code to me (and then
also elsewhere).

> @@ -3560,19 +3561,18 @@ long do_mmuext_op(
>              if ( unlikely(rc) )
>                  break;
>  
> -            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
> +            old_mfn = pagetable_get_mfn(curr->arch.guest_table_user);
>              /*
>               * This is particularly important when getting restarted after the
>               * previous attempt got preempted in the put-old-MFN phase.
>               */
> -            if ( old_mfn == op.arg1.mfn )
> +            if ( mfn_eq(old_mfn, new_mfn) )
>                  break;
>  
> -            if ( op.arg1.mfn != 0 )
> +            if ( !mfn_eq(new_mfn, _mfn(0)) )

At least here I would clearly prefer the old code to be kept.

> @@ -3580,19 +3580,19 @@ long do_mmuext_op(
>                      else if ( rc != -ERESTART )
>                          gdprintk(XENLOG_WARNING,
>                                   "Error %d installing new mfn %" PRI_mfn "\n",
> -                                 rc, op.arg1.mfn);
> +                                 rc, mfn_x(new_mfn));

Here I'm also not sure I see the point of the conversion.

> @@ -2351,11 +2351,11 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
>      ASSERT(mfn_valid(smfn));
>  #endif
>  
> -    if ( pagetable_get_pfn(v->arch.shadow_table[0]) == mfn_x(smfn)
> +    if ( mfn_eq(pagetable_get_mfn(v->arch.shadow_table[0]), smfn)
>  #if (SHADOW_PAGING_LEVELS == 3)
> -         || pagetable_get_pfn(v->arch.shadow_table[1]) == mfn_x(smfn)
> -         || pagetable_get_pfn(v->arch.shadow_table[2]) == mfn_x(smfn)
> -         || pagetable_get_pfn(v->arch.shadow_table[3]) == mfn_x(smfn)
> +         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[1]), smfn)
> +         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[2]), smfn)
> +         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[3]), smfn)
>  #endif
>          )

While here moving the || to their designated places would make
the code look worse overall, ...

> @@ -3707,7 +3707,7 @@ sh_update_linear_entries(struct vcpu *v)
>  
>      /* Don't try to update the monitor table if it doesn't exist */
>      if ( shadow_mode_external(d)
> -         && pagetable_get_pfn(v->arch.monitor_table) == 0 )
> +         && pagetable_is_null(v->arch.monitor_table) )

... could I talk you into moving the && here to the end of the
previous line, as you're touching this anyway?

Also, seeing there's quite a few conversions to pagetable_is_null()
and also seeing that this patch is quite big - could this
conversion be split out?

> @@ -213,17 +214,17 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
>  #ifndef __ASSEMBLY__
>  
>  /* Page-table type. */
> -typedef struct { u64 pfn; } pagetable_t;
> -#define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
> +typedef struct { mfn_t mfn; } pagetable_t;
> +#define PAGETABLE_NULL_MFN      _mfn(0)

I'd prefer to get away without this constant.

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn'
  2020-03-22 16:14 ` [Xen-devel] [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn' julien
@ 2020-03-26 15:51   ` Jan Beulich
  2020-04-18 10:54     ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-26 15:51 UTC (permalink / raw)
  To: julien
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> We are using 'mfn' to refer to a machine frame. As this function deals
> with an 'mfn', replace 'pfn' with 'mfn'.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> I am not entirely sure I understand the comment on top of the
> function, so this change may be wrong.

Looking at the history of the function, ...

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1321,7 +1321,7 @@ static int put_data_pages(struct page_info *page, bool writeable, int pt_shift)
>  }
>  
>  /*
> - * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
> + * NB. Virtual address 'l2e' maps to a machine address within frame 'mfn'.
>   * Note also that this automatically deals correctly with linear p.t.'s.
>   */
>  static int put_page_from_l2e(l2_pgentry_t l2e, mfn_t l2mfn, unsigned int flags)

... it used to be

static int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)

When the rename occurred (in the context of or as a follow-up to an
XSA iirc), the comment adjustment was apparently missed. With the
referenced name matching that of the function argument (l2mfn):
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN
  2020-03-22 16:14 ` [Xen-devel] [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN julien
@ 2020-03-26 15:54   ` Jan Beulich
  2020-04-18 11:01     ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-26 15:54 UTC (permalink / raw)
  To: julien
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Note that the code is now using cr3_to_mfn() to get the MFN. This is
> slightly different as the top 12-bits will now be masked.

And here I agree with the change. Hence it is all the more important
that the patch introducing the new helper(s) gets sorted first.
Should there be further patches in this series with this same
interaction issue, I won't point it out again and may not respond at
all if I see no other issues.

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn()
  2020-03-22 16:14 ` [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn() julien
@ 2020-03-27 10:52   ` Jan Beulich
  2020-03-28 10:53     ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 10:52 UTC (permalink / raw)
  To: julien
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 22.03.2020 17:14, julien@xen.org wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1138,7 +1138,7 @@ static int
>  get_page_from_l2e(
>      l2_pgentry_t l2e, mfn_t l2mfn, struct domain *d, unsigned int flags)
>  {
> -    unsigned long mfn = l2e_get_pfn(l2e);
> +    mfn_t mfn = l2e_get_mfn(l2e);
>      int rc;
>  
>      if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
> @@ -1150,7 +1150,7 @@ get_page_from_l2e(
>  
>      ASSERT(!(flags & PTF_preemptible));
>  
> -    rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, flags);
> +    rc = get_page_and_type_from_mfn(mfn, PGT_l1_page_table, d, flags);

To bring this better in line with the L3 and L4 counterparts,
could you please drop the local variable instead? Then
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 10/17] xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version
  2020-03-22 16:14 ` [Xen-devel] [PATCH 10/17] xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version julien
@ 2020-03-27 11:34   ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 11:34 UTC (permalink / raw)
  To: julien
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> _mfn(addr >> PAGE_SHIFT) is equivalent to maddr_to_mfn(addr).
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 11/17] xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga()
  2020-03-22 16:14 ` [Xen-devel] [PATCH 11/17] xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga() julien
@ 2020-03-27 11:35   ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 11:35 UTC (permalink / raw)
  To: julien
  Cc: Wei Liu, Andrew Cooper, Julien Grall, George Dunlap, xen-devel,
	Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 12/17] xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m()
  2020-03-22 16:14 ` [Xen-devel] [PATCH 12/17] xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m() julien
@ 2020-03-27 11:35   ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 11:35 UTC (permalink / raw)
  To: julien
  Cc: Wei Liu, Andrew Cooper, George Dunlap, Julien Grall, xen-devel,
	Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> p2m_pt_audit_p2m() has one place where the same message may be printed
> twice via printk and P2M_PRINTK.
> 
> Remove the one printed using printk to stay consistent with the rest of
> the code.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 13/17] xen/x86: p2m: Reflow P2M_PRINTK()s in p2m_pt_audit_p2m()
  2020-03-22 16:14 ` [Xen-devel] [PATCH 13/17] xen/x86: p2m: Reflow P2M_PRINTK()s " julien
@ 2020-03-27 11:36   ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 11:36 UTC (permalink / raw)
  To: julien
  Cc: Wei Liu, Andrew Cooper, Julien Grall, George Dunlap, xen-devel,
	Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> We tend to avoid splitting messages across multiple lines, so they are
> easier to find.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 14/17] xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline function
  2020-03-22 16:14 ` [Xen-devel] [PATCH 14/17] xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline function julien
@ 2020-03-27 12:44   ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 12:44 UTC (permalink / raw)
  To: julien
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> set_gpfn_from_mfn() is currently implemented as a two-part macro. The
> second macro is only called within the first macro, so they can be
> folded together.
> 
> Furthermore, this is now converted to a static inline making the code
> more readable and safer.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 15/17] xen/x86: p2m: Rework printk format in audit_p2m()
  2020-03-22 16:14 ` [Xen-devel] [PATCH 15/17] xen/x86: p2m: Rework printk format in audit_p2m() julien
@ 2020-03-27 12:45   ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 12:45 UTC (permalink / raw)
  To: julien
  Cc: Wei Liu, Andrew Cooper, George Dunlap, Julien Grall, xen-devel,
	Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> One of the printk formats in audit_p2m() may be difficult to read as it
> is not clear what the first number is.
> 
> Furthermore, the format can now take advantage of %pd.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN
  2020-03-22 16:14 ` [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN julien
  2020-03-23 12:11   ` Hongyan Xia
@ 2020-03-27 13:15   ` Jan Beulich
  2020-03-28 11:14     ` Julien Grall
  1 sibling, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 13:15 UTC (permalink / raw)
  To: julien
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Julien Grall, Tamas K Lengyel, xen-devel,
	Volodymyr Babchuk, Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> @@ -983,19 +984,20 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
>                  /* check for 1GB super page */
>                  if ( l3e_get_flags(l3e[i3]) & _PAGE_PSE )
>                  {
> -                    mfn = l3e_get_pfn(l3e[i3]);
> -                    ASSERT(mfn_valid(_mfn(mfn)));
> +                    mfn = l3e_get_mfn(l3e[i3]);
> +                    ASSERT(mfn_valid(mfn));
>                      /* we have to cover 512x512 4K pages */
>                      for ( i2 = 0; 
>                            i2 < (L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES);
>                            i2++)
>                      {
> -                        m2pfn = get_gpfn_from_mfn(mfn+i2);
> +                        m2pfn = get_pfn_from_mfn(mfn_add(mfn, i2));
>                          if ( m2pfn != (gfn + i2) )
>                          {
>                              pmbad++;
> -                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
> -                                       gfn + i2, mfn + i2, m2pfn);
> +                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" gfn %#lx\n",
> +                                       gfn + i2, mfn_x(mfn_add(mfn, i2)),

As in the earlier patch, "mfn_x(mfn) + i2" would be shorter and
hence imo preferable, especially in printk() and similar invocations.

I would also prefer if you left %#lx alone, with the 2nd best
option being to also use PRI_gfn alongside PRI_mfn. Primarily
I'd like to avoid having a mixture.

Same (for both) at least one more time further down.

> @@ -974,7 +974,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
>                  P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
>                            gfn_x(ogfn) , mfn_x(omfn));
>                  if ( mfn_eq(omfn, mfn_add(mfn, i)) )
> -                    p2m_remove_page(p2m, gfn_x(ogfn), mfn_x(mfn_add(mfn, i)),
> +                    p2m_remove_page(p2m, gfn_x(ogfn), mfn_add(mfn, i),
>                                      0);

Pull this up then onto the now shorter prior line?

> @@ -2843,53 +2843,53 @@ void audit_p2m(struct domain *d,
>      spin_lock(&d->page_alloc_lock);
>      page_list_for_each ( page, &d->page_list )
>      {
> -        mfn = mfn_x(page_to_mfn(page));
> +        mfn = page_to_mfn(page);
>  
> -        P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn);
> +        P2M_PRINTK("auditing guest page, mfn=%"PRI_mfn"\n", mfn_x(mfn));
>  
>          od = page_get_owner(page);
>  
>          if ( od != d )
>          {
> -            P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn, od, d);
> +            P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn_x(mfn), od, d);
>              continue;
>          }
>  
> -        gfn = get_gpfn_from_mfn(mfn);
> +        gfn = get_pfn_from_mfn(mfn);
>          if ( gfn == INVALID_M2P_ENTRY )
>          {
>              orphans_count++;
> -            P2M_PRINTK("orphaned guest page: mfn=%#lx has invalid gfn\n",
> -                           mfn);
> +            P2M_PRINTK("orphaned guest page: mfn=%"PRI_mfn" has invalid gfn\n",
> +                       mfn_x(mfn));
>              continue;
>          }
>  
>          if ( SHARED_M2P(gfn) )
>          {
> -            P2M_PRINTK("shared mfn (%lx) on domain page list!\n",
> -                    mfn);
> +            P2M_PRINTK("shared mfn (%"PRI_mfn") on domain page list!\n",
> +                       mfn_x(mfn));
>              continue;
>          }
>  
>          p2mfn = get_gfn_type_access(p2m, gfn, &type, &p2ma, 0, NULL);
> -        if ( mfn_x(p2mfn) != mfn )
> +        if ( !mfn_eq(p2mfn, mfn) )
>          {
>              mpbad++;
> -            P2M_PRINTK("map mismatch mfn %#lx -> gfn %#lx -> mfn %#lx"
> +            P2M_PRINTK("map mismatch mfn %"PRI_mfn" -> gfn %#lx -> mfn %"PRI_mfn""
>                         " (-> gfn %#lx)\n",
> -                       mfn, gfn, mfn_x(p2mfn),
> +                       mfn_x(mfn), gfn, mfn_x(p2mfn),
>                         (mfn_valid(p2mfn)
> -                        ? get_gpfn_from_mfn(mfn_x(p2mfn))
> +                        ? get_pfn_from_mfn(p2mfn)
>                          : -1u));

I realize this is an entirely unrelated change, but the -1u here
is standing out too much to not mention it: Could I talk you into
making this gfn_x(INVALID_GFN) at this occasion?

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -500,9 +500,10 @@ extern paddr_t mem_hotplug;
>   */
>  extern bool machine_to_phys_mapping_valid;
>  
> -static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
> +static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn)
>  {
> -    const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn)));
> +    const unsigned long mfn = mfn_x(mfn_);

I think it would be better overall if the parameter was named
"mfn" and there was no local variable at all. This would
bring things in line with ...

> @@ -515,11 +516,14 @@ static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
>  
>  extern struct rangeset *mmio_ro_ranges;
>  
> -#define get_gpfn_from_mfn(mfn)      (machine_to_phys_mapping[(mfn)])
> +static inline unsigned long get_pfn_from_mfn(mfn_t mfn)
> +{
> +    return machine_to_phys_mapping[mfn_x(mfn)];
> +}

... this.

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
  2020-03-22 16:14 ` [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn julien
  2020-03-23  8:37   ` Paul Durrant
@ 2020-03-27 13:50   ` Jan Beulich
  2020-03-27 13:59     ` Julien Grall
  1 sibling, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-27 13:50 UTC (permalink / raw)
  To: julien
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Paul Durrant,
	Andrew Cooper, Ian Jackson, George Dunlap, Tim Deegan,
	Julien Grall, Jun Nakajima, xen-devel, Volodymyr Babchuk,
	Roger Pau Monné

On 22.03.2020 17:14, julien@xen.org wrote:
> --- a/xen/arch/x86/hvm/domain.c
> +++ b/xen/arch/x86/hvm/domain.c
> @@ -296,8 +296,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>      if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
>      {
>          /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
> -        struct page_info *page = get_page_from_gfn(v->domain,
> -                                 v->arch.hvm.guest_cr[3] >> PAGE_SHIFT,
> +        struct page_info *page;
> +
> +        page = get_page_from_gfn(v->domain,
> +                                 gaddr_to_gfn(v->arch.hvm.guest_cr[3]),

My earlier comment on this remains - I think this conversion makes
the problem this expression has more hidden than the shift did.
This would better use a gfn_from_cr3() helper (or whatever it'll
be that it gets named). Same elsewhere in this patch then.
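
For illustration, such a helper could simply mask off the control bits
before converting, along the lines of the cr3_to_gfn() introduced earlier
in this series (the name gfn_from_cr3() is only a placeholder):

    static inline gfn_t gfn_from_cr3(unsigned long cr3)
    {
        /* Strip the PCID/flag bits before forming a frame number. */
        return gaddr_to_gfn(cr3_pa(cr3));
    }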

> @@ -2363,7 +2364,7 @@ int hvm_set_cr3(unsigned long value, bool may_defer)
>      {
>          /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
>          HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR3 value = %lx", value);
> -        page = get_page_from_gfn(v->domain, value >> PAGE_SHIFT,
> +        page = get_page_from_gfn(v->domain, cr3_to_gfn(value),

Oh, seeing this I recall Paul did point out the above already.

> @@ -508,7 +508,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
>  {
>      struct domain *d = current->domain;
>      uint32_t vcpu_id;
> -    uint64_t gfn;
> +    gfn_t gfn;
>      uint32_t offset;
>      struct vcpu *v;
>      int rc;
> @@ -516,7 +516,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
>      init_control->link_bits = EVTCHN_FIFO_LINK_BITS;
>  
>      vcpu_id = init_control->vcpu;
> -    gfn     = init_control->control_gfn;
> +    gfn     = _gfn(init_control->control_gfn);

There's silent truncation here now for Arm32, afaict. Are we really
okay with this?

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
  2020-03-27 13:50   ` Jan Beulich
@ 2020-03-27 13:59     ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-03-27 13:59 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Paul Durrant,
	Andrew Cooper, Ian Jackson, George Dunlap, Tim Deegan,
	Julien Grall, Jun Nakajima, xen-devel, Volodymyr Babchuk,
	Roger Pau Monné

Hi Jan,

On 27/03/2020 13:50, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> --- a/xen/arch/x86/hvm/domain.c
>> +++ b/xen/arch/x86/hvm/domain.c
>> @@ -296,8 +296,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>>       if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
>>       {
>>           /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
>> -        struct page_info *page = get_page_from_gfn(v->domain,
>> -                                 v->arch.hvm.guest_cr[3] >> PAGE_SHIFT,
>> +        struct page_info *page;
>> +
>> +        page = get_page_from_gfn(v->domain,
>> +                                 gaddr_to_gfn(v->arch.hvm.guest_cr[3]),
> 
> My earlier comment on this remains - I think this conversion makes
> the problem this expression has more hidden than the shift did.
> This would better use a gfn_from_cr3() helper (or whatever it'll
> be that it gets named). Same elsewhere in this patch then.

I will have a closer look at the *cr3 helpers and reply with some suggestions.

> 
>> @@ -2363,7 +2364,7 @@ int hvm_set_cr3(unsigned long value, bool may_defer)
>>       {
>>           /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
>>           HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR3 value = %lx", value);
>> -        page = get_page_from_gfn(v->domain, value >> PAGE_SHIFT,
>> +        page = get_page_from_gfn(v->domain, cr3_to_gfn(value),
> 
> Oh, seeing this I recall Paul did point out the above already.
> 
>> @@ -508,7 +508,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
>>   {
>>       struct domain *d = current->domain;
>>       uint32_t vcpu_id;
>> -    uint64_t gfn;
>> +    gfn_t gfn;
>>       uint32_t offset;
>>       struct vcpu *v;
>>       int rc;
>> @@ -516,7 +516,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
>>       init_control->link_bits = EVTCHN_FIFO_LINK_BITS;
>>   
>>       vcpu_id = init_control->vcpu;
>> -    gfn     = init_control->control_gfn;
>> +    gfn     = _gfn(init_control->control_gfn);
> 
> There's silent truncation here now for Arm32, afaict. Are we really
> okay with this?

Well, the truncation was already happening silently, as we call
get_page_from_gfn() in map_guest_page(). So it is not worse than the
current situation.

That said, there is a slight advantage with the new code: you can more
easily spot potential truncation. Indeed, you could add some type check
in _gfn().
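
Purely as an illustration of the kind of check meant here (a checked
wrapper rather than a change to _gfn() itself; the helper name is made
up), a runtime check would catch the 64-bit to 32-bit truncation on
Arm32:

    static inline gfn_t gfn_from_u64(uint64_t val)
    {
        gfn_t gfn = _gfn(val);

        /* If 'val' did not fit in gfn_t, the round trip differs. */
        ASSERT(gfn_x(gfn) == val);

        return gfn;
    }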

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN
  2020-03-25 14:46   ` Jan Beulich
@ 2020-03-28 10:14     ` Julien Grall
  2020-03-30  7:38       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-03-28 10:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

Hi Jan,

On 25/03/2020 14:46, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Introduce handy helpers to generate/convert the CR3 from/to a MFN/GFN.
>>
>> Note that we are using cr3_pa() rather than xen_cr3_to_pfn() because the
>> latter does not ignore the top 12-bits.
> 
> I'm afraid this remark of yours points at some issue here:
> cr3_pa() is meant to act on (real or virtual) CR3 values, but
> not (necessarily) on para-virtual ones. E.g. ...
> 
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1096,7 +1096,7 @@ int arch_set_info_guest(
>>       set_bit(_VPF_in_reset, &v->pause_flags);
>>   
>>       if ( !compat )
>> -        cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[3]));
>> +        cr3_mfn = cr3_to_mfn(c.nat->ctrlreg[3]);
> 
> ... you're now losing the top 12 bits here, potentially
> making ...
> 
>>       else
>>           cr3_mfn = _mfn(compat_cr3_to_pfn(c.cmp->ctrlreg[3]));
>>       cr3_page = get_page_from_mfn(cr3_mfn, d);
> 
> ... this succeed when it shouldn't.
> 
>> --- a/xen/include/asm-x86/mm.h
>> +++ b/xen/include/asm-x86/mm.h
>> @@ -524,6 +524,26 @@ extern struct rangeset *mmio_ro_ranges;
>>   #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
>>   #define compat_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
>>   
>> +static inline unsigned long mfn_to_cr3(mfn_t mfn)
>> +{
>> +    return xen_pfn_to_cr3(mfn_x(mfn));
>> +}
>> +
>> +static inline mfn_t cr3_to_mfn(unsigned long cr3)
>> +{
>> +    return maddr_to_mfn(cr3_pa(cr3));
>> +}
>> +
>> +static inline unsigned long gfn_to_cr3(gfn_t gfn)
>> +{
>> +    return xen_pfn_to_cr3(gfn_x(gfn));
>> +}
>> +
>> +static inline gfn_t cr3_to_gfn(unsigned long cr3)
>> +{
>> +    return gaddr_to_gfn(cr3_pa(cr3));
>> +}
> 
> Overall I think that when introducing such helpers we need to be
> very clear about their intended uses: Bare underlying hardware,
> PV guests, or HVM guests. From this perspective I also think that
> having MFN and GFN conversions next to each other may be more
> confusing than helpful, the more that there are no uses
> introduced here for the latter. When applied to HVM guests,
> xen_pfn_to_cr3() also shouldn't be used, as that's a PV construct
>> in the public headers. Yet I think conversions to/from GFNs
> should first and foremost be applicable to HVM guests.

There are uses of GFN helpers in the series, but I wanted to avoid
introducing them in the middle of something else. I can try to find a
couple of occurrences I can switch over to use them now.

Regarding the term GFN, it is not meant to be HVM only. So we may want 
to prefix the helpers with hvm_ to make it clear.

> 
> A possible route to go may be to e.g. accompany
> {xen,compat}_pfn_to_cr3() with {xen,compat}_mfn_to_cr3(), and
> leave the GFN aspect out until such patch that would actually
> use them (which may then make clear that these actually want
> to live in a header specifically applicable to translated
> guests).

I am thinking of introducing 3 sets of helpers:
     - hvm_cr3_to_gfn()/hvm_gfn_to_cr3(): Handle the CR3 for HVM guest
     - {xen, compat}_mfn_to_cr3()/{xen, compat}_cr3_to_mfn(): Handle the 
CR3 for PV guest.
     - host_cr3_to_mfn()/host_mfn_to_cr3(): To handle the host cr3.

What do you think?

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header
  2020-03-26  9:02       ` Jan Beulich
@ 2020-03-28 10:15         ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-03-28 10:15 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Julien Grall,
	Ian Jackson, George Dunlap, xen-devel

Hi Jan,

On 26/03/2020 09:02, Jan Beulich wrote:
> On 25.03.2020 19:09, Julien Grall wrote:
>> Hi Jan,
>>
>> On 25/03/2020 15:00, Jan Beulich wrote:
>>> On 22.03.2020 17:14, julien@xen.org wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> It is getting incredibly difficult to use typesafe GFN/MFN/PFN in the
>>>> headers because of circular dependency. For instance, asm-x86/page.h
>>>> cannot include xen/mm.h.
>>>>
>>>> In order to convert more code to use typesafe, the types are now moved
>>>> in a separate header that requires only a few dependencies.
>>>
>>> We definitely need to do this, so thanks for investing the
>>> time. I think though that we want to settle up front (and
>>> perhaps record in a comment in the new header) what is or
>>> is not suitable to go into the new header. After all you're
>>> moving not just type definitions, but also simple helper
>>> functions.
>>
>> I am expecting headers to use the typesafe helpers (such as mfn_add)
>> in the long term. So I would like the new header to contain the
>> type definitions and any wrappers that would turn 'generic'
>> operations safe.
>>
>> I am not entirely sure yet how to formalize the rules in the
>> header. Any ideas?
> 
> Well, if the header was just for the typesafe types, it could be
> renamed (to e.g. mm-typesafe.h) and be left without any respective
> comment. The issue I've mentioned arises if, with its currently
> suggested name, further types get added. In such a case perhaps it
> could be "type definitions and their immediate accessors,
> involving no other non-trivial types"?

I will rename the file to mm-typesafe.h.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN
  2020-03-26  9:09       ` Jan Beulich
@ 2020-03-28 10:33         ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-03-28 10:33 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Julien Grall, Ian Jackson, George Dunlap,
	Ross Lagerwall, Lukasz Hawrylko, xen-devel, Volodymyr Babchuk,
	Roger Pau Monné

Hi,

On 26/03/2020 09:09, Jan Beulich wrote:
> On 25.03.2020 19:21, Julien Grall wrote:
>> On 25/03/2020 15:27, Jan Beulich wrote:
>>> On 22.03.2020 17:14, julien@xen.org wrote:
>>>> @@ -785,21 +781,21 @@ bool is_iomem_page(mfn_t mfn)
>>>>        return (page_get_owner(page) == dom_io);
>>>>    }
>>>>    -static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
>>>> +static int update_xen_mappings(mfn_t mfn, unsigned int cacheattr)
>>>>    {
>>>>        int err = 0;
>>>> -    bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
>>>> -         mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>>>> +    bool alias = mfn_x(mfn) >= PFN_DOWN(xen_phys_start) &&
>>>> +         mfn_x(mfn) < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
>>>>        unsigned long xen_va =
>>>> -        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
>>>> +        XEN_VIRT_START + mfn_to_maddr(mfn_add(mfn, -PFN_DOWN(xen_phys_start)));
>>>
>>> Depending on the types involved (e.g. in PFN_DOWN()) this may
>>> or may not be safe, so I consider such a transformation at
>>> least fragile. I think we either want to gain mfn_sub() or
>>> keep this as a "real" subtraction.
>> I want to avoid mfn_x() as much as possible when everything can
>> be done using typesafe operations. But I am not sure how
>> mfn_sub() would solve the problem. Do you mind providing more
>> information?
> 
> Consider PFN_DOWN() potentially returning "unsigned int". The
> negation of an unsigned int is still an unsigned int, and hence
> e.g. -1U (which might result here) is really 0xFFFFFFFF rather
> than -1L / -1UL as intended. Whereas with mfn_sub() the
> conversion to unsigned long of the (positive) value to subtract
> would occur as part of evaluating function arguments, and the
> resulting subtraction would then be correct.

I will have a look to introduce mfn_sub().

> 
>>>> @@ -584,21 +584,21 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>>>>            needed = 0;
>>>>        }
>>>>        else if ( *use_tail && nr >= needed &&
>>>> -              arch_mfn_in_directmap(mfn + nr) &&
>>>> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, nr))) &&
>>>>                  (!xenheap_bits ||
>>>> -               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>>> +               !((mfn_x(mfn) + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>>
>>> May I suggest consistency here: This one uses +, while ...
>>>
>>>>        {
>>>> -        _heap[node] = mfn_to_virt(mfn + nr - needed);
>>>> -        avail[node] = mfn_to_virt(mfn + nr - 1) +
>>>> +        _heap[node] = mfn_to_virt(mfn_add(mfn, nr - needed));
>>>> +        avail[node] = mfn_to_virt(mfn_add(mfn, nr - 1)) +
>>>>                          PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>>>>        }
>>>>        else if ( nr >= needed &&
>>>> -              arch_mfn_in_directmap(mfn + needed) &&
>>>> +              arch_mfn_in_directmap(mfn_x(mfn_add(mfn, needed))) &&
>>>
>>> ... this one uses mfn_add() despite the mfn_x() around it, and ...
>>
>> So the reason I used mfn_x(mfn_add(mfn, needed)) here is I plan
>> to convert arch_mfn_in_directmap() to use typesafe soon. In the
>> two others cases...
>>
>>>>                  (!xenheap_bits ||
>>>> -               !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>>> +               !((mfn_x(mfn) + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>>
>>> ... here you use + again. My personal preference would be to avoid
>>> constructs like mfn_x(mfn_add()).
>>
>> ... I am still unsure how to avoid mfn_x(). Do you have any ideas?
> 
> I don't see how it can be avoided right now. But I also don't see
> why - for consistency, as said - you couldn't use mfn_x() also in
> the middle case. You could then still convert to mfn_add() with
> that future change of yours.

I could have, just as I could also have converted arch_mfn_in_directmap()
to use a typesafe MFN. Anything around typesafe is a can of worms, and
this is the fine line I found.

Anyway, I can't be bothered to bikeshed... So I am going to switch the
other one to mfn_x(...) + needed.

> 
>>>> --- a/xen/include/asm-x86/mm.h
>>>> +++ b/xen/include/asm-x86/mm.h
>>>> @@ -667,7 +667,7 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
>>>>    {
>>>>        unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
>>>>    -    return mfn <= (virt_to_mfn(eva - 1) + 1);
>>>> +    return mfn <= mfn_x(mfn_add(virt_to_mfn(eva - 1),  1));
>>>
>>> Even if you wanted to stick to using mfn_add() here, there's one
>>> blank too many after the comma.
>>
>> I will remove the extra blank. Regarding the construction, I have
>> been wondering for a couple of years now whether we should
>> introduce mfn_{lt, gt}. What do you think?
> 
> I too have been wondering, and wouldn't mind their introduction
> (plus mfn_le / mfn_ge perhaps). But it'll truly help you here
> anyway only once the function parameter is also mfn_t.

This is a longer term plan. So I am going to leave it like that for now 
until I manage to find time.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-03-26 15:39   ` Jan Beulich
@ 2020-03-28 10:52     ` Julien Grall
  2020-03-30  7:52       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-03-28 10:52 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Julien Grall, Tim Deegan,
	George Dunlap, Jun Nakajima, xen-devel, Roger Pau Monné

Hi Jan,

On 26/03/2020 15:39, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -952,25 +952,27 @@ int arch_set_info_guest(
>>       }
>>       else
>>       {
>> -        unsigned long pfn = pagetable_get_pfn(v->arch.guest_table);
>> +        mfn_t mfn = pagetable_get_mfn(v->arch.guest_table);
>>           bool fail;
>>   
>>           if ( !compat )
>>           {
>> -            fail = xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[3];
>> +            fail = mfn_to_cr3(mfn) != c.nat->ctrlreg[3];
> 
> The patch, besides a few other comments further down, looks fine
> on its own, but I don't think it can be acked without seeing the
> effects of the adjustments pending to the patch introducing
> mfn_to_cr3() and friends.
> 
>> @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
>>   
>>       /* Free that page if non-zero */
>>       do {
>> -        if ( mfn )
>> +        if ( !mfn_eq(mfn, _mfn(0)) )
> 
> I admit I'm not fully certain either, but at the first glance
> 
>          if ( mfn_x(mfn) )
> 
> would seem more in line with the original code to me (and then
> also elsewhere).

It is doing *exactly* the same thing. The whole point of typesafe is to
use the typesafe helpers rather than open-coding tests everywhere.

It is also easier to spot any use of MFN 0 within the code, as you can
grep for "_mfn(0)".

Therefore I will insist on keeping the code as-is.

> 
>> @@ -3560,19 +3561,18 @@ long do_mmuext_op(
>>               if ( unlikely(rc) )
>>                   break;
>>   
>> -            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
>> +            old_mfn = pagetable_get_mfn(curr->arch.guest_table_user);
>>               /*
>>                * This is particularly important when getting restarted after the
>>                * previous attempt got preempted in the put-old-MFN phase.
>>                */
>> -            if ( old_mfn == op.arg1.mfn )
>> +            if ( mfn_eq(old_mfn, new_mfn) )
>>                   break;
>>   
>> -            if ( op.arg1.mfn != 0 )
>> +            if ( !mfn_eq(new_mfn, _mfn(0)) )
> 
> At least here I would clearly prefer the old code to be kept.

See above.

> 
>> @@ -3580,19 +3580,19 @@ long do_mmuext_op(
>>                       else if ( rc != -ERESTART )
>>                           gdprintk(XENLOG_WARNING,
>>                                    "Error %d installing new mfn %" PRI_mfn "\n",
>> -                                 rc, op.arg1.mfn);
>> +                                 rc, mfn_x(new_mfn));
> 
> Here I'm also not sure I see the point of the conversion.

op.arg1.mfn and mfn are technically not the same type. The former is a 
xen_pfn_t, whilst the latter is mfn_t.

In practice they are both unsigned long on x86, so it should be fine to 
use PRI_mfn. However, I think this is an abuse and we should aim to use 
the proper PRI_* for a type.

> 
>> @@ -2351,11 +2351,11 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
>>       ASSERT(mfn_valid(smfn));
>>   #endif
>>   
>> -    if ( pagetable_get_pfn(v->arch.shadow_table[0]) == mfn_x(smfn)
>> +    if ( mfn_eq(pagetable_get_mfn(v->arch.shadow_table[0]), smfn)
>>   #if (SHADOW_PAGING_LEVELS == 3)
>> -         || pagetable_get_pfn(v->arch.shadow_table[1]) == mfn_x(smfn)
>> -         || pagetable_get_pfn(v->arch.shadow_table[2]) == mfn_x(smfn)
>> -         || pagetable_get_pfn(v->arch.shadow_table[3]) == mfn_x(smfn)
>> +         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[1]), smfn)
>> +         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[2]), smfn)
>> +         || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[3]), smfn)
>>   #endif
>>           )
> 
> While here moving the || to their designated places would make
> the code look worse overall, ...
> 
>> @@ -3707,7 +3707,7 @@ sh_update_linear_entries(struct vcpu *v)
>>   
>>       /* Don't try to update the monitor table if it doesn't exist */
>>       if ( shadow_mode_external(d)
>> -         && pagetable_get_pfn(v->arch.monitor_table) == 0 )
>> +         && pagetable_is_null(v->arch.monitor_table) )
> 
> ... could I talk you into moving the && here to the end of the
> previous line, as you're touching this anyway?

I will do.

> 
> Also, seeing there's quite a few conversions to pagetable_is_null()
> and also seeing that this patch is quite big - could this
> conversion be split out?

I will have a look.

> 
>> @@ -213,17 +214,17 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
>>   #ifndef __ASSEMBLY__
>>   
>>   /* Page-table type. */
>> -typedef struct { u64 pfn; } pagetable_t;
>> -#define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
>> +typedef struct { mfn_t mfn; } pagetable_t;
>> +#define PAGETABLE_NULL_MFN      _mfn(0)
> 
> I'd prefer to get away without this constant.
I would rather keep the constant, as it makes it easier to understand
what _mfn(0) means in the context of the pagetable.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn()
  2020-03-27 10:52   ` Jan Beulich
@ 2020-03-28 10:53     ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-03-28 10:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

Hi Jan,

On 27/03/2020 10:52, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -1138,7 +1138,7 @@ static int
>>   get_page_from_l2e(
>>       l2_pgentry_t l2e, mfn_t l2mfn, struct domain *d, unsigned int flags)
>>   {
>> -    unsigned long mfn = l2e_get_pfn(l2e);
>> +    mfn_t mfn = l2e_get_mfn(l2e);
>>       int rc;
>>   
>>       if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
>> @@ -1150,7 +1150,7 @@ get_page_from_l2e(
>>   
>>       ASSERT(!(flags & PTF_preemptible));
>>   
>> -    rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, flags);
>> +    rc = get_page_and_type_from_mfn(mfn, PGT_l1_page_table, d, flags);
> 
> To bring this better in line with the L3 and L4 counterparts,
> could you please drop the local variable instead? Then

I will do it.

> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN
  2020-03-27 13:15   ` Jan Beulich
@ 2020-03-28 11:14     ` Julien Grall
  2020-03-30  8:10       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-03-28 11:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Julien Grall, Tamas K Lengyel, xen-devel,
	Volodymyr Babchuk, Roger Pau Monné

Hi,

On 27/03/2020 13:15, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> @@ -983,19 +984,20 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
>>                   /* check for 1GB super page */
>>                   if ( l3e_get_flags(l3e[i3]) & _PAGE_PSE )
>>                   {
>> -                    mfn = l3e_get_pfn(l3e[i3]);
>> -                    ASSERT(mfn_valid(_mfn(mfn)));
>> +                    mfn = l3e_get_mfn(l3e[i3]);
>> +                    ASSERT(mfn_valid(mfn));
>>                       /* we have to cover 512x512 4K pages */
>>                       for ( i2 = 0;
>>                             i2 < (L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES);
>>                             i2++)
>>                       {
>> -                        m2pfn = get_gpfn_from_mfn(mfn+i2);
>> +                        m2pfn = get_pfn_from_mfn(mfn_add(mfn, i2));
>>                           if ( m2pfn != (gfn + i2) )
>>                           {
>>                               pmbad++;
>> -                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
>> -                                       gfn + i2, mfn + i2, m2pfn);
>> +                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" gfn %#lx\n",
>> +                                       gfn + i2, mfn_x(mfn_add(mfn, i2)),
> 
> As in the earlier patch, "mfn_x(mfn) + i2" would be shorter and
> hence imo preferable, especially in printk() and similar invocations.

The goal of using typesafe is to make the code safer, not to open-code
everything just because it might be shorter to write.

> 
> I would also prefer if you left %#lx alone, with the 2nd best
> option being to also use PRI_gfn alongside PRI_mfn. Primarily
> I'd like to avoid having a mixture.
The two options would be wrong:
	* gfn is an unsigned long and not gfn_t, so using PRI_gfn would be 
incorrect
	* mfn is now an mfn_t so using %lx would be incorrect

So the format string used in the patch is correct based on the types 
used. This...

> 
> Same (for both) at least one more time further down.

... would likely be applicable for all the other uses.

>> @@ -974,7 +974,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
>>                   P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
>>                             gfn_x(ogfn) , mfn_x(omfn));
>>                   if ( mfn_eq(omfn, mfn_add(mfn, i)) )
>> -                    p2m_remove_page(p2m, gfn_x(ogfn), mfn_x(mfn_add(mfn, i)),
>> +                    p2m_remove_page(p2m, gfn_x(ogfn), mfn_add(mfn, i),
>>                                       0);
> 
> Pull this up then onto the now shorter prior line?

Ok.

> 
>> @@ -2843,53 +2843,53 @@ void audit_p2m(struct domain *d,
>>       spin_lock(&d->page_alloc_lock);
>>       page_list_for_each ( page, &d->page_list )
>>       {
>> -        mfn = mfn_x(page_to_mfn(page));
>> +        mfn = page_to_mfn(page);
>>   
>> -        P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn);
>> +        P2M_PRINTK("auditing guest page, mfn=%"PRI_mfn"\n", mfn_x(mfn));
>>   
>>           od = page_get_owner(page);
>>   
>>           if ( od != d )
>>           {
>> -            P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn, od, d);
>> +            P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn_x(mfn), od, d);
>>               continue;
>>           }
>>   
>> -        gfn = get_gpfn_from_mfn(mfn);
>> +        gfn = get_pfn_from_mfn(mfn);
>>           if ( gfn == INVALID_M2P_ENTRY )
>>           {
>>               orphans_count++;
>> -            P2M_PRINTK("orphaned guest page: mfn=%#lx has invalid gfn\n",
>> -                           mfn);
>> +            P2M_PRINTK("orphaned guest page: mfn=%"PRI_mfn" has invalid gfn\n",
>> +                       mfn_x(mfn));
>>               continue;
>>           }
>>   
>>           if ( SHARED_M2P(gfn) )
>>           {
>> -            P2M_PRINTK("shared mfn (%lx) on domain page list!\n",
>> -                    mfn);
>> +            P2M_PRINTK("shared mfn (%"PRI_mfn") on domain page list!\n",
>> +                       mfn_x(mfn));
>>               continue;
>>           }
>>   
>>           p2mfn = get_gfn_type_access(p2m, gfn, &type, &p2ma, 0, NULL);
>> -        if ( mfn_x(p2mfn) != mfn )
>> +        if ( !mfn_eq(p2mfn, mfn) )
>>           {
>>               mpbad++;
>> -            P2M_PRINTK("map mismatch mfn %#lx -> gfn %#lx -> mfn %#lx"
>> +            P2M_PRINTK("map mismatch mfn %"PRI_mfn" -> gfn %#lx -> mfn %"PRI_mfn""
>>                          " (-> gfn %#lx)\n",
>> -                       mfn, gfn, mfn_x(p2mfn),
>> +                       mfn_x(mfn), gfn, mfn_x(p2mfn),
>>                          (mfn_valid(p2mfn)
>> -                        ? get_gpfn_from_mfn(mfn_x(p2mfn))
>> +                        ? get_pfn_from_mfn(p2mfn)
>>                           : -1u));
> 
> I realize this is an entirely unrelated change, but the -1u here
> is standing out too much to not mention it: Could I talk you into
> making this gfn_x(INVALID_GFN) at this occasion?

Hmmm, I am not sure why I missed this one. I will use gfn_x(INVALID_GFN).

> 
>> --- a/xen/include/asm-x86/mm.h
>> +++ b/xen/include/asm-x86/mm.h
>> @@ -500,9 +500,10 @@ extern paddr_t mem_hotplug;
>>    */
>>   extern bool machine_to_phys_mapping_valid;
>>   
>> -static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
>> +static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn)
>>   {
>> -    const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn)));
>> +    const unsigned long mfn = mfn_x(mfn_);
> 
> I think it would be better overall if the parameter was named
> "mfn" and there was no local variable altogether. This would
> bring things in line with ...

You asked for this approach on the previous version [1]:

"Btw, the cheaper (in terms of code churn) change here would seem to
be to name the function parameter mfn_, and the local variable mfn.
That'll also reduce the number of uses of the unfortunate trailing-
underscore-name."

So can you pick a side and stick with it?

> 
>> @@ -515,11 +516,14 @@ static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
>>   
>>   extern struct rangeset *mmio_ro_ranges;
>>   
>> -#define get_gpfn_from_mfn(mfn)      (machine_to_phys_mapping[(mfn)])
>> +static inline unsigned long get_pfn_from_mfn(mfn_t mfn)
>> +{
>> +    return machine_to_phys_mapping[mfn_x(mfn)];
>> +}
> 
> ... this.
> 
> Jan
> 

Cheers,

[1] <5CF7A1090200007800235782@prv1-mh.provo.novell.com>

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN
  2020-03-28 10:14     ` Julien Grall
@ 2020-03-30  7:38       ` Jan Beulich
  2020-04-16 11:50         ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-30  7:38 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 28.03.2020 11:14, Julien Grall wrote:
> On 25/03/2020 14:46, Jan Beulich wrote:
>> On 22.03.2020 17:14, julien@xen.org wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Introduce handy helpers to generate/convert the CR3 from/to a MFN/GFN.
>>>
>>> Note that we are using cr3_pa() rather than xen_cr3_to_pfn() because the
>>> latter does not ignore the top 12-bits.
>>
>> I'm afraid this remark of yours points at some issue here:
>> cr3_pa() is meant to act on (real or virtual) CR3 values, but
>> not (necessarily) on para-virtual ones. E.g. ...
>>
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -1096,7 +1096,7 @@ int arch_set_info_guest(
>>>       set_bit(_VPF_in_reset, &v->pause_flags);
>>>         if ( !compat )
>>> -        cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[3]));
>>> +        cr3_mfn = cr3_to_mfn(c.nat->ctrlreg[3]);
>>
>> ... you're now losing the top 12 bits here, potentially
>> making ...
>>
>>>       else
>>>           cr3_mfn = _mfn(compat_cr3_to_pfn(c.cmp->ctrlreg[3]));
>>>       cr3_page = get_page_from_mfn(cr3_mfn, d);
>>
>> ... this succeed when it shouldn't.
>>
>>> --- a/xen/include/asm-x86/mm.h
>>> +++ b/xen/include/asm-x86/mm.h
>>> @@ -524,6 +524,26 @@ extern struct rangeset *mmio_ro_ranges;
>>>   #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
>>>   #define compat_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
>>>   +static inline unsigned long mfn_to_cr3(mfn_t mfn)
>>> +{
>>> +    return xen_pfn_to_cr3(mfn_x(mfn));
>>> +}
>>> +
>>> +static inline mfn_t cr3_to_mfn(unsigned long cr3)
>>> +{
>>> +    return maddr_to_mfn(cr3_pa(cr3));
>>> +}
>>> +
>>> +static inline unsigned long gfn_to_cr3(gfn_t gfn)
>>> +{
>>> +    return xen_pfn_to_cr3(gfn_x(gfn));
>>> +}
>>> +
>>> +static inline gfn_t cr3_to_gfn(unsigned long cr3)
>>> +{
>>> +    return gaddr_to_gfn(cr3_pa(cr3));
>>> +}
>>
>> Overall I think that when introducing such helpers we need to be
>> very clear about their intended uses: Bare underlying hardware,
>> PV guests, or HVM guests. From this perspective I also think that
>> having MFN and GFN conversions next to each other may be more
>> confusing than helpful, the more that there are no uses
>> introduced here for the latter. When applied to HVM guests,
>> xen_pfn_to_cr3() also shouldn't be used, as that's a PV construct
>> in the public headers. Yet I think conversions to/from GFNs
>> should first and foremost be applicable to HVM guests.
> 
> There are uses of GFN helpers in the series, but I wanted to avoid
> introducing them in the middle of something else. I can try to
> find a couple of occurrences I can switch over to use them now.

With your proposal below splitting patches at the HVM/PV/host
boundaries may make sense nevertheless.

> Regarding the term GFN, it is not meant to be HVM only.

Of course, hence my "first and foremost".

> So we may want to prefix the helpers with hvm_ to make it clear.
> 
>>
>> A possible route to go may be to e.g. accompany
>> {xen,compat}_pfn_to_cr3() with {xen,compat}_mfn_to_cr3(), and
>> leave the GFN aspect out until such patch that would actually
>> use them (which may then make clear that these actually want
>> to live in a header specifically applicable to translated
>> guests).
> 
> I am thinking to introduce 3 sets of helpers:
>     - hvm_cr3_to_gfn()/hvm_gfn_to_cr3(): Handle the CR3 for HVM guest
>     - {xen, compat}_mfn_to_cr3()/{xen, compat}_cr3_to_mfn(): Handle the CR3 for PV guest.
>     - host_cr3_to_mfn()/host_mfn_to_cr3(): To handle the host cr3.
> 
> What do you think?

Maybe some variation thereof:

 - hvm_cr3_to_gfn()/hvm_gfn_to_cr3(): Handle the CR3 for HVM guest
 - {pv,compat}_mfn_to_cr3()/{pv,compat}_cr3_to_mfn(): Handle the CR3 for PV guest
 - cr3_to_mfn()/mfn_to_cr3(): To handle the host cr3

? This is because I'd prefer to avoid host_ prefixes (albeit I'm
not entirely opposed to such), and I'd also prefer to avoid xen_
prefixes, as they're generally ambiguous as to what aspect of "Xen"
they actually mean.
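
To illustrate, under that naming the HVM pair could be little more than
wrappers around the address-based conversions already used in this
series (a sketch only - in particular hvm_gfn_to_cr3() ignores any
control bits a caller might want to preserve):

    static inline gfn_t hvm_cr3_to_gfn(unsigned long cr3)
    {
        return gaddr_to_gfn(cr3_pa(cr3));
    }

    static inline unsigned long hvm_gfn_to_cr3(gfn_t gfn)
    {
        return gfn_to_gaddr(gfn);
    }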

Jan


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-03-28 10:52     ` Julien Grall
@ 2020-03-30  7:52       ` Jan Beulich
  2020-04-18 10:23         ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-03-30  7:52 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Julien Grall, Tim Deegan,
	George Dunlap, Jun Nakajima, xen-devel, Roger Pau Monné

On 28.03.2020 11:52, Julien Grall wrote:
> On 26/03/2020 15:39, Jan Beulich wrote:
>> On 22.03.2020 17:14, julien@xen.org wrote:
>>> @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
>>>         /* Free that page if non-zero */
>>>       do {
>>> -        if ( mfn )
>>> +        if ( !mfn_eq(mfn, _mfn(0)) )
>>
>> I admit I'm not fully certain either, but at the first glance
>>
>>          if ( mfn_x(mfn) )
>>
>> would seem more in line with the original code to me (and then
>> also elsewhere).
> 
> It is doing *exactly* the same thing. The whole point of typesafe
> is to use the typesafe helpers rather than open-coding the test everywhere.
>
> It is also easier to spot any use of MFN 0 within the code, as you
> can simply grep for "_mfn(0)".
>
> Therefore I will insist on keeping the code as-is.

What I insist on is that the readability of the result of such changes
also be kept in mind. The mfn_eq() construct is (I think) clearly less
easy to read and recognize than the simpler alternative suggested.
If you want to avoid mfn_x(), how about introducing (if possible
limited to x86, assuming that MFN 0 has no special meaning on Arm)
mfn_zero()?
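
I.e. a hypothetical helper along these lines (possibly x86-only):

    static inline bool mfn_zero(mfn_t mfn)
    {
        return mfn_eq(mfn, _mfn(0));
    }

which would let the test read "if ( !mfn_zero(mfn) )" without
open-coding either mfn_x() or _mfn(0) at the use sites.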

>>> @@ -3560,19 +3561,18 @@ long do_mmuext_op(
>>>               if ( unlikely(rc) )
>>>                   break;
>>>   -            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
>>> +            old_mfn = pagetable_get_mfn(curr->arch.guest_table_user);
>>>               /*
>>>                * This is particularly important when getting restarted after the
>>>                * previous attempt got preempted in the put-old-MFN phase.
>>>                */
>>> -            if ( old_mfn == op.arg1.mfn )
>>> +            if ( mfn_eq(old_mfn, new_mfn) )
>>>                   break;
>>>   -            if ( op.arg1.mfn != 0 )
>>> +            if ( !mfn_eq(new_mfn, _mfn(0)) )
>>
>> At least here I would clearly prefer the old code to be kept.
> 
> See above.

I don't agree - here you're evaluating an aspect of the public
interface. MFN 0 internally having a special meaning is, while
connected to this aspect, still an implementation detail.

>>> @@ -3580,19 +3580,19 @@ long do_mmuext_op(
>>>                       else if ( rc != -ERESTART )
>>>                           gdprintk(XENLOG_WARNING,
>>>                                    "Error %d installing new mfn %" PRI_mfn "\n",
>>> -                                 rc, op.arg1.mfn);
>>> +                                 rc, mfn_x(new_mfn));
>>
>> Here I'm also not sure I see the point of the conversion.
> 
> op.arg1.mfn and mfn are technically not the same type. The
> former is a xen_pfn_t, whilst the latter is mfn_t.
> 
> In practice they are both unsigned long on x86, so it should
> be fine to use PRI_mfn. However, I think this is an abuse
> and we should aim to use the proper PRI_* for a type.

I'd be fine with switching to PRI_xen_pfn here, yes. But
especially with the "not the same type" argument what should
be logged is imo what was specified, not what we converted it
to.
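
I.e., purely as an illustration of what I mean here (not a demand for
this exact wording):

    gdprintk(XENLOG_WARNING, "Error %d installing new mfn %" PRI_xen_pfn "\n",
             rc, op.arg1.mfn);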

>>> @@ -213,17 +214,17 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
>>>   #ifndef __ASSEMBLY__
>>>     /* Page-table type. */
>>> -typedef struct { u64 pfn; } pagetable_t;
>>> -#define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
>>> +typedef struct { mfn_t mfn; } pagetable_t;
>>> +#define PAGETABLE_NULL_MFN      _mfn(0)
>>
>> I'd prefer to get away without this constant.
> I would rather keep the constant as it makes it easier to
> understand what _mfn(0) means in the context of the pagetable.

If this was used outside of the accessor definitions, I'd
probably agree. But the accessor definitions exist specifically
to abstract away such things from use sites. Hence, bike-
shedding or not, if Andrew was clearly agreeing with your view,
I'd accept it. If he's indifferent, I'd prefer the #define to
be dropped.

Jan



* Re: [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN
  2020-03-28 11:14     ` Julien Grall
@ 2020-03-30  8:10       ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-03-30  8:10 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Julien Grall, Tamas K Lengyel, xen-devel,
	Volodymyr Babchuk, Roger Pau Monné

On 28.03.2020 12:14, Julien Grall wrote:
> On 27/03/2020 13:15, Jan Beulich wrote:
>> On 22.03.2020 17:14, julien@xen.org wrote:
>>> @@ -983,19 +984,20 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
>>>                   /* check for 1GB super page */
>>>                   if ( l3e_get_flags(l3e[i3]) & _PAGE_PSE )
>>>                   {
>>> -                    mfn = l3e_get_pfn(l3e[i3]);
>>> -                    ASSERT(mfn_valid(_mfn(mfn)));
>>> +                    mfn = l3e_get_mfn(l3e[i3]);
>>> +                    ASSERT(mfn_valid(mfn));
>>>                       /* we have to cover 512x512 4K pages */
>>>                       for ( i2 = 0;
>>>                             i2 < (L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES);
>>>                             i2++)
>>>                       {
>>> -                        m2pfn = get_gpfn_from_mfn(mfn+i2);
>>> +                        m2pfn = get_pfn_from_mfn(mfn_add(mfn, i2));
>>>                           if ( m2pfn != (gfn + i2) )
>>>                           {
>>>                               pmbad++;
>>> -                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
>>> -                                       gfn + i2, mfn + i2, m2pfn);
>>> +                            P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" gfn %#lx\n",
>>> +                                       gfn + i2, mfn_x(mfn_add(mfn, i2)),
>>
>> As in the earlier patch, "mfn_x(mfn) + i2" would be shorter and
>> hence imo preferable, especially in printk() and similar invocations.
> 
> The goal of using typesafe is to make the code safer, not to
> open-code everything just because it might be shorter to write.

I'm not talking about "everything". As soon as you use mfn_x()
_anywhere_, type-safety is gone. Since in printk() and the like you
unavoidably have to use it (at least for now), there's no win
from using e.g. mfn_add() as you do here, imo. And hence the
readability aspect gains even higher significance.
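
To make it concrete, what I'd find preferable here is simply

    P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" gfn %#lx\n",
               gfn + i2, mfn_x(mfn) + i2, m2pfn);

i.e. mfn_x() applied once and plain arithmetic on the result, rather
than wrapping the addition in mfn_add() only to unwrap it again.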

>> I would also prefer if you left %#lx alone, with the 2nd best
>> option being to also use PRI_gfn alongside PRI_mfn. Primarily
>> I'd like to avoid having a mixture.
> The two options would be wrong:
>     * gfn is an unsigned long and not gfn_t, so using PRI_gfn would be incorrect
>     * mfn is now an mfn_t so using %lx would be incorrect
> 
> So the format string used in the patch is correct based on the types used.

Hmm, xen/mm.h suggests a partial connection between e.g. mfn_t
and PRI_mfn, yes, but I think this is unhelpful as long as
mfn_x() needs to be explicitly used when specifying the printk()
arguments. Instead I view PRI_mfn and the like as a more general
format usable also for MFNs stored in unsigned long rather than
mfn_t.

I agree though that views here may differ. Hence wider agreement
on what the intentions are (also mid/long term), and hence what
well-formed code ought to look like, would seem necessary here.

> This...
> 
>>
>> Same (for both) at least one more time further down.
> 
> ... would likely be applicable for all the other uses.

Agreed.

>>> --- a/xen/include/asm-x86/mm.h
>>> +++ b/xen/include/asm-x86/mm.h
>>> @@ -500,9 +500,10 @@ extern paddr_t mem_hotplug;
>>>    */
>>>   extern bool machine_to_phys_mapping_valid;
>>>   -static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn)
>>> +static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn)
>>>   {
>>> -    const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn)));
>>> +    const unsigned long mfn = mfn_x(mfn_);
>>
>> I think it would be better overall if the parameter was named
>> "mfn" and there was no local variable altogether. This would
>> bring things in line with ...
> 
> You asked for this approach on the previous version [1]:
> 
> "Btw, the cheaper (in terms of code churn) change here would seem to
> be to name the function parameter mfn_, and the local variable mfn.
> That'll also reduce the number of uses of the unfortunate trailing-
> underscore-name."
> 
> So can you pick a side and stick with it?

Well, things like this happen when you see the final result, sorry.
And indeed I recalled commenting on this before, but upon searching
I didn't manage to find the earlier reply, which would have better
justified what I also suspected might have been a change of mind.
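
For reference, the two shapes under discussion are roughly these
(bodies elided):

    /* As posted: parameter mfn_, local variable mfn. */
    static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn)
    {
        const unsigned long mfn = mfn_x(mfn_);
        /* ... */
    }

    /* Alternative: parameter mfn, mfn_x() used at each use site. */
    static inline void set_pfn_from_mfn(mfn_t mfn, unsigned long pfn)
    {
        /* ... use mfn_x(mfn) directly wherever an unsigned long is needed ... */
    }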

Jan



* Re: [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN
  2020-03-30  7:38       ` Jan Beulich
@ 2020-04-16 11:50         ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-04-16 11:50 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

Hi Jan,

On 30/03/2020 08:38, Jan Beulich wrote:
> Maybe some variation thereof:
> 
>   - hvm_cr3_to_gfn()/hvm_gfn_to_cr3(): Handle the CR3 for HVM guest
>   - {pv,compat}_mfn_to_cr3()/{pv,compat}_cr3_to_mfn(): Handle the CR3 for PV guest
>   - cr3_to_mfn()/mfn_to_cr3(): To handle the host cr3
> 
> ? This is because I'd prefer to avoid host_ prefixes (albeit I'm
> not entirely opposed to such), and I'd also prefer to avoid xen_
> prefixes, as they're generally ambiguous as to what aspect of "Xen"
> they actually mean.

I am happy with your suggested naming. I will have a look to see how 
they fit in the tree and respin the series.

Cheers,

-- 
Julien Grall



* Re: [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-03-30  7:52       ` Jan Beulich
@ 2020-04-18 10:23         ` Julien Grall
  2020-04-20  9:16           ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-04-18 10:23 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Julien Grall, Tim Deegan,
	George Dunlap, Jun Nakajima, xen-devel, Roger Pau Monné

Hi,

On 30/03/2020 08:52, Jan Beulich wrote:
> On 28.03.2020 11:52, Julien Grall wrote:
>> On 26/03/2020 15:39, Jan Beulich wrote:
>>> On 22.03.2020 17:14, julien@xen.org wrote:
>>>> @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
>>>>          /* Free that page if non-zero */
>>>>        do {
>>>> -        if ( mfn )
>>>> +        if ( !mfn_eq(mfn, _mfn(0)) )
>>>
>>> I admit I'm not fully certain either, but at the first glance
>>>
>>>           if ( mfn_x(mfn) )
>>>
>>> would seem more in line with the original code to me (and then
>>> also elsewhere).
>>
>> It is doing *exactly* the same thing. The whole point of typesafe
>> is to use the typesafe helpers rather than open-coding the test everywhere.
>>
>> It is also easier to spot any use of MFN 0 within the code, as you
>> can simply grep for "_mfn(0)".
>>
>> Therefore I will insist on keeping the code as-is.
> 
> What I insist on is that the readability of the result of such changes
> also be kept in mind. The mfn_eq() construct is (I think) clearly less
> easy to read and recognize than the simpler alternative suggested.

If mfn_eq() is less clear, then where do you draw the line between when
the macro should and should not be used?

> If you want to avoid mfn_x(), how about introducing (if possible
> limited to x86, assuming that MFN 0 has no special meaning on Arm)
> mfn_zero()?

Zero has no special meaning on Arm, so we could limit it to x86.

> 
>>>> @@ -3560,19 +3561,18 @@ long do_mmuext_op(
>>>>                if ( unlikely(rc) )
>>>>                    break;
>>>>    -            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
>>>> +            old_mfn = pagetable_get_mfn(curr->arch.guest_table_user);
>>>>                /*
>>>>                 * This is particularly important when getting restarted after the
>>>>                 * previous attempt got preempted in the put-old-MFN phase.
>>>>                 */
>>>> -            if ( old_mfn == op.arg1.mfn )
>>>> +            if ( mfn_eq(old_mfn, new_mfn) )
>>>>                    break;
>>>>    -            if ( op.arg1.mfn != 0 )
>>>> +            if ( !mfn_eq(new_mfn, _mfn(0)) )
>>>
>>> At least here I would clearly prefer the old code to be kept.
>>
>> See above.
> 
> I don't agree - here you're evaluating an aspect of the public
> interface. MFN 0 internally having a special meaning is, while
> connected to this aspect, still an implementation detail.

Fair enough.

> 
>>>> @@ -3580,19 +3580,19 @@ long do_mmuext_op(
>>>>                        else if ( rc != -ERESTART )
>>>>                            gdprintk(XENLOG_WARNING,
>>>>                                     "Error %d installing new mfn %" PRI_mfn "\n",
>>>> -                                 rc, op.arg1.mfn);
>>>> +                                 rc, mfn_x(new_mfn));
>>>
>>> Here I'm also not sure I see the point of the conversion.
>>
>> op.arg1.mfn and mfn are technically not the same type. The
>> former is a xen_pfn_t, whilst the latter is mfn_t.
>>
>> In practice they are both unsigned long on x86, so it should
>> be fine to use PRI_mfn. However, I think this is an abuse
>> and we should aim to use the proper PRI_* for a type.
> 
> I'd be fine with switching to PRI_xen_pfn here, yes. But
> especially with the "not the same type" argument what should
> be logged is imo what was specified, not what we converted it
> to.

Fair point. I will switch back to op.arg1.mfn.

> 
>>>> @@ -213,17 +214,17 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
>>>>    #ifndef __ASSEMBLY__
>>>>      /* Page-table type. */
>>>> -typedef struct { u64 pfn; } pagetable_t;
>>>> -#define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
>>>> +typedef struct { mfn_t mfn; } pagetable_t;
>>>> +#define PAGETABLE_NULL_MFN      _mfn(0)
>>>
>>> I'd prefer to get away without this constant.
>> I would rather keep the constant as it makes it easier to
>> understand what _mfn(0) means in the context of the pagetable.
> 
> If this was used outside of the accessor definitions, I'd
> probably agree. But the accessor definitions exist specifically
> to abstract away such things from use sites. Hence, bike-
> shedding or not, if Andrew was clearly agreeing with your view,
> I'd accept it. If he's indifferent, I'd prefer the #define to
> be dropped.

Andrew, do you have any opinion?

Cheers,

-- 
Julien Grall



* Re: [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn'
  2020-03-26 15:51   ` Jan Beulich
@ 2020-04-18 10:54     ` Julien Grall
  0 siblings, 0 replies; 61+ messages in thread
From: Julien Grall @ 2020-04-18 10:54 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

Hi Jan,

On 26/03/2020 15:51, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> We are using 'mfn' to refer to a machine frame. As this function deals
>> with an 'mfn', replace 'pfn' with 'mfn'.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>
>> I am not entirely sure I understand the comment on top of the
>> function, so this change may be wrong.
> 
> Looking at the history of the function, ...
> 
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -1321,7 +1321,7 @@ static int put_data_pages(struct page_info *page, bool writeable, int pt_shift)
>>   }
>>   
>>   /*
>> - * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
>> + * NB. Virtual address 'l2e' maps to a machine address within frame 'mfn'.
>>    * Note also that this automatically deals correctly with linear p.t.'s.
>>    */
>>   static int put_page_from_l2e(l2_pgentry_t l2e, mfn_t l2mfn, unsigned int flags)
> 
> ... it used to be
> 
> static int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
> 
> When the rename occurred (in the context of or as a follow-up to an
> XSA iirc), the comment adjustment was apparently missed. With the
> referenced name matching that of the function argument (l2mfn)
> Acked-by: Jan Beulich <jbeulich@suse.com>

I will update the reference to use 'l2mfn' and also add a note that the
comment adjustment was missed in ea51977a7aa5e645680a7194550fbceb59004ccf.

Cheers,

-- 
Julien Grall



* Re: [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN
  2020-03-26 15:54   ` Jan Beulich
@ 2020-04-18 11:01     ` Julien Grall
  2020-04-18 11:43       ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-04-18 11:01 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper



On 26/03/2020 15:54, Jan Beulich wrote:
> On 22.03.2020 17:14, julien@xen.org wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Note that the code is now using cr3_to_mfn() to get the MFN. This is
>> slightly different as the top 12-bits will now be masked.
> 
> And here I agree with the change. Hence it is even more so important
> that the patch introducing the new helper(s) first gets sorted.
> Should there be further patches in this series with this same
> interaction issue, I won't point it out again and may not respond at
> all if I see no other issues.

I will update the commit message to explain the reason for using
cr3_to_mfn() and look at the other users.

Cheers,

-- 
Julien Grall



* Re: [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN
  2020-04-18 11:01     ` Julien Grall
@ 2020-04-18 11:43       ` Julien Grall
  2020-04-20  9:19         ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-04-18 11:43 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 18/04/2020 12:01, Julien Grall wrote:
> On 26/03/2020 15:54, Jan Beulich wrote:
>> On 22.03.2020 17:14, julien@xen.org wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Note that the code is now using cr3_to_mfn() to get the MFN. This is
>>> slightly different as the top 12-bits will now be masked.
>>
>> And here I agree with the change. Hence it is even more so important
>> that the patch introducing the new helper(s) first gets sorted.
>> Should there be further patches in this series with this same
>> interaction issue, I won't point it out again and may not respond at
>> all if I see no other issues.
> 
> I will update the commit message to explain the reason for using
> cr3_to_mfn() and look at the other users.

Looking at the code again, there are a few users that don't mask the top 
12-bits. I am trying to understand why this has never been an issue so far.

Wouldn't it break when bit 63 (no flush) is set? If so, maybe I should
split that work out from the typesafe conversion.

Cheers,

-- 
Julien Grall



* Re: [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-04-18 10:23         ` Julien Grall
@ 2020-04-20  9:16           ` Jan Beulich
  2020-04-20 10:10             ` Julien Grall
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2020-04-20  9:16 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Julien Grall, Tim Deegan,
	George Dunlap, Jun Nakajima, xen-devel, Roger Pau Monné

On 18.04.2020 12:23, Julien Grall wrote:
> On 30/03/2020 08:52, Jan Beulich wrote:
>> On 28.03.2020 11:52, Julien Grall wrote:
>>> On 26/03/2020 15:39, Jan Beulich wrote:
>>>> On 22.03.2020 17:14, julien@xen.org wrote:
>>>>> @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
>>>>>          /* Free that page if non-zero */
>>>>>        do {
>>>>> -        if ( mfn )
>>>>> +        if ( !mfn_eq(mfn, _mfn(0)) )
>>>>
>>>> I admit I'm not fully certain either, but at the first glance
>>>>
>>>>           if ( mfn_x(mfn) )
>>>>
>>>> would seem more in line with the original code to me (and then
>>>> also elsewhere).
>>>
>>> It is doing *exactly* the same thing. The whole point of typesafe
>>> is to use the typesafe helpers rather than open-coding the test everywhere.
>>>
>>> It is also easier to spot any use of MFN 0 within the code, as you
>>> can simply grep for "_mfn(0)".
>>>
>>> Therefore I will insist on keeping the code as-is.
>>
>> What I insist on is that the readability of the result of such changes
>> also be kept in mind. The mfn_eq() construct is (I think) clearly less
>> easy to read and recognize than the simpler alternative suggested.
> 
> If mfn_eq() is less clear, then where do you draw the line between when
> the macro should and should not be used?

I'm afraid there may not be a clear line to draw until everything
has been converted. I do seem to recall though that, perhaps in a
different context, Andrew recently agreed with my view here (Andrew,
please correct me if I'm wrong). It being a fuzzy thing, I guess
maintainers get to judge ...

Jan



* Re: [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN
  2020-04-18 11:43       ` Julien Grall
@ 2020-04-20  9:19         ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-04-20  9:19 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 18.04.2020 13:43, Julien Grall wrote:
> On 18/04/2020 12:01, Julien Grall wrote:
>> On 26/03/2020 15:54, Jan Beulich wrote:
>>> On 22.03.2020 17:14, julien@xen.org wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Note that the code is now using cr3_to_mfn() to get the MFN. This is
>>>> slightly different as the top 12-bits will now be masked.
>>>
>>> And here I agree with the change. Hence it is even more so important
>>> that the patch introducing the new helper(s) first gets sorted.
>>> Should there be further patches in this series with this same
>>> interaction issue, I won't point it out again and may not respond at
>>> all if I see no other issues.
>>
>> I will update the commit message to explain the reason for using cr3_to_mfn() and look at the other users.
> 
> Looking at the code again, there are a few users that don't mask the top 12-bits. I am trying to understand why this has never been an issue so far.
> 
> Wouldn't it break when bit 63 (no flush) is set?

Yes; I guess those uses are cases where bit 63 can't / won't be set.
Just like the register itself, which doesn't hold the bit, I think we
avoid storing values with the bit set as well. But correctness of the
non-masking variants can only be established by looking at every
individual use site.
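
As a purely illustrative example (assuming the usual X86_CR3_NOFLUSH
definition and the helpers from patch 01):

    unsigned long cr3 = mfn_to_cr3(mfn) | X86_CR3_NOFLUSH;

    mfn_t a = _mfn(xen_cr3_to_pfn(cr3)); /* bit 63 ends up as a bogus high bit of the MFN */
    mfn_t b = cr3_to_mfn(cr3);           /* cr3_pa() strips the control bits first */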

> If so, maybe I should split that work out from the typesafe conversion.

Maybe better indeed.

Jan



* Re: [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-04-20  9:16           ` Jan Beulich
@ 2020-04-20 10:10             ` Julien Grall
  2020-04-20 12:14               ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Julien Grall @ 2020-04-20 10:10 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Julien Grall, Tim Deegan,
	George Dunlap, Jun Nakajima, xen-devel, Roger Pau Monné

Hi,

On 20/04/2020 10:16, Jan Beulich wrote:
> On 18.04.2020 12:23, Julien Grall wrote:
>> On 30/03/2020 08:52, Jan Beulich wrote:
>>> On 28.03.2020 11:52, Julien Grall wrote:
>>>> On 26/03/2020 15:39, Jan Beulich wrote:
>>>>> On 22.03.2020 17:14, julien@xen.org wrote:
>>>>>> @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
>>>>>>           /* Free that page if non-zero */
>>>>>>         do {
>>>>>> -        if ( mfn )
>>>>>> +        if ( !mfn_eq(mfn, _mfn(0)) )
>>>>>
>>>>> I admit I'm not fully certain either, but at the first glance
>>>>>
>>>>>            if ( mfn_x(mfn) )
>>>>>
>>>>> would seem more in line with the original code to me (and then
>>>>> also elsewhere).
>>>>
>>>> It is doing *exactly* the same thing. The whole point of typesafe
>>>> is to use the typesafe helpers rather than open-coding the test everywhere.
>>>>
>>>> It is also easier to spot any use of MFN 0 within the code, as you
>>>> can simply grep for "_mfn(0)".
>>>>
>>>> Therefore I will insist on keeping the code as-is.
>>>
>>> What I insist on is that the readability of the result of such changes
>>> also be kept in mind. The mfn_eq() construct is (I think) clearly less
>>> easy to read and recognize than the simpler alternative suggested.
>>
>> If mfn_eq() is less clear, then where do you draw the line between when
>> the macro should and should not be used?
> 
> I'm afraid there may not be a clear line to draw until everything
> has been converted.

I am sorry but this doesn't add up. Here you say that we can't have a 
clear line to draw until everything is converted but...

> I do seem to recall though that, perhaps in a
> different context, Andrew recently agreed with my view here (Andrew,
> please correct me if I'm wrong). It being a fuzzy thing, I guess
> maintainers get to judge ...

... here you say the maintainers get to decide when to use mfn_eq() (or 
other typesafe construction). So basically, we would never be able to 
fully convert the code and therefore never draw a line.

As I am trying to convert x86 to use typesafe types, I would like a bit
more guidance on your expectations for typesafe code. Can you clarify?

Cheers,

-- 
Julien Grall



* Re: [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers
  2020-04-20 10:10             ` Julien Grall
@ 2020-04-20 12:14               ` Jan Beulich
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2020-04-20 12:14 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Julien Grall, Tim Deegan,
	George Dunlap, Jun Nakajima, xen-devel, Roger Pau Monné

On 20.04.2020 12:10, Julien Grall wrote:
> Hi,
> 
> On 20/04/2020 10:16, Jan Beulich wrote:
>> On 18.04.2020 12:23, Julien Grall wrote:
>>> On 30/03/2020 08:52, Jan Beulich wrote:
>>>> On 28.03.2020 11:52, Julien Grall wrote:
>>>>> On 26/03/2020 15:39, Jan Beulich wrote:
>>>>>> On 22.03.2020 17:14, julien@xen.org wrote:
>>>>>>> @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v)
>>>>>>>           /* Free that page if non-zero */
>>>>>>>         do {
>>>>>>> -        if ( mfn )
>>>>>>> +        if ( !mfn_eq(mfn, _mfn(0)) )
>>>>>>
>>>>>> I admit I'm not fully certain either, but at the first glance
>>>>>>
>>>>>>            if ( mfn_x(mfn) )
>>>>>>
>>>>>> would seem more in line with the original code to me (and then
>>>>>> also elsewhere).
>>>>>
>>>>> It is doing *exactly* the same thing. The whole point of typesafe
>>>>> is to use the typesafe helpers rather than open-coding the test everywhere.
>>>>>
>>>>> It is also easier to spot any use of MFN 0 within the code, as you
>>>>> can simply grep for "_mfn(0)".
>>>>>
>>>>> Therefore I will insist on keeping the code as-is.
>>>>
>>>> What I insist on is that the readability of the result of such changes
>>>> also be kept in mind. The mfn_eq() construct is (I think) clearly less
>>>> easy to read and recognize than the simpler alternative suggested.
>>>
>>> If mfn_eq() is less clear, then where do you draw the line between when
>>> the macro should and should not be used?
>>
>> I'm afraid there may not be a clear line to draw until everything
>> has been converted.
> 
> I am sorry but this doesn't add up. Here you say that we can't have
> a clear line to draw until everything is converted but...
> 
>> I do seem to recall though that, perhaps in a
>> different context, Andrew recently agreed with my view here (Andrew,
>> please correct me if I'm wrong). It being a fuzzy thing, I guess
>> maintainers get to judge ...
> 
> ... here you say the maintainers get to decide when to use mfn_eq()
> (or other typesafe construction). So basically, we would never be
> able to fully convert the code and therefore never draw a line.

Why? Eventually both sides of an mfn_eq() will be of type mfn_t. And
in the specific case at hand, even with my alternative suggestion no
further change would be needed down the road. Type safety is for
things like function argument passing, assignments, and the like. A
leaf expression like "if ( mfn_x() )" is not type-unsafe in any way.
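
To put it differently, a hypothetical example of what I consider
perfectly fine:

    /* Illustration only: the interface takes the typesafe type ... */
    static void put_nonzero_table(mfn_t mfn)
    {
        /*
         * ... while a leaf test like this gives nothing up;
         * "if ( !mfn_eq(mfn, _mfn(0)) )" would be the equivalent spelling.
         */
        if ( mfn_x(mfn) )
            put_page(mfn_to_page(mfn));
    }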

Jan



end of thread, other threads:[~2020-04-20 12:15 UTC | newest]

Thread overview: 61+ messages
2020-03-22 16:14 [Xen-devel] [PATCH 00/17] Bunch of typesafe conversion julien
2020-03-22 16:14 ` [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN julien
2020-03-25 14:46   ` Jan Beulich
2020-03-28 10:14     ` Julien Grall
2020-03-30  7:38       ` Jan Beulich
2020-04-16 11:50         ` Julien Grall
2020-03-22 16:14 ` [Xen-devel] [PATCH 02/17] xen/x86_64: Convert do_page_walk() to use typesafe MFN julien
2020-03-25 14:51   ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header julien
2020-03-25 15:00   ` Jan Beulich
2020-03-25 18:09     ` Julien Grall
2020-03-26  9:02       ` Jan Beulich
2020-03-28 10:15         ` Julien Grall
2020-03-22 16:14 ` [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN julien
2020-03-25 15:27   ` Jan Beulich
2020-03-25 18:21     ` Julien Grall
2020-03-26  9:09       ` Jan Beulich
2020-03-28 10:33         ` Julien Grall
2020-03-22 16:14 ` [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers julien
2020-03-26 15:39   ` Jan Beulich
2020-03-28 10:52     ` Julien Grall
2020-03-30  7:52       ` Jan Beulich
2020-04-18 10:23         ` Julien Grall
2020-04-20  9:16           ` Jan Beulich
2020-04-20 10:10             ` Julien Grall
2020-04-20 12:14               ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn' julien
2020-03-26 15:51   ` Jan Beulich
2020-04-18 10:54     ` Julien Grall
2020-03-22 16:14 ` [Xen-devel] [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN julien
2020-03-26 15:54   ` Jan Beulich
2020-04-18 11:01     ` Julien Grall
2020-04-18 11:43       ` Julien Grall
2020-04-20  9:19         ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 08/17] xen/x86: traps: Convert show_page_walk() " julien
2020-03-22 16:14 ` [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn() julien
2020-03-27 10:52   ` Jan Beulich
2020-03-28 10:53     ` Julien Grall
2020-03-22 16:14 ` [Xen-devel] [PATCH 10/17] xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version julien
2020-03-27 11:34   ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 11/17] xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga() julien
2020-03-27 11:35   ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 12/17] xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m() julien
2020-03-27 11:35   ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 13/17] xen/x86: p2m: Reflow P2M_PRINTK()s " julien
2020-03-27 11:36   ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 14/17] xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline function julien
2020-03-27 12:44   ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 15/17] xen/x86: p2m: Rework printk format in audit_p2m() julien
2020-03-27 12:45   ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN julien
2020-03-23 12:11   ` Hongyan Xia
2020-03-23 12:26     ` Julien Grall
2020-03-27 13:15   ` Jan Beulich
2020-03-28 11:14     ` Julien Grall
2020-03-30  8:10       ` Jan Beulich
2020-03-22 16:14 ` [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn julien
2020-03-23  8:37   ` Paul Durrant
2020-03-23 10:26     ` Julien Grall
2020-03-27 13:50   ` Jan Beulich
2020-03-27 13:59     ` Julien Grall
