* [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
@ 2017-10-04 18:15 Julien Grall
2017-10-04 18:15 ` [PATCH 1/7] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
` (7 more replies)
0 siblings, 8 replies; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel
Cc: Elena Ufimtseva, Kevin Tian, Stefano Stabellini, Wei Liu,
Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
George Dunlap, Andrew Cooper, Julien Grall, Ian Jackson,
Tim Deegan, Julien Grall, Paul Durrant, Tamas K Lengyel,
Jan Beulich, Shane Wang, Suravee Suthikulpanit, Boris Ostrovsky,
Gang Wei
Hi all,
Most of the users of page_to_mfn and mfn_to_page either override
the macros to make them work with mfn_t or use mfn_x/_mfn because the rest
of the function uses mfn_t.
So I think it is time to make __page_to_mfn and __mfn_to_page use typesafe
MFN.
The first 6 patches convert parts of the code to use typesafe MFN, easing
the tree-wide conversion in patch 7.
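For context, the typesafe MFN this series converts to is built on a
single-member struct wrapper. A minimal sketch of the idea (simplified
illustration only; Xen's real mfn_t and helpers are generated by its
TYPE_SAFE machinery in xen/include/xen/mm.h):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Wrapping the raw frame number in a single-member struct keeps the same
 * size and ABI as unsigned long, but gives the compiler a distinct type,
 * so mixing up MFNs with plain integers becomes a build error.
 */
typedef struct { unsigned long mfn; } mfn_t;

static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }
static inline mfn_t mfn_add(mfn_t m, unsigned long i) { return _mfn(mfn_x(m) + i); }
static inline bool mfn_eq(mfn_t a, mfn_t b) { return mfn_x(a) == mfn_x(b); }
```

Code that mixes an mfn_t with raw arithmetic (e.g. `mfn + 1`) now fails to
compile, which is what forces the explicit mfn_add/mfn_eq conversions seen
throughout the series.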
Cheers,
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Shane Wang <shane.wang@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Julien Grall (7):
xen/arm: domain_build: Clean-up insert_11_bank
xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash
xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >>
PAGE_SHIFT
xen/kimage: Remove defined but unused variables
xen/xenoprof: Convert the file to use typesafe MFN
xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
xen/arch/arm/domain_build.c | 15 ++++++++-------
xen/arch/arm/kernel.c | 2 +-
xen/arch/arm/mem_access.c | 2 +-
xen/arch/arm/mm.c | 2 +-
xen/arch/arm/p2m.c | 8 +-------
xen/arch/x86/cpu/vpmu.c | 6 +++---
xen/arch/x86/debug.c | 2 +-
xen/arch/x86/domain.c | 21 +++++++++++----------
xen/arch/x86/domctl.c | 2 +-
xen/arch/x86/hvm/dm.c | 2 +-
xen/arch/x86/hvm/dom0_build.c | 6 +++---
xen/arch/x86/hvm/hvm.c | 16 ++++++++--------
xen/arch/x86/hvm/ioreq.c | 6 +++---
xen/arch/x86/hvm/stdvga.c | 2 +-
xen/arch/x86/hvm/svm/svm.c | 4 ++--
xen/arch/x86/hvm/viridian.c | 8 ++++----
xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
xen/arch/x86/hvm/vmx/vmx.c | 10 +++++-----
xen/arch/x86/hvm/vmx/vvmx.c | 2 +-
xen/arch/x86/mm.c | 6 ------
xen/arch/x86/mm/guest_walk.c | 6 +++---
xen/arch/x86/mm/hap/guest_walk.c | 2 +-
xen/arch/x86/mm/hap/hap.c | 6 ------
xen/arch/x86/mm/hap/nested_ept.c | 2 +-
xen/arch/x86/mm/mem_sharing.c | 9 ++-------
xen/arch/x86/mm/p2m-ept.c | 4 ++++
xen/arch/x86/mm/p2m-pod.c | 6 ------
xen/arch/x86/mm/p2m.c | 6 ------
xen/arch/x86/mm/paging.c | 6 ------
xen/arch/x86/mm/shadow/common.c | 2 +-
xen/arch/x86/mm/shadow/multi.c | 6 +++---
xen/arch/x86/mm/shadow/private.h | 16 ++--------------
xen/arch/x86/numa.c | 2 +-
xen/arch/x86/physdev.c | 2 +-
xen/arch/x86/pv/callback.c | 6 ------
xen/arch/x86/pv/descriptor-tables.c | 6 ------
xen/arch/x86/pv/dom0_build.c | 6 ++++++
xen/arch/x86/pv/domain.c | 6 ------
xen/arch/x86/pv/emul-gate-op.c | 6 ------
xen/arch/x86/pv/emul-priv-op.c | 10 ----------
xen/arch/x86/pv/grant_table.c | 6 ------
xen/arch/x86/pv/ro-page-fault.c | 6 ------
xen/arch/x86/smpboot.c | 6 ------
xen/arch/x86/tboot.c | 4 ++--
xen/arch/x86/traps.c | 2 +-
xen/arch/x86/x86_64/mm.c | 6 ++++++
xen/common/domain.c | 4 ++--
xen/common/event_fifo.c | 2 +-
xen/common/grant_table.c | 6 ++++++
xen/common/kimage.c | 25 +++++++++++--------------
xen/common/memory.c | 6 ++++++
xen/common/page_alloc.c | 6 ++++++
xen/common/tmem.c | 2 +-
xen/common/tmem_xen.c | 24 ++++++++++++------------
xen/common/trace.c | 6 ++++++
xen/common/vmap.c | 9 +++++----
xen/common/xenoprof.c | 17 +++++++++++------
xen/drivers/passthrough/amd/iommu_map.c | 6 ++++++
xen/drivers/passthrough/iommu.c | 2 +-
xen/drivers/passthrough/x86/iommu.c | 2 +-
xen/include/asm-arm/mm.h | 22 ++++++++++++----------
xen/include/asm-arm/p2m.h | 4 ++--
xen/include/asm-x86/mm.h | 12 ++++++------
xen/include/asm-x86/p2m.h | 2 +-
xen/include/asm-x86/page.h | 32 ++++++++++++++++----------------
xen/include/xen/domain_page.h | 4 ++--
xen/include/xen/tmem_xen.h | 2 +-
67 files changed, 206 insertions(+), 258 deletions(-)
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 1/7] xen/arm: domain_build: Clean-up insert_11_bank
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
@ 2017-10-04 18:15 ` Julien Grall
2017-10-04 22:39 ` Andrew Cooper
2017-10-04 18:15 ` [PATCH 2/7] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash Julien Grall
` (6 subsequent siblings)
7 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel; +Cc: Julien Grall, Stefano Stabellini
- Remove spurious ()
- Add missing spaces
- Turn 1 << to 1UL <<
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
xen/arch/arm/domain_build.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3723dc3f78..093ebf1a8e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -109,11 +109,11 @@ static bool insert_11_bank(struct domain *d,
spfn = page_to_mfn(pg);
start = pfn_to_paddr(spfn);
- size = pfn_to_paddr((1 << order));
+ size = pfn_to_paddr(1UL << order);
D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
start, start + size,
- 1UL << (order+PAGE_SHIFT-20),
+ 1UL << (order + PAGE_SHIFT - 20),
/* Don't want format this as PRIpaddr (16 digit hex) */
(unsigned long)(kinfo->unassigned_mem >> 20),
order);
@@ -167,7 +167,8 @@ static bool insert_11_bank(struct domain *d,
*/
if ( start + size < bank->start && kinfo->mem.nr_banks < NR_MEM_BANKS )
{
- memmove(bank + 1, bank, sizeof(*bank)*(kinfo->mem.nr_banks - i));
+ memmove(bank + 1, bank,
+ sizeof(*bank) * (kinfo->mem.nr_banks - i));
kinfo->mem.nr_banks++;
bank->start = start;
bank->size = size;
--
2.11.0
* [PATCH 2/7] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
2017-10-04 18:15 ` [PATCH 1/7] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
@ 2017-10-04 18:15 ` Julien Grall
2017-10-04 18:15 ` [PATCH 3/7] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT Julien Grall
` (5 subsequent siblings)
7 siblings, 0 replies; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel; +Cc: Julien Grall, Stefano Stabellini
The arm32 version of the function is_xen_heap_page currently defines a
variable _mfn. This will lead to a compiler error when using typesafe MFN
in a follow-up patch:
called object '_mfn' is not a function or function pointer
Fix it by renaming the local variable _mfn to mfn_.
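To see why the rename matters: inside a GCC statement expression, a local
variable named _mfn shadows the _mfn() constructor for the rest of the
macro body, including the initializer itself. A reduced, hypothetical
example of the broken and fixed patterns (not Xen code; the macro name is
made up for illustration):

```c
#include <assert.h>

typedef struct { unsigned long mfn; } mfn_t;

static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

/*
 * Broken pattern: the local '_mfn' shadows the _mfn() constructor, so
 * an argument that itself expands to a call of _mfn(...), e.g.
 * is_odd_mfn(mfn_x(_mfn(3))), fails with:
 *   called object '_mfn' is not a function or function pointer
 *
 * #define is_odd_mfn(mfn) ({          \
 *     unsigned long _mfn = (mfn);     \
 *     _mfn & 1;                       \
 * })
 */

/* Fixed pattern: a trailing underscore avoids the shadowing. */
#define is_odd_mfn(mfn) ({             \
    unsigned long mfn_ = (mfn);        \
    mfn_ & 1;                          \
})
```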
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
xen/include/asm-arm/mm.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index cd6dfb54b9..737a429409 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -140,9 +140,9 @@ extern vaddr_t xenheap_virt_start;
#ifdef CONFIG_ARM_32
#define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
#define is_xen_heap_mfn(mfn) ({ \
- unsigned long _mfn = (mfn); \
- (_mfn >= mfn_x(xenheap_mfn_start) && \
- _mfn < mfn_x(xenheap_mfn_end)); \
+ unsigned long mfn_ = (mfn); \
+ (mfn_ >= mfn_x(xenheap_mfn_start) && \
+ mfn_ < mfn_x(xenheap_mfn_end)); \
})
#else
#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
--
2.11.0
* [PATCH 3/7] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
2017-10-04 18:15 ` [PATCH 1/7] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
2017-10-04 18:15 ` [PATCH 2/7] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash Julien Grall
@ 2017-10-04 18:15 ` Julien Grall
2017-10-04 22:41 ` Andrew Cooper
2017-10-04 18:15 ` [PATCH 4/7] xen/kimage: Remove defined but unused variables Julien Grall
` (4 subsequent siblings)
7 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel
Cc: Elena Ufimtseva, George Dunlap, Andrew Cooper, Julien Grall,
Tim Deegan, Jan Beulich
The constructions _mfn(... >> PAGE_SHIFT) and mfn_to_page(... >> PAGE_SHIFT)
could respectively be replaced by maddr_to_mfn(...) and
maddr_to_page(...).
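The helpers involved just fold the shift into a named conversion. A
self-contained sketch of what maddr_to_mfn() and mfn_to_maddr() compute,
assuming 4K pages (the real helpers live in xen/include/xen/mm.h and the
per-arch headers):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;
typedef struct { unsigned long mfn; } mfn_t;

#define PAGE_SHIFT 12 /* 4K pages assumed for this example */

static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

/* Machine address -> machine frame number: drop the in-page offset. */
static inline mfn_t maddr_to_mfn(paddr_t ma) { return _mfn(ma >> PAGE_SHIFT); }
/* Machine frame number -> machine address of the frame's first byte. */
static inline paddr_t mfn_to_maddr(mfn_t m) { return (paddr_t)mfn_x(m) << PAGE_SHIFT; }
```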
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
xen/arch/x86/debug.c | 2 +-
xen/arch/x86/mm/shadow/common.c | 2 +-
xen/arch/x86/mm/shadow/multi.c | 6 +++---
xen/common/kimage.c | 6 +++---
4 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 1c10b84a16..9159f32db4 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -98,7 +98,7 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
l2_pgentry_t l2e, *l2t;
l1_pgentry_t l1e, *l1t;
unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
- mfn_t mfn = _mfn(cr3 >> PAGE_SHIFT);
+ mfn_t mfn = maddr_to_mfn(cr3);
DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id,
cr3, pgd3val);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 86186cccdf..f65d2a6523 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2640,7 +2640,7 @@ static int sh_remove_shadow_via_pointer(struct domain *d, mfn_t smfn)
ASSERT(sh_type_has_up_pointer(d, sp->u.sh.type));
if (sp->up == 0) return 0;
- pmfn = _mfn(sp->up >> PAGE_SHIFT);
+ pmfn = maddr_to_mfn(sp->up);
ASSERT(mfn_valid(pmfn));
vaddr = map_domain_page(pmfn);
ASSERT(vaddr);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 28030acbf6..1e42e1d8ab 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2425,7 +2425,7 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
sp = mfn_to_page(smfn);
if ( sp->u.sh.count != 1 || !sp->up )
return 0;
- smfn = _mfn(sp->up >> PAGE_SHIFT);
+ smfn = maddr_to_mfn(sp->up);
ASSERT(mfn_valid(smfn));
#if (SHADOW_PAGING_LEVELS == 4)
@@ -2434,7 +2434,7 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
ASSERT(sh_type_has_up_pointer(d, SH_type_l2_shadow));
if ( sp->u.sh.count != 1 || !sp->up )
return 0;
- smfn = _mfn(sp->up >> PAGE_SHIFT);
+ smfn = maddr_to_mfn(sp->up);
ASSERT(mfn_valid(smfn));
/* up to l4 */
@@ -2442,7 +2442,7 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn)
if ( sp->u.sh.count != 1
|| !sh_type_has_up_pointer(d, SH_type_l3_64_shadow) || !sp->up )
return 0;
- smfn = _mfn(sp->up >> PAGE_SHIFT);
+ smfn = maddr_to_mfn(sp->up);
ASSERT(mfn_valid(smfn));
#endif
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index cf624d10fd..ebc71affd1 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -504,7 +504,7 @@ static void kimage_free_entry(kimage_entry_t entry)
{
struct page_info *page;
- page = mfn_to_page(entry >> PAGE_SHIFT);
+ page = maddr_to_page(entry);
free_domheap_page(page);
}
@@ -636,8 +636,8 @@ static struct page_info *kimage_alloc_page(struct kexec_image *image,
if ( old )
{
/* If so move it. */
- mfn_t old_mfn = _mfn(*old >> PAGE_SHIFT);
- mfn_t mfn = _mfn(addr >> PAGE_SHIFT);
+ mfn_t old_mfn = maddr_to_mfn(*old);
+ mfn_t mfn = maddr_to_mfn(addr);
copy_domain_page(mfn, old_mfn);
clear_domain_page(old_mfn);
--
2.11.0
* [PATCH 4/7] xen/kimage: Remove defined but unused variables
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
` (2 preceding siblings ...)
2017-10-04 18:15 ` [PATCH 3/7] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT Julien Grall
@ 2017-10-04 18:15 ` Julien Grall
2017-10-04 22:42 ` Andrew Cooper
2017-10-04 18:15 ` [PATCH 5/7] xen/xenoprof: Convert the file to use typesafe MFN Julien Grall
` (3 subsequent siblings)
7 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel; +Cc: Andrew Cooper, Julien Grall
In the function kimage_alloc_normal_control_page, the variables mfn and
emfn are defined but not used. Remove them.
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
xen/common/kimage.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index ebc71affd1..07587896a4 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -310,14 +310,11 @@ static struct page_info *kimage_alloc_normal_control_page(
* destination page.
*/
do {
- unsigned long mfn, emfn;
paddr_t addr, eaddr;
page = kimage_alloc_zeroed_page(memflags);
if ( !page )
break;
- mfn = page_to_mfn(page);
- emfn = mfn + 1;
addr = page_to_maddr(page);
eaddr = addr + PAGE_SIZE;
if ( kimage_is_destination_range(image, addr, eaddr) )
--
2.11.0
* [PATCH 5/7] xen/xenoprof: Convert the file to use typesafe MFN
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
` (3 preceding siblings ...)
2017-10-04 18:15 ` [PATCH 4/7] xen/kimage: Remove defined but unused variables Julien Grall
@ 2017-10-04 18:15 ` Julien Grall
2017-10-04 22:43 ` Andrew Cooper
2017-10-04 18:15 ` [PATCH 6/7] xen/tmem: Convert the file common/tmem_xen.c " Julien Grall
` (2 subsequent siblings)
7 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
George Dunlap, Andrew Cooper, Julien Grall, Ian Jackson,
Tim Deegan, Jan Beulich
The file common/xenoprof.c is now converted to use typesafe MFN. This
requires overriding the macros virt_to_mfn and mfn_to_page to make
them work with mfn_t.
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
---
xen/common/xenoprof.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index a5fe6204a5..98937c9ac6 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -19,6 +19,12 @@
#include <xsm/xsm.h>
#include <xen/hypercall.h>
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+
/* Limit amount of pages used for shared buffer (per domain) */
#define MAX_OPROF_SHARED_PAGES 32
@@ -134,25 +140,26 @@ static void xenoprof_reset_buf(struct domain *d)
}
static int
-share_xenoprof_page_with_guest(struct domain *d, unsigned long mfn, int npages)
+share_xenoprof_page_with_guest(struct domain *d, mfn_t mfn, int npages)
{
int i;
/* Check if previous page owner has released the page. */
for ( i = 0; i < npages; i++ )
{
- struct page_info *page = mfn_to_page(mfn + i);
+ struct page_info *page = mfn_to_page(mfn_add(mfn, i));
if ( (page->count_info & (PGC_allocated|PGC_count_mask)) != 0 )
{
printk(XENLOG_G_INFO "dom%d mfn %#lx page->count_info %#lx\n",
- d->domain_id, mfn + i, page->count_info);
+ d->domain_id, mfn_x(mfn_add(mfn, i)), page->count_info);
return -EBUSY;
}
page_set_owner(page, NULL);
}
for ( i = 0; i < npages; i++ )
- share_xen_page_with_guest(mfn_to_page(mfn + i), d, XENSHARE_writable);
+ share_xen_page_with_guest(mfn_to_page(mfn_add(mfn, i)),
+ d, XENSHARE_writable);
return 0;
}
@@ -161,11 +168,11 @@ static void
unshare_xenoprof_page_with_guest(struct xenoprof *x)
{
int i, npages = x->npages;
- unsigned long mfn = virt_to_mfn(x->rawbuf);
+ mfn_t mfn = virt_to_mfn(x->rawbuf);
for ( i = 0; i < npages; i++ )
{
- struct page_info *page = mfn_to_page(mfn + i);
+ struct page_info *page = mfn_to_page(mfn_add(mfn, i));
BUG_ON(page_get_owner(page) != current->domain);
if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
put_page(page);
--
2.11.0
* [PATCH 6/7] xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
` (4 preceding siblings ...)
2017-10-04 18:15 ` [PATCH 5/7] xen/xenoprof: Convert the file to use typesafe MFN Julien Grall
@ 2017-10-04 18:15 ` Julien Grall
2017-10-04 22:46 ` Andrew Cooper
2017-10-04 18:15 ` [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
2017-10-06 17:31 ` [PATCH 0/7] " Tim Deegan
7 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel; +Cc: Julien Grall, Konrad Rzeszutek Wilk
The file common/tmem_xen.c is now converted to use typesafe MFN. This
requires overriding the macro page_to_mfn to make it work with mfn_t.
Note that all variables converted to mfn_t have their initial value,
when set, switched from 0 to INVALID_MFN. This is fine because the initial
value was always overridden before use.
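Using INVALID_MFN rather than 0 as the placeholder also means an accidental
use of an unset value no longer silently aliases frame 0. A sketch of the
sentinel, assuming the all-ones convention Xen uses for INVALID_MFN:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { unsigned long mfn; } mfn_t;

static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }
static inline bool mfn_eq(mfn_t a, mfn_t b) { return mfn_x(a) == mfn_x(b); }

/* All-ones sentinel: never a valid frame number, unlike frame 0. */
#define INVALID_MFN _mfn(~0UL)
```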
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
xen/common/tmem_xen.c | 28 ++++++++++++++++------------
1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 20f74b268f..8dc031514a 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -14,6 +14,10 @@
#include <xen/cpu.h>
#include <xen/init.h>
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+
bool __read_mostly opt_tmem;
boolean_param("tmem", opt_tmem);
@@ -31,7 +35,7 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, dstmem);
static DEFINE_PER_CPU_READ_MOSTLY(void *, scratch_page);
#if defined(CONFIG_ARM)
-static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
+static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn,
struct page_info **pcli_pfp, bool cli_write)
{
ASSERT_UNREACHABLE();
@@ -39,14 +43,14 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
}
static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
- unsigned long cli_mfn, bool mark_dirty)
+ mfn_t cli_mfn, bool mark_dirty)
{
ASSERT_UNREACHABLE();
}
#else
#include <asm/p2m.h>
-static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
+static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn,
struct page_info **pcli_pfp, bool cli_write)
{
p2m_type_t t;
@@ -68,16 +72,16 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
*pcli_mfn = page_to_mfn(page);
*pcli_pfp = page;
- return map_domain_page(_mfn(*pcli_mfn));
+ return map_domain_page(*pcli_mfn);
}
static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
- unsigned long cli_mfn, bool mark_dirty)
+ mfn_t cli_mfn, bool mark_dirty)
{
if ( mark_dirty )
{
put_page_and_type(cli_pfp);
- paging_mark_dirty(current->domain, _mfn(cli_mfn));
+ paging_mark_dirty(current->domain, cli_mfn);
}
else
put_page(cli_pfp);
@@ -88,14 +92,14 @@ static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
int tmem_copy_from_client(struct page_info *pfp,
xen_pfn_t cmfn, tmem_cli_va_param_t clibuf)
{
- unsigned long tmem_mfn, cli_mfn = 0;
+ mfn_t tmem_mfn, cli_mfn = INVALID_MFN;
char *tmem_va, *cli_va = NULL;
struct page_info *cli_pfp = NULL;
int rc = 1;
ASSERT(pfp != NULL);
tmem_mfn = page_to_mfn(pfp);
- tmem_va = map_domain_page(_mfn(tmem_mfn));
+ tmem_va = map_domain_page(tmem_mfn);
if ( guest_handle_is_null(clibuf) )
{
cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 0);
@@ -125,7 +129,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn,
unsigned char *wmem = this_cpu(workmem);
char *scratch = this_cpu(scratch_page);
struct page_info *cli_pfp = NULL;
- unsigned long cli_mfn = 0;
+ mfn_t cli_mfn = INVALID_MFN;
void *cli_va = NULL;
if ( dmem == NULL || wmem == NULL )
@@ -152,7 +156,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn,
int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
tmem_cli_va_param_t clibuf)
{
- unsigned long tmem_mfn, cli_mfn = 0;
+ mfn_t tmem_mfn, cli_mfn = INVALID_MFN;
char *tmem_va, *cli_va = NULL;
struct page_info *cli_pfp = NULL;
int rc = 1;
@@ -165,7 +169,7 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
return -EFAULT;
}
tmem_mfn = page_to_mfn(pfp);
- tmem_va = map_domain_page(_mfn(tmem_mfn));
+ tmem_va = map_domain_page(tmem_mfn);
if ( cli_va )
{
memcpy(cli_va, tmem_va, PAGE_SIZE);
@@ -181,7 +185,7 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
size_t size, tmem_cli_va_param_t clibuf)
{
- unsigned long cli_mfn = 0;
+ mfn_t cli_mfn = INVALID_MFN;
struct page_info *cli_pfp = NULL;
void *cli_va = NULL;
char *scratch = this_cpu(scratch_page);
--
2.11.0
* [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
` (5 preceding siblings ...)
2017-10-04 18:15 ` [PATCH 6/7] xen/tmem: Convert the file common/tmem_xen.c " Julien Grall
@ 2017-10-04 18:15 ` Julien Grall
2017-10-04 19:38 ` Razvan Cojocaru
` (2 more replies)
2017-10-06 17:31 ` [PATCH 0/7] " Tim Deegan
7 siblings, 3 replies; 20+ messages in thread
From: Julien Grall @ 2017-10-04 18:15 UTC (permalink / raw)
To: xen-devel
Cc: Jun Nakajima, Kevin Tian, Stefano Stabellini, Wei Liu,
Suravee Suthikulpanit, Razvan Cojocaru, Konrad Rzeszutek Wilk,
George Dunlap, Andrew Cooper, Julien Grall, Ian Jackson,
Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich,
Shane Wang, Boris Ostrovsky, Gang Wei, Paul Durrant
Most of the users of page_to_mfn and mfn_to_page are either overriding
the macros to make them work with mfn_t or use mfn_x/_mfn because the
rest of the function uses mfn_t.
So make __page_to_mfn and __mfn_to_page return mfn_t by default.
Only reasonable clean-ups are done in this patch because it is
already quite big. So some of the files now override page_to_mfn and
mfn_to_page to avoid using mfn_t.
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: Shane Wang <shane.wang@intel.com>
---
xen/arch/arm/domain_build.c | 8 ++++----
xen/arch/arm/kernel.c | 2 +-
xen/arch/arm/mem_access.c | 2 +-
xen/arch/arm/mm.c | 2 +-
xen/arch/arm/p2m.c | 8 +-------
xen/arch/x86/cpu/vpmu.c | 6 +++---
xen/arch/x86/domain.c | 21 +++++++++++----------
xen/arch/x86/domctl.c | 2 +-
xen/arch/x86/hvm/dm.c | 2 +-
xen/arch/x86/hvm/dom0_build.c | 6 +++---
xen/arch/x86/hvm/hvm.c | 16 ++++++++--------
xen/arch/x86/hvm/ioreq.c | 6 +++---
xen/arch/x86/hvm/stdvga.c | 2 +-
xen/arch/x86/hvm/svm/svm.c | 4 ++--
xen/arch/x86/hvm/viridian.c | 8 ++++----
xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
xen/arch/x86/hvm/vmx/vmx.c | 10 +++++-----
xen/arch/x86/hvm/vmx/vvmx.c | 2 +-
xen/arch/x86/mm.c | 6 ------
xen/arch/x86/mm/guest_walk.c | 6 +++---
xen/arch/x86/mm/hap/guest_walk.c | 2 +-
xen/arch/x86/mm/hap/hap.c | 6 ------
xen/arch/x86/mm/hap/nested_ept.c | 2 +-
xen/arch/x86/mm/mem_sharing.c | 9 ++-------
xen/arch/x86/mm/p2m-ept.c | 4 ++++
xen/arch/x86/mm/p2m-pod.c | 6 ------
xen/arch/x86/mm/p2m.c | 6 ------
xen/arch/x86/mm/paging.c | 6 ------
xen/arch/x86/mm/shadow/private.h | 16 ++--------------
xen/arch/x86/numa.c | 2 +-
xen/arch/x86/physdev.c | 2 +-
xen/arch/x86/pv/callback.c | 6 ------
xen/arch/x86/pv/descriptor-tables.c | 6 ------
xen/arch/x86/pv/dom0_build.c | 6 ++++++
xen/arch/x86/pv/domain.c | 6 ------
xen/arch/x86/pv/emul-gate-op.c | 6 ------
xen/arch/x86/pv/emul-priv-op.c | 10 ----------
xen/arch/x86/pv/grant_table.c | 6 ------
xen/arch/x86/pv/ro-page-fault.c | 6 ------
xen/arch/x86/smpboot.c | 6 ------
xen/arch/x86/tboot.c | 4 ++--
xen/arch/x86/traps.c | 2 +-
xen/arch/x86/x86_64/mm.c | 6 ++++++
xen/common/domain.c | 4 ++--
xen/common/event_fifo.c | 2 +-
xen/common/grant_table.c | 6 ++++++
xen/common/kimage.c | 16 ++++++++--------
xen/common/memory.c | 6 ++++++
xen/common/page_alloc.c | 6 ++++++
xen/common/tmem.c | 2 +-
xen/common/tmem_xen.c | 4 ----
xen/common/trace.c | 6 ++++++
xen/common/vmap.c | 9 +++++----
xen/common/xenoprof.c | 2 --
xen/drivers/passthrough/amd/iommu_map.c | 6 ++++++
xen/drivers/passthrough/iommu.c | 2 +-
xen/drivers/passthrough/x86/iommu.c | 2 +-
xen/include/asm-arm/mm.h | 16 +++++++++-------
xen/include/asm-arm/p2m.h | 4 ++--
xen/include/asm-x86/mm.h | 12 ++++++------
xen/include/asm-x86/p2m.h | 2 +-
xen/include/asm-x86/page.h | 32 ++++++++++++++++----------------
xen/include/xen/domain_page.h | 4 ++--
xen/include/xen/tmem_xen.h | 2 +-
64 files changed, 168 insertions(+), 229 deletions(-)
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 093ebf1a8e..0753d03aac 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -104,11 +104,11 @@ static bool insert_11_bank(struct domain *d,
unsigned int order)
{
int res, i;
- paddr_t spfn;
+ mfn_t smfn;
paddr_t start, size;
- spfn = page_to_mfn(pg);
- start = pfn_to_paddr(spfn);
+ smfn = page_to_mfn(pg);
+ start = mfn_to_maddr(smfn);
size = pfn_to_paddr(1UL << order);
D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
@@ -126,7 +126,7 @@ static bool insert_11_bank(struct domain *d,
goto fail;
}
- res = guest_physmap_add_page(d, _gfn(spfn), _mfn(spfn), order);
+ res = guest_physmap_add_page(d, _gfn(mfn_x(smfn)), smfn, order);
if ( res )
panic("Failed map pages to DOM0: %d", res);
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 9c183f96da..f391938640 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -295,7 +295,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
iounmap(input);
return -ENOMEM;
}
- mfn = _mfn(page_to_mfn(pages));
+ mfn = page_to_mfn(pages);
output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
rc = perform_gunzip(output, input, size);
diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index 0f2cbb81d3..112e291cba 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -210,7 +210,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
if ( t != p2m_ram_rw )
goto err;
- page = mfn_to_page(mfn_x(mfn));
+ page = mfn_to_page(mfn);
if ( unlikely(!get_page(page, v->domain)) )
page = NULL;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9a37f29ce6..fe7a5da9bb 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1286,7 +1286,7 @@ int xenmem_add_to_physmap_one(
return -EINVAL;
}
- mfn = _mfn(page_to_mfn(page));
+ mfn = page_to_mfn(page);
t = p2m_map_foreign;
rcu_unlock_domain(od);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0410b1e86b..1e7a0c6c40 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -38,12 +38,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
#define P2M_ROOT_PAGES (1<<P2M_ROOT_ORDER)
-/* Override macros from asm/mm.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
unsigned int __read_mostly p2m_ipa_bits;
/* Helpers to lookup the properties of each level */
@@ -98,7 +92,7 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
printk("P2M @ %p mfn:0x%lx\n",
- p2m->root, __page_to_mfn(p2m->root));
+ p2m->root, mfn_x(page_to_mfn(p2m->root)));
dump_pt_walk(page_to_maddr(p2m->root), addr,
P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index fd2fcacc26..c3acddc53e 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -657,7 +657,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
{
struct vcpu *v;
struct vpmu_struct *vpmu;
- uint64_t mfn;
+ mfn_t mfn;
void *xenpmu_data;
if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
@@ -678,8 +678,8 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
if ( xenpmu_data )
{
- mfn = domain_page_map_to_mfn(xenpmu_data);
- ASSERT(mfn_valid(_mfn(mfn)));
+ mfn = _mfn(domain_page_map_to_mfn(xenpmu_data));
+ ASSERT(mfn_valid(mfn));
unmap_domain_page_global(xenpmu_data);
put_page_and_type(mfn_to_page(mfn));
}
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bb1ffa3222..395ef6145a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -186,7 +186,7 @@ void dump_pageframe_info(struct domain *d)
}
}
printk(" DomPage %p: caf=%08lx, taf=%" PRtype_info "\n",
- _p(page_to_mfn(page)),
+ _p(mfn_x(page_to_mfn(page))),
page->count_info, page->u.inuse.type_info);
}
spin_unlock(&d->page_alloc_lock);
@@ -199,7 +199,7 @@ void dump_pageframe_info(struct domain *d)
page_list_for_each ( page, &d->xenpage_list )
{
printk(" XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
- _p(page_to_mfn(page)),
+ _p(mfn_x(page_to_mfn(page))),
page->count_info, page->u.inuse.type_info);
}
spin_unlock(&d->page_alloc_lock);
@@ -621,7 +621,8 @@ int arch_domain_soft_reset(struct domain *d)
struct page_info *page = virt_to_page(d->shared_info), *new_page;
int ret = 0;
struct domain *owner;
- unsigned long mfn, gfn;
+ mfn_t mfn;
+ unsigned long gfn;
p2m_type_t p2mt;
unsigned int i;
@@ -655,7 +656,7 @@ int arch_domain_soft_reset(struct domain *d)
ASSERT( owner == d );
mfn = page_to_mfn(page);
- gfn = mfn_to_gmfn(d, mfn);
+ gfn = mfn_to_gmfn(d, mfn_x(mfn));
/*
* gfn == INVALID_GFN indicates that the shared_info page was never mapped
@@ -664,7 +665,7 @@ int arch_domain_soft_reset(struct domain *d)
if ( gfn == gfn_x(INVALID_GFN) )
goto exit_put_page;
- if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
+ if ( !mfn_eq(get_gfn_query(d, gfn, &p2mt), mfn) )
{
printk(XENLOG_G_ERR "Failed to get Dom%d's shared_info GFN (%lx)\n",
d->domain_id, gfn);
@@ -681,7 +682,7 @@ int arch_domain_soft_reset(struct domain *d)
goto exit_put_gfn;
}
- ret = guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), PAGE_ORDER_4K);
+ ret = guest_physmap_remove_page(d, _gfn(gfn), mfn, PAGE_ORDER_4K);
if ( ret )
{
printk(XENLOG_G_ERR "Failed to remove Dom%d's shared_info frame %lx\n",
@@ -690,7 +691,7 @@ int arch_domain_soft_reset(struct domain *d)
goto exit_put_gfn;
}
- ret = guest_physmap_add_page(d, _gfn(gfn), _mfn(page_to_mfn(new_page)),
+ ret = guest_physmap_add_page(d, _gfn(gfn), page_to_mfn(new_page),
PAGE_ORDER_4K);
if ( ret )
{
@@ -988,7 +989,7 @@ int arch_set_info_guest(
{
if ( (page->u.inuse.type_info & PGT_type_mask) ==
PGT_l4_page_table )
- done = !fill_ro_mpt(_mfn(page_to_mfn(page)));
+ done = !fill_ro_mpt(page_to_mfn(page));
page_unlock(page);
}
@@ -1114,7 +1115,7 @@ int arch_set_info_guest(
l4_pgentry_t *l4tab;
l4tab = map_domain_page(_mfn(pagetable_get_pfn(v->arch.guest_table)));
- *l4tab = l4e_from_pfn(page_to_mfn(cr3_page),
+ *l4tab = l4e_from_pfn(mfn_x(page_to_mfn(cr3_page)),
_PAGE_PRESENT|_PAGE_RW|_PAGE_USER|_PAGE_ACCESSED);
unmap_domain_page(l4tab);
}
@@ -1941,7 +1942,7 @@ int domain_relinquish_resources(struct domain *d)
if ( d->arch.pirq_eoi_map != NULL )
{
unmap_domain_page_global(d->arch.pirq_eoi_map);
- put_page_and_type(mfn_to_page(d->arch.pirq_eoi_map_mfn));
+ put_page_and_type(mfn_to_page(_mfn(d->arch.pirq_eoi_map_mfn)));
d->arch.pirq_eoi_map = NULL;
d->arch.auto_unmask = 0;
}
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 540ba089d7..9292ae5118 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -429,7 +429,7 @@ long arch_do_domctl(
{
if ( i >= max_pfns )
break;
- mfn = page_to_mfn(page);
+ mfn = mfn_x(page_to_mfn(page));
if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
i, &mfn, 1) )
{
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 9cf53b551c..1a83f27c0b 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -219,7 +219,7 @@ static int modified_memory(struct domain *d,
page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
if ( page )
{
- mfn_t gmfn = _mfn(page_to_mfn(page));
+ mfn_t gmfn = page_to_mfn(page);
paging_mark_dirty(d, gmfn);
/*
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index e8f746c70b..7789f6e571 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -120,7 +120,7 @@ static int __init pvh_populate_memory_range(struct domain *d,
continue;
}
- rc = guest_physmap_add_page(d, _gfn(start), _mfn(page_to_mfn(page)),
+ rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
order);
if ( rc != 0 )
{
@@ -270,7 +270,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
}
write_32bit_pse_identmap(ident_pt);
unmap_domain_page(ident_pt);
- put_page(mfn_to_page(mfn_x(mfn)));
+ put_page(mfn_to_page(mfn));
d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] = gaddr;
if ( pvh_add_mem_range(d, gaddr, gaddr + PAGE_SIZE, E820_RESERVED) )
printk("Unable to set identity page tables as reserved in the memory map\n");
@@ -288,7 +288,7 @@ static void __init pvh_steal_low_ram(struct domain *d, unsigned long start,
for ( mfn = start; mfn < start + nr_pages; mfn++ )
{
- struct page_info *pg = mfn_to_page(mfn);
+ struct page_info *pg = mfn_to_page(_mfn(mfn));
int rc;
rc = unshare_xen_page_with_guest(pg, dom_io);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 205b4cb685..e8dbc23a51 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2211,7 +2211,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
v->arch.guest_table = pagetable_from_page(page);
HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR3 value = %lx, mfn = %lx",
- v->arch.hvm_vcpu.guest_cr[3], page_to_mfn(page));
+ v->arch.hvm_vcpu.guest_cr[3], mfn_x(page_to_mfn(page)));
}
}
else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
@@ -2546,7 +2546,7 @@ static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
if ( unlikely(p2m_is_discard_write(p2mt)) )
*writable = 0;
else if ( !permanent )
- paging_mark_dirty(d, _mfn(page_to_mfn(page)));
+ paging_mark_dirty(d, page_to_mfn(page));
}
if ( !permanent )
@@ -2588,13 +2588,13 @@ void *hvm_map_guest_frame_ro(unsigned long gfn, bool_t permanent)
void hvm_unmap_guest_frame(void *p, bool_t permanent)
{
- unsigned long mfn;
+ mfn_t mfn;
struct page_info *page;
if ( !p )
return;
- mfn = domain_page_map_to_mfn(p);
+ mfn = _mfn(domain_page_map_to_mfn(p));
page = mfn_to_page(mfn);
if ( !permanent )
@@ -2609,7 +2609,7 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent)
list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
if ( track->page == page )
{
- paging_mark_dirty(d, _mfn(mfn));
+ paging_mark_dirty(d, mfn);
list_del(&track->list);
xfree(track);
break;
@@ -2626,7 +2626,7 @@ void hvm_mapped_guest_frames_mark_dirty(struct domain *d)
spin_lock(&d->arch.hvm_domain.write_map.lock);
list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
- paging_mark_dirty(d, _mfn(page_to_mfn(track->page)));
+ paging_mark_dirty(d, page_to_mfn(track->page));
spin_unlock(&d->arch.hvm_domain.write_map.lock);
}
@@ -3201,7 +3201,7 @@ static enum hvm_translation_result __hvm_copy(
if ( xchg(&lastpage, gfn_x(gfn)) != gfn_x(gfn) )
dprintk(XENLOG_G_DEBUG,
"%pv attempted write to read-only gfn %#lx (mfn=%#lx)\n",
- v, gfn_x(gfn), page_to_mfn(page));
+ v, gfn_x(gfn), mfn_x(page_to_mfn(page)));
}
else
{
@@ -3209,7 +3209,7 @@ static enum hvm_translation_result __hvm_copy(
memcpy(p, buf, count);
else
memset(p, 0, count);
- paging_mark_dirty(v->domain, _mfn(page_to_mfn(page)));
+ paging_mark_dirty(v->domain, page_to_mfn(page));
}
}
else
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f2e0b3f74a..5bd5cd788e 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -268,7 +268,7 @@ static void hvm_remove_ioreq_gfn(
struct domain *d, struct hvm_ioreq_page *iorp)
{
if ( guest_physmap_remove_page(d, _gfn(iorp->gfn),
- _mfn(page_to_mfn(iorp->page)), 0) )
+ page_to_mfn(iorp->page), 0) )
domain_crash(d);
clear_page(iorp->va);
}
@@ -281,9 +281,9 @@ static int hvm_add_ioreq_gfn(
clear_page(iorp->va);
rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
- _mfn(page_to_mfn(iorp->page)), 0);
+ page_to_mfn(iorp->page), 0);
if ( rc == 0 )
- paging_mark_dirty(d, _mfn(page_to_mfn(iorp->page)));
+ paging_mark_dirty(d, page_to_mfn(iorp->page));
return rc;
}
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index 088fbdf8ce..925bab2438 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -590,7 +590,7 @@ void stdvga_init(struct domain *d)
if ( pg == NULL )
break;
s->vram_page[i] = pg;
- clear_domain_page(_mfn(page_to_mfn(pg)));
+ clear_domain_page(page_to_mfn(pg));
}
if ( i == ARRAY_SIZE(s->vram_page) )
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index b9cf423fd9..f50f931598 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1521,7 +1521,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
if ( !pg )
goto err;
- clear_domain_page(_mfn(page_to_mfn(pg)));
+ clear_domain_page(page_to_mfn(pg));
*this_hsa = page_to_maddr(pg);
}
@@ -1531,7 +1531,7 @@ static int svm_cpu_up_prepare(unsigned int cpu)
if ( !pg )
goto err;
- clear_domain_page(_mfn(page_to_mfn(pg)));
+ clear_domain_page(page_to_mfn(pg));
*this_vmcb = page_to_maddr(pg);
}
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index f0fa59d7d5..c5440863b1 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -354,7 +354,7 @@ static void enable_hypercall_page(struct domain *d)
if ( page )
put_page(page);
gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
- gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+ gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
return;
}
@@ -414,7 +414,7 @@ static void initialize_vp_assist(struct vcpu *v)
fail:
gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", gmfn,
- page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+ mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
}
static void teardown_vp_assist(struct vcpu *v)
@@ -427,7 +427,7 @@ static void teardown_vp_assist(struct vcpu *v)
v->arch.hvm_vcpu.viridian.vp_assist.va = NULL;
- page = mfn_to_page(domain_page_map_to_mfn(va));
+ page = mfn_to_page(_mfn(domain_page_map_to_mfn(va)));
unmap_domain_page_global(va);
put_page_and_type(page);
@@ -494,7 +494,7 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
if ( page )
put_page(page);
gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
- gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+ gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
return;
}
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index f62fe7e217..471d224539 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1441,7 +1441,7 @@ int vmx_vcpu_enable_pml(struct vcpu *v)
vmx_vmcs_enter(v);
- __vmwrite(PML_ADDRESS, page_to_mfn(v->arch.hvm_vmx.pml_pg) << PAGE_SHIFT);
+ __vmwrite(PML_ADDRESS, page_to_maddr(v->arch.hvm_vmx.pml_pg));
__vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
v->arch.hvm_vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_PML;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9cfa9b6965..40b91933bf 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2951,7 +2951,7 @@ gp_fault:
static int vmx_alloc_vlapic_mapping(struct domain *d)
{
struct page_info *pg;
- unsigned long mfn;
+ mfn_t mfn;
if ( !cpu_has_vmx_virtualize_apic_accesses )
return 0;
@@ -2960,10 +2960,10 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
if ( !pg )
return -ENOMEM;
mfn = page_to_mfn(pg);
- clear_domain_page(_mfn(mfn));
+ clear_domain_page(mfn);
share_xen_page_with_guest(pg, d, XENSHARE_writable);
- d->arch.hvm_domain.vmx.apic_access_mfn = mfn;
- set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), _mfn(mfn),
+ d->arch.hvm_domain.vmx.apic_access_mfn = mfn_x(mfn);
+ set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
PAGE_ORDER_4K, p2m_get_hostp2m(d)->default_access);
return 0;
@@ -2974,7 +2974,7 @@ static void vmx_free_vlapic_mapping(struct domain *d)
unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
if ( mfn != 0 )
- free_shared_domheap_page(mfn_to_page(mfn));
+ free_shared_domheap_page(mfn_to_page(_mfn(mfn)));
}
static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index cd0ee0a307..790a2285e5 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -84,7 +84,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
}
v->arch.hvm_vmx.vmread_bitmap = vmread_bitmap;
- clear_domain_page(_mfn(page_to_mfn(vmread_bitmap)));
+ clear_domain_page(page_to_mfn(vmread_bitmap));
vmwrite_bitmap = alloc_domheap_page(NULL, 0);
if ( !vmwrite_bitmap )
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d9df5ca69f..39038723ce 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -129,12 +129,6 @@
#include "pv/mm.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
/* Mapping of the fixmap space needed early. */
l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
l1_fixmap[L1_PAGETABLE_ENTRIES];
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 6055fec1ad..f67aeda3d0 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -469,20 +469,20 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
if ( l3p )
{
unmap_domain_page(l3p);
- put_page(mfn_to_page(mfn_x(gw->l3mfn)));
+ put_page(mfn_to_page(gw->l3mfn));
}
#endif
#if GUEST_PAGING_LEVELS >= 3
if ( l2p )
{
unmap_domain_page(l2p);
- put_page(mfn_to_page(mfn_x(gw->l2mfn)));
+ put_page(mfn_to_page(gw->l2mfn));
}
#endif
if ( l1p )
{
unmap_domain_page(l1p);
- put_page(mfn_to_page(mfn_x(gw->l1mfn)));
+ put_page(mfn_to_page(gw->l1mfn));
}
return walk_ok;
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index c550017ba4..cb3f9cebe7 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -83,7 +83,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
*pfec &= ~PFEC_page_present;
goto out_tweak_pfec;
}
- top_mfn = _mfn(page_to_mfn(top_page));
+ top_mfn = page_to_mfn(top_page);
/* Map the top-level table and call the tree-walker */
ASSERT(mfn_valid(top_mfn));
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index dc85e828cd..e45c1a1913 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -42,12 +42,6 @@
#include "private.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
/************************************************/
/* HAP VRAM TRACKING SUPPORT */
/************************************************/
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 14b1bb01e9..1738df69f6 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -173,7 +173,7 @@ nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
goto map_err;
gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
unmap_domain_page(lxp);
- put_page(mfn_to_page(mfn_x(lxmfn)));
+ put_page(mfn_to_page(lxmfn));
if ( nept_non_present_check(gw->lxe[lvl]) )
goto non_present;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index b856028c02..b799bdb77c 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -152,11 +152,6 @@ static inline shr_handle_t get_next_handle(void)
#define mem_sharing_enabled(d) \
(is_hvm_domain(d) && (d)->arch.hvm_domain.mem_sharing_enabled)
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
static atomic_t nr_saved_mfns = ATOMIC_INIT(0);
static atomic_t nr_shared_mfns = ATOMIC_INIT(0);
@@ -1185,8 +1180,8 @@ int __mem_sharing_unshare_page(struct domain *d,
return -ENOMEM;
}
- s = map_domain_page(_mfn(__page_to_mfn(old_page)));
- t = map_domain_page(_mfn(__page_to_mfn(page)));
+ s = map_domain_page(page_to_mfn(old_page));
+ t = map_domain_page(page_to_mfn(page));
memcpy(t, s, PAGE_SIZE);
unmap_domain_page(s);
unmap_domain_page(t);
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 054827aa88..24de202a1b 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -33,6 +33,10 @@
#include "mm-locks.h"
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
#define atomic_read_ept_entry(__pepte) \
( (ept_entry_t) { .epte = read_atomic(&(__pepte)->epte) } )
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 0a811ccf28..7a88074c31 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -29,12 +29,6 @@
#include "mm-locks.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
#define superpage_aligned(_x) (((_x)&(SUPERPAGE_PAGES-1))==0)
/* Enforce lock ordering when grabbing the "external" page_alloc lock */
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 3fbc537da6..2194b35bc7 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -47,12 +47,6 @@ bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
boolean_param("hap_1gb", opt_hap_1gb);
boolean_param("hap_2mb", opt_hap_2mb);
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
/* Init the datastructures for later use by the p2m code */
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 1e2c9ba4cc..cb97642cbc 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -47,12 +47,6 @@
/* Per-CPU variable for enforcing the lock ordering */
DEFINE_PER_CPU(int, mm_lock_level);
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
/************************************************/
/* LOG DIRTY SUPPORT */
/************************************************/
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 6a03370402..b9cc680f4e 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -315,7 +315,7 @@ static inline int page_is_out_of_sync(struct page_info *p)
static inline int mfn_is_out_of_sync(mfn_t gmfn)
{
- return page_is_out_of_sync(mfn_to_page(mfn_x(gmfn)));
+ return page_is_out_of_sync(mfn_to_page(gmfn));
}
static inline int page_oos_may_write(struct page_info *p)
@@ -326,7 +326,7 @@ static inline int page_oos_may_write(struct page_info *p)
static inline int mfn_oos_may_write(mfn_t gmfn)
{
- return page_oos_may_write(mfn_to_page(mfn_x(gmfn)));
+ return page_oos_may_write(mfn_to_page(gmfn));
}
#endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
@@ -465,18 +465,6 @@ void sh_reset_l3_up_pointers(struct vcpu *v);
* MFN/page-info handling
*/
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
-/* Override pagetable_t <-> struct page_info conversions to work with mfn_t */
-#undef pagetable_get_page
-#define pagetable_get_page(x) mfn_to_page(pagetable_get_mfn(x))
-#undef pagetable_from_page
-#define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
-
#define backpointer(sp) _mfn(pdx_to_pfn((unsigned long)(sp)->v.sh.back))
static inline unsigned long __backpointer(const struct page_info *sp)
{
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 4fc967f893..a87987da6f 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -430,7 +430,7 @@ static void dump_numa(unsigned char key)
spin_lock(&d->page_alloc_lock);
page_list_for_each(page, &d->page_list)
{
- i = phys_to_nid((paddr_t)page_to_mfn(page) << PAGE_SHIFT);
+ i = phys_to_nid(page_to_maddr(page));
page_num_node[i]++;
}
spin_unlock(&d->page_alloc_lock);
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 0eb409758f..ba950af4a8 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -241,7 +241,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
}
if ( cmpxchg(&currd->arch.pirq_eoi_map_mfn,
- 0, page_to_mfn(page)) != 0 )
+ 0, mfn_x(page_to_mfn(page))) != 0 )
{
put_page_and_type(page);
ret = -EBUSY;
diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 97d8438600..5957cb5085 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -31,12 +31,6 @@
#include <public/callback.h>
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
static int register_guest_nmi_callback(unsigned long address)
{
struct vcpu *curr = current;
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index 81973af124..371221a302 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -25,12 +25,6 @@
#include <asm/p2m.h>
#include <asm/pv/mm.h>
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
/*******************
* Descriptor Tables
*/
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index dcbee43e8f..e9a893ba47 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -22,6 +22,12 @@
#include "mm.h"
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
/* Allow ring-3 access in long mode as guest cannot use ring 1 ... */
#define BASE_PROT (_PAGE_PRESENT|_PAGE_RW|_PAGE_ACCESSED|_PAGE_USER)
#define L1_PROT (BASE_PROT|_PAGE_GUEST_KERNEL)
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 90d5569be1..4ca3205821 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -16,12 +16,6 @@
#include "mm.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
static void noreturn continue_nonidle_domain(struct vcpu *v)
{
check_wakeup_from_wait();
diff --git a/xen/arch/x86/pv/emul-gate-op.c b/xen/arch/x86/pv/emul-gate-op.c
index 0f89c91dff..5cdb54c937 100644
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -41,12 +41,6 @@
#include "emulate.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
static int read_gate_descriptor(unsigned int gate_sel,
const struct vcpu *v,
unsigned int *sel,
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index dd90713acf..9ccbd021ef 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -43,16 +43,6 @@
#include "emulate.h"
#include "mm.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
-/***********************
- * I/O emulation support
- */
-
struct priv_op_ctxt {
struct x86_emulate_ctxt ctxt;
struct {
diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c
index aaca228c6b..97323367c5 100644
--- a/xen/arch/x86/pv/grant_table.c
+++ b/xen/arch/x86/pv/grant_table.c
@@ -27,12 +27,6 @@
#include "mm.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
static unsigned int grant_to_pte_flags(unsigned int grant_flags,
unsigned int cache_flags)
{
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 6b2976d3df..a7b7eb5113 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -33,12 +33,6 @@
#include "emulate.h"
#include "mm.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
/*********************
* Writable Pagetables
*/
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 3ca716c59f..663966bc74 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -46,12 +46,6 @@
#include <mach_wakecpu.h>
#include <smpboot_hooks.h>
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
#define setup_trampoline() (bootsym_phys(trampoline_realmode_entry))
unsigned long __read_mostly trampoline_phys;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 59d7c477f4..e9522f06ec 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -184,7 +184,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
for ( mfn = 0; mfn < max_page; mfn++ )
{
- struct page_info *page = mfn_to_page(mfn);
+ struct page_info *page = mfn_to_page(_mfn(mfn));
if ( !mfn_valid(_mfn(mfn)) )
continue;
@@ -276,7 +276,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
vmac_set_key((uint8_t *)key, &ctx);
for ( mfn = 0; mfn < max_page; mfn++ )
{
- struct page_info *page = __mfn_to_page(mfn);
+ struct page_info *page = mfn_to_page(_mfn(mfn));
if ( !mfn_valid(_mfn(mfn)) )
continue;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 86506f3747..b85394d1f9 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -811,7 +811,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
gdprintk(XENLOG_WARNING,
"Bad GMFN %lx (MFN %lx) to MSR %08x\n",
- gmfn, page ? page_to_mfn(page) : -1UL, base);
+ gmfn, page ? mfn_x(page_to_mfn(page)) : -1UL, base);
return 0;
}
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 11746730b4..971ccfcbbe 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -40,6 +40,12 @@ asm(".file \"" __FILE__ "\"");
#include <asm/mem_sharing.h>
#include <public/memory.h>
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START;
l2_pgentry_t *compat_idle_pg_table_l2;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 5aebcf265f..e8302e8e1b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1192,7 +1192,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
}
v->vcpu_info = new_info;
- v->vcpu_info_mfn = _mfn(page_to_mfn(page));
+ v->vcpu_info_mfn = page_to_mfn(page);
/* Set new vcpu_info pointer /before/ setting pending flags. */
smp_wmb();
@@ -1225,7 +1225,7 @@ void unmap_vcpu_info(struct vcpu *v)
vcpu_info_reset(v); /* NB: Clobbers v->vcpu_info_mfn */
- put_page_and_type(mfn_to_page(mfn_x(mfn)));
+ put_page_and_type(mfn_to_page(mfn));
}
int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index c49f446754..f15adf0eb5 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -389,7 +389,7 @@ static void unmap_guest_page(void *virt)
return;
virt = (void *)((unsigned long)virt & PAGE_MASK);
- page = mfn_to_page(domain_page_map_to_mfn(virt));
+ page = mfn_to_page(_mfn(domain_page_map_to_mfn(virt)));
unmap_domain_page_global(virt);
put_page_and_type(page);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 6d20b17739..2afde596d9 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -40,6 +40,12 @@
#include <xsm/xsm.h>
#include <asm/flushtlb.h>
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
/* Per-domain grant information. */
struct grant_table {
/*
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index 07587896a4..93c7da5c20 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -76,7 +76,7 @@ static struct page_info *kimage_alloc_zeroed_page(unsigned memflags)
if ( !page )
return NULL;
- clear_domain_page(_mfn(page_to_mfn(page)));
+ clear_domain_page(page_to_mfn(page));
return page;
}
@@ -405,7 +405,7 @@ static struct page_info *kimage_alloc_crash_control_page(struct kexec_image *ima
if ( page )
{
image->next_crash_page = hole_end;
- clear_domain_page(_mfn(page_to_mfn(page)));
+ clear_domain_page(page_to_mfn(page));
}
return page;
@@ -641,7 +641,7 @@ static struct page_info *kimage_alloc_page(struct kexec_image *image,
*old = (addr & ~PAGE_MASK) | IND_SOURCE;
unmap_domain_page(old);
- page = mfn_to_page(mfn_x(old_mfn));
+ page = mfn_to_page(old_mfn);
break;
}
else
@@ -873,22 +873,22 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
for ( entry = page; ; )
{
unsigned long ind;
- unsigned long mfn;
+ mfn_t mfn;
ind = kimage_entry_ind(entry, compat);
- mfn = kimage_entry_mfn(entry, compat);
+ mfn = _mfn(kimage_entry_mfn(entry, compat));
switch ( ind )
{
case IND_DESTINATION:
- dest = (paddr_t)mfn << PAGE_SHIFT;
+ dest = mfn_to_maddr(mfn);
ret = kimage_set_destination(image, dest);
if ( ret < 0 )
goto done;
break;
case IND_INDIRECTION:
unmap_domain_page(page);
- page = map_domain_page(_mfn(mfn));
+ page = map_domain_page(mfn);
entry = page;
continue;
case IND_DONE:
@@ -913,7 +913,7 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
goto done;
}
- copy_domain_page(_mfn(page_to_mfn(xen_page)), _mfn(mfn));
+ copy_domain_page(page_to_mfn(xen_page), mfn);
put_page(guest_page);
ret = kimage_add_page(image, page_to_maddr(xen_page));
diff --git a/xen/common/memory.c b/xen/common/memory.c
index ad987e0f29..e467f271c7 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -29,6 +29,12 @@
#include <public/memory.h>
#include <xsm/xsm.h>
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
struct memop_args {
/* INPUT */
struct domain *domain; /* Domain to be affected. */
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 472c6fe329..5e7d74e274 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -150,6 +150,12 @@
#define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
#endif
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
/*
* Comma-separated list of hexadecimal page numbers containing bad bytes.
* e.g. 'badpage=0x3f45,0x8a321'.
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index c955cf7167..1adb96f00c 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -243,7 +243,7 @@ static void tmem_persistent_pool_page_put(void *page_va)
struct page_info *pi;
ASSERT(IS_PAGE_ALIGNED(page_va));
- pi = mfn_to_page(virt_to_mfn(page_va));
+ pi = mfn_to_page(_mfn(virt_to_mfn(page_va)));
ASSERT(IS_VALID_PAGE(pi));
__tmem_free_page_thispool(pi);
}
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 8dc031514a..9131fd9d79 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -14,10 +14,6 @@
#include <xen/cpu.h>
#include <xen/init.h>
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
bool __read_mostly opt_tmem;
boolean_param("tmem", opt_tmem);
diff --git a/xen/common/trace.c b/xen/common/trace.c
index 2e18702317..cf8f8b0997 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -42,6 +42,12 @@ CHECK_t_buf;
#define compat_t_rec t_rec
#endif
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
/* opt_tbuf_size: trace buffer size (in pages) for each cpu */
static unsigned int opt_tbuf_size;
static unsigned int opt_tevt_mask;
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 0b23f8fb97..10f32b29e0 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -36,7 +36,7 @@ void __init vm_init_type(enum vmap_region type, void *start, void *end)
{
struct page_info *pg = alloc_domheap_page(NULL, 0);
- map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
+ map_pages_to_xen(va, mfn_x(page_to_mfn(pg)), 1, PAGE_HYPERVISOR);
clear_page((void *)va);
}
bitmap_fill(vm_bitmap(type), vm_low[type]);
@@ -107,7 +107,8 @@ static void *vm_alloc(unsigned int nr, unsigned int align,
{
unsigned long va = (unsigned long)vm_bitmap(t) + vm_top[t] / 8;
- if ( !map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR) )
+ if ( !map_pages_to_xen(va, mfn_x(page_to_mfn(pg)),
+ 1, PAGE_HYPERVISOR) )
{
clear_page((void *)va);
vm_top[t] += PAGE_SIZE * 8;
@@ -258,7 +259,7 @@ static void *vmalloc_type(size_t size, enum vmap_region type)
pg = alloc_domheap_page(NULL, 0);
if ( pg == NULL )
goto error;
- mfn[i] = _mfn(page_to_mfn(pg));
+ mfn[i] = page_to_mfn(pg);
}
va = __vmap(mfn, 1, pages, 1, PAGE_HYPERVISOR, type);
@@ -270,7 +271,7 @@ static void *vmalloc_type(size_t size, enum vmap_region type)
error:
while ( i-- )
- free_domheap_page(mfn_to_page(mfn_x(mfn[i])));
+ free_domheap_page(mfn_to_page(mfn[i]));
xfree(mfn);
return NULL;
}
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index 98937c9ac6..1f547bca52 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -22,8 +22,6 @@
/* Override macros from asm/page.h to make them work with mfn_t */
#undef virt_to_mfn
#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
/* Limit amount of pages used for shared buffer (per domain) */
#define MAX_OPROF_SHARED_PAGES 32
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index fd2327d3e5..bd62c2ce90 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -25,6 +25,12 @@
#include "../ats.h"
#include <xen/pci.h>
+/* Override macros from asm/page.h to avoid using typesafe mfn_t. */
+#undef page_to_mfn
+#define page_to_mfn(pg) mfn_x(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(_mfn(mfn))
+
/* Given pfn and page table level, return pde index */
static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
{
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1aecf7cf34..2c44fabf99 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -184,7 +184,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
page_list_for_each ( page, &d->page_list )
{
- unsigned long mfn = page_to_mfn(page);
+ unsigned long mfn = mfn_x(page_to_mfn(page));
unsigned long gfn = mfn_to_gmfn(d, mfn);
unsigned int mapping = IOMMUF_readable;
int ret;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 0253823173..68182afd91 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -58,7 +58,7 @@ int arch_iommu_populate_page_table(struct domain *d)
if ( is_hvm_domain(d) ||
(page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
{
- unsigned long mfn = page_to_mfn(page);
+ unsigned long mfn = mfn_x(page_to_mfn(page));
unsigned long gfn = mfn_to_gmfn(d, mfn);
if ( gfn != gfn_x(INVALID_GFN) )
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 737a429409..3eb4b68761 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -138,7 +138,7 @@ extern vaddr_t xenheap_virt_start;
#endif
#ifdef CONFIG_ARM_32
-#define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
+#define is_xen_heap_page(page) is_xen_heap_mfn(mfn_x(__page_to_mfn(page)))
#define is_xen_heap_mfn(mfn) ({ \
unsigned long mfn_ = (mfn); \
(mfn_ >= mfn_x(xenheap_mfn_start) && \
@@ -220,12 +220,14 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
})
/* Convert between machine frame numbers and page-info structures. */
-#define __mfn_to_page(mfn) (frame_table + (pfn_to_pdx(mfn) - frametable_base_pdx))
-#define __page_to_mfn(pg) pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
+#define __mfn_to_page(mfn) \
+ (frame_table + (pfn_to_pdx(mfn_x(mfn)) - frametable_base_pdx))
+#define __page_to_mfn(pg) \
+ _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx))
/* Convert between machine addresses and page-info structures. */
-#define maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
-#define page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
+#define maddr_to_page(ma) __mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg) (mfn_to_maddr(__page_to_mfn(pg)))
/* Convert between frame number and address formats. */
#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
@@ -235,7 +237,7 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
#define gaddr_to_gfn(ga) _gfn(paddr_to_pfn(ga))
#define mfn_to_maddr(mfn) pfn_to_paddr(mfn_x(mfn))
#define maddr_to_mfn(ma) _mfn(paddr_to_pfn(ma))
-#define vmap_to_mfn(va) paddr_to_pfn(virt_to_maddr((vaddr_t)va))
+#define vmap_to_mfn(va) maddr_to_mfn(virt_to_maddr((vaddr_t)va))
#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
/* Page-align address and convert to frame number format */
@@ -309,7 +311,7 @@ static inline struct page_info *virt_to_page(const void *v)
static inline void *page_to_virt(const struct page_info *pg)
{
- return mfn_to_virt(page_to_mfn(pg));
+ return mfn_to_virt(mfn_x(__page_to_mfn(pg)));
}
struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index faadcfe8fe..87c9994974 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -276,7 +276,7 @@ static inline struct page_info *get_page_from_gfn(
{
struct page_info *page;
p2m_type_t p2mt;
- unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
+ mfn_t mfn = p2m_lookup(d, _gfn(gfn), &p2mt);
if (t)
*t = p2mt;
@@ -284,7 +284,7 @@ static inline struct page_info *get_page_from_gfn(
if ( !p2m_is_any_ram(p2mt) )
return NULL;
- if ( !mfn_valid(_mfn(mfn)) )
+ if ( !mfn_valid(mfn) )
return NULL;
page = mfn_to_page(mfn);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index f2e0f498c4..984f54c3fa 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -259,7 +259,7 @@ struct page_info
#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
#define is_xen_heap_mfn(mfn) \
- (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
+ (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(_mfn(mfn))))
#define is_xen_fixed_mfn(mfn) \
((((mfn) << PAGE_SHIFT) >= __pa(&_stext)) && \
(((mfn) << PAGE_SHIFT) <= __pa(&__2M_rwdata_end)))
@@ -369,7 +369,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner);
static inline bool get_page_from_mfn(mfn_t mfn, struct domain *d)
{
- struct page_info *page = __mfn_to_page(mfn_x(mfn));
+ struct page_info *page = __mfn_to_page(mfn);
if ( unlikely(!mfn_valid(mfn)) || unlikely(!get_page(page, d)) )
{
@@ -463,10 +463,10 @@ extern paddr_t mem_hotplug;
#define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY)
#define compat_machine_to_phys_mapping ((unsigned int *)RDWR_COMPAT_MPT_VIRT_START)
-#define _set_gpfn_from_mfn(mfn, pfn) ({ \
- struct domain *d = page_get_owner(__mfn_to_page(mfn)); \
- unsigned long entry = (d && (d == dom_cow)) ? \
- SHARED_M2P_ENTRY : (pfn); \
+#define _set_gpfn_from_mfn(mfn, pfn) ({ \
+ struct domain *d = page_get_owner(__mfn_to_page(_mfn(mfn))); \
+ unsigned long entry = (d && (d == dom_cow)) ? \
+ SHARED_M2P_ENTRY : (pfn); \
((void)((mfn) >= (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 || \
(compat_machine_to_phys_mapping[(mfn)] = (unsigned int)(entry))), \
machine_to_phys_mapping[(mfn)] = (entry)); \
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 70f00c332f..18eac537c9 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -480,7 +480,7 @@ static inline struct page_info *get_page_from_gfn(
/* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
if ( t )
*t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct;
- page = __mfn_to_page(gfn);
+ page = __mfn_to_page(_mfn(gfn));
return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
}
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 45ca742678..8737ef16ff 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -88,10 +88,10 @@
((paddr_t)(((x).l4 & (PADDR_MASK&PAGE_MASK))))
/* Get pointer to info structure of page mapped by pte (struct page_info *). */
-#define l1e_get_page(x) (__mfn_to_page(l1e_get_pfn(x)))
-#define l2e_get_page(x) (__mfn_to_page(l2e_get_pfn(x)))
-#define l3e_get_page(x) (__mfn_to_page(l3e_get_pfn(x)))
-#define l4e_get_page(x) (__mfn_to_page(l4e_get_pfn(x)))
+#define l1e_get_page(x) (__mfn_to_page(l1e_get_mfn(x)))
+#define l2e_get_page(x) (__mfn_to_page(l2e_get_mfn(x)))
+#define l3e_get_page(x) (__mfn_to_page(l3e_get_mfn(x)))
+#define l4e_get_page(x) (__mfn_to_page(l4e_get_mfn(x)))
/* Get pte access flags (unsigned int). */
#define l1e_get_flags(x) (get_pte_flags((x).l1))
@@ -157,10 +157,10 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
#define l4e_from_intpte(intpte) ((l4_pgentry_t) { (intpte_t)(intpte) })
/* Construct a pte from a page pointer and access flags. */
-#define l1e_from_page(page, flags) l1e_from_pfn(__page_to_mfn(page), (flags))
-#define l2e_from_page(page, flags) l2e_from_pfn(__page_to_mfn(page), (flags))
-#define l3e_from_page(page, flags) l3e_from_pfn(__page_to_mfn(page), (flags))
-#define l4e_from_page(page, flags) l4e_from_pfn(__page_to_mfn(page), (flags))
+#define l1e_from_page(page, flags) l1e_from_mfn(__page_to_mfn(page), (flags))
+#define l2e_from_page(page, flags) l2e_from_mfn(__page_to_mfn(page), (flags))
+#define l3e_from_page(page, flags) l3e_from_mfn(__page_to_mfn(page), (flags))
+#define l4e_from_page(page, flags) l4e_from_mfn(__page_to_mfn(page), (flags))
/* Add extra flags to an existing pte. */
#define l1e_add_flags(x, flags) ((x).l1 |= put_pte_flags(flags))
@@ -215,13 +215,13 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
/* Page-table type. */
typedef struct { u64 pfn; } pagetable_t;
#define pagetable_get_paddr(x) ((paddr_t)(x).pfn << PAGE_SHIFT)
-#define pagetable_get_page(x) __mfn_to_page((x).pfn)
+#define pagetable_get_page(x) __mfn_to_page(pagetable_get_mfn(x))
#define pagetable_get_pfn(x) ((x).pfn)
#define pagetable_get_mfn(x) _mfn(((x).pfn))
#define pagetable_is_null(x) ((x).pfn == 0)
#define pagetable_from_pfn(pfn) ((pagetable_t) { (pfn) })
#define pagetable_from_mfn(mfn) ((pagetable_t) { mfn_x(mfn) })
-#define pagetable_from_page(pg) pagetable_from_pfn(__page_to_mfn(pg))
+#define pagetable_from_page(pg) pagetable_from_mfn(__page_to_mfn(pg))
#define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT)
#define pagetable_null() pagetable_from_pfn(0)
@@ -240,12 +240,12 @@ void copy_page_sse2(void *, const void *);
#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
/* Convert between machine frame numbers and page-info structures. */
-#define __mfn_to_page(mfn) (frame_table + pfn_to_pdx(mfn))
-#define __page_to_mfn(pg) pdx_to_pfn((unsigned long)((pg) - frame_table))
+#define __mfn_to_page(mfn) (frame_table + pfn_to_pdx(mfn_x(mfn)))
+#define __page_to_mfn(pg) _mfn(pdx_to_pfn((unsigned long)((pg) - frame_table)))
/* Convert between machine addresses and page-info structures. */
-#define __maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
-#define __page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
+#define __maddr_to_page(ma) __mfn_to_page(maddr_to_mfn(ma))
+#define __page_to_maddr(pg) (mfn_to_maddr(__page_to_mfn(pg)))
/* Convert between frame number and address formats. */
#define __pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
@@ -273,8 +273,8 @@ void copy_page_sse2(void *, const void *);
#define pfn_to_paddr(pfn) __pfn_to_paddr(pfn)
#define paddr_to_pfn(pa) __paddr_to_pfn(pa)
#define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va) l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
-#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
+#define vmap_to_mfn(va) _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
+#define vmap_to_page(va) __mfn_to_page(vmap_to_mfn(va))
#endif /* !defined(__ASSEMBLY__) */
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index 890bae5b9c..5d1c34528e 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -44,11 +44,11 @@ unsigned long domain_page_map_to_mfn(const void *va);
void *map_domain_page_global(mfn_t mfn);
void unmap_domain_page_global(const void *va);
-#define __map_domain_page(pg) map_domain_page(_mfn(__page_to_mfn(pg)))
+#define __map_domain_page(pg) map_domain_page(__page_to_mfn(pg))
static inline void *__map_domain_page_global(const struct page_info *pg)
{
- return map_domain_page_global(_mfn(__page_to_mfn(pg)));
+ return map_domain_page_global(page_to_mfn(pg));
}
#else /* !CONFIG_DOMAIN_PAGE */
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 542c0b3f20..8516a0b131 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -25,7 +25,7 @@
typedef uint32_t pagesize_t; /* like size_t, must handle largest PAGE_SIZE */
#define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
-#define IS_VALID_PAGE(_pi) mfn_valid(_mfn(page_to_mfn(_pi)))
+#define IS_VALID_PAGE(_pi) mfn_valid(page_to_mfn(_pi))
extern struct page_list_head tmem_page_list;
extern spinlock_t tmem_page_list_lock;
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
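[Editorial aside: the typesafe mfn_t that this series threads through __page_to_mfn()/__mfn_to_page() follows Xen's struct-wrapper idiom. Below is a minimal standalone sketch of the shape, hand-written for illustration — Xen's real mfn_t/gfn_t come from its TYPE_SAFE() machinery, not these definitions.]

```c
#include <assert.h>

/*
 * Standalone sketch of the typesafe frame-number idiom this series
 * converts __page_to_mfn()/__mfn_to_page() to.  Xen generates mfn_t,
 * gfn_t, etc. with its TYPE_SAFE() macro; the hand-written versions
 * below only mirror the shape and are not Xen code.
 */
typedef struct { unsigned long m; } mfn_t;      /* machine frame number */
typedef struct { unsigned long g; } gfn_t;      /* guest frame number */

static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
static inline unsigned long mfn_x(mfn_t mfn) { return mfn.m; }

static inline gfn_t _gfn(unsigned long g) { return (gfn_t){ g }; }
static inline unsigned long gfn_x(gfn_t gfn) { return gfn.g; }

/*
 * The payoff: with plain unsigned longs, callers could silently mix
 * guest and machine frame numbers, whereas distinct struct types make
 *     gfn_t gfn = _mfn(42);
 * a compile-time error.  The _mfn()/mfn_x() boilerplate the cover
 * letter wants to remove is the cost of crossing that boundary.
 */
```

The per-file `#undef page_to_mfn` overrides seen throughout the patch are the transitional form of this: files that still want raw `unsigned long` frame numbers unwrap locally with `mfn_x()` until they are converted.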
* Re: [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
2017-10-04 18:15 ` [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
@ 2017-10-04 19:38 ` Razvan Cojocaru
2017-10-04 22:10 ` Tamas K Lengyel
2017-10-04 23:27 ` Andrew Cooper
2 siblings, 0 replies; 20+ messages in thread
From: Razvan Cojocaru @ 2017-10-04 19:38 UTC (permalink / raw)
To: Julien Grall, xen-devel
Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich,
Suravee Suthikulpanit, Shane Wang, Boris Ostrovsky, Gang Wei,
Paul Durrant
On 10/04/2017 09:15 PM, Julien Grall wrote:
> Most of the users of page_to_mfn and mfn_to_page are either overriding
> the macros to make them work with mfn_t or use mfn_x/_mfn because the
> rest of the function uses mfn_t.
>
> So make __page_to_mfn and __mfn_to_page return mfn_t by default.
>
> Only reasonable clean-ups are done in this patch because it is
> already quite big. So some of the files now override page_to_mfn and
> mfn_to_page to avoid using mfn_t.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Thanks,
Razvan
* Re: [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
2017-10-04 18:15 ` [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
2017-10-04 19:38 ` Razvan Cojocaru
@ 2017-10-04 22:10 ` Tamas K Lengyel
2017-10-04 23:27 ` Andrew Cooper
2 siblings, 0 replies; 20+ messages in thread
From: Tamas K Lengyel @ 2017-10-04 22:10 UTC (permalink / raw)
To: Julien Grall
Cc: Jun Nakajima, Tim Deegan, Kevin Tian, Stefano Stabellini,
Wei Liu, Suravee Suthikulpanit, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Xen-devel, Julien Grall, Paul Durrant, Jan Beulich, Shane Wang,
Boris Ostrovsky, Gang Wei
On Wed, Oct 4, 2017 at 12:15 PM, Julien Grall <julien.grall@linaro.org> wrote:
> Most of the users of page_to_mfn and mfn_to_page are either overriding
> the macros to make them work with mfn_t or use mfn_x/_mfn because the
> rest of the function uses mfn_t.
>
> So make __page_to_mfn and __mfn_to_page return mfn_t by default.
>
> Only reasonable clean-ups are done in this patch because it is
> already quite big. So some of the files now override page_to_mfn and
> mfn_to_page to avoid using mfn_t.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
* Re: [PATCH 1/7] xen/arm: domain_build: Clean-up insert_11_bank
2017-10-04 18:15 ` [PATCH 1/7] xen/arm: domain_build: Clean-up insert_11_bank Julien Grall
@ 2017-10-04 22:39 ` Andrew Cooper
2017-10-05 11:08 ` Julien Grall
0 siblings, 1 reply; 20+ messages in thread
From: Andrew Cooper @ 2017-10-04 22:39 UTC (permalink / raw)
To: Julien Grall, xen-devel; +Cc: Stefano Stabellini
On 04/10/2017 19:15, Julien Grall wrote:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 3723dc3f78..093ebf1a8e 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -109,11 +109,11 @@ static bool insert_11_bank(struct domain *d,
>
> spfn = page_to_mfn(pg);
> start = pfn_to_paddr(spfn);
> - size = pfn_to_paddr((1 << order));
> + size = pfn_to_paddr(1UL << order);
>
> D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
> start, start + size,
> - 1UL << (order+PAGE_SHIFT-20),
> + 1UL << (order + PAGE_SHIFT - 20),
If you are looking to be picky, you've got a double space between the
minus and the 20. I'm sure this would be trivial to fix on commit.
~Andrew
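[Editorial aside: the `(1 << order)` → `(1UL << order)` change under discussion matters because the literal `1` is an `int`, so the shift happens in 32-bit arithmetic and any `order >= 31` is undefined behaviour before `pfn_to_paddr()` ever widens the value. A standalone sketch, assuming an LP64 build; `pfn_to_paddr()` is taken from the asm-arm/mm.h context quoted later in the thread, the helper name is hypothetical.]

```c
#include <stdint.h>

/* As in xen/include/asm-arm/mm.h. */
typedef uint64_t paddr_t;
#define PAGE_SHIFT 12
#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)

/*
 * With 1UL the shift is performed in unsigned long (64-bit on an LP64
 * build), so a large order still yields the intended byte count.  The
 * original (1 << order) would overflow int for order >= 31.
 */
static paddr_t order_to_size(unsigned int order)
{
    return pfn_to_paddr(1UL << order);
}
```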
* Re: [PATCH 3/7] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT
2017-10-04 18:15 ` [PATCH 3/7] xen/x86: Use maddr_to_page and maddr_to_mfn to avoid open-coded >> PAGE_SHIFT Julien Grall
@ 2017-10-04 22:41 ` Andrew Cooper
0 siblings, 0 replies; 20+ messages in thread
From: Andrew Cooper @ 2017-10-04 22:41 UTC (permalink / raw)
To: Julien Grall, xen-devel
Cc: Elena Ufimtseva, George Dunlap, Tim Deegan, Jan Beulich
On 04/10/2017 19:15, Julien Grall wrote:
> The constructions _mfn(... >> PAGE_SHIFT) and mfn_to_page(... >> PAGE_SHIFT)
> could respectively be replaced by maddr_to_mfn(...) and
> maddr_to_page(...).
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* Re: [PATCH 4/7] xen/kimage: Remove defined but unused variables
2017-10-04 18:15 ` [PATCH 4/7] xen/kimage: Remove defined but unused variables Julien Grall
@ 2017-10-04 22:42 ` Andrew Cooper
0 siblings, 0 replies; 20+ messages in thread
From: Andrew Cooper @ 2017-10-04 22:42 UTC (permalink / raw)
To: Julien Grall, xen-devel
On 04/10/2017 19:15, Julien Grall wrote:
> In the function kimage_alloc_normal_control_page, the variables mfn and
> emfn are defined but not used. Remove them.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
Oops.
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* Re: [PATCH 5/7] xen/xenoprof: Convert the file to use typesafe MFN
2017-10-04 18:15 ` [PATCH 5/7] xen/xenoprof: Convert the file to use typesafe MFN Julien Grall
@ 2017-10-04 22:43 ` Andrew Cooper
0 siblings, 0 replies; 20+ messages in thread
From: Andrew Cooper @ 2017-10-04 22:43 UTC (permalink / raw)
To: Julien Grall, xen-devel
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
George Dunlap, Ian Jackson, Tim Deegan, Jan Beulich
On 04/10/2017 19:15, Julien Grall wrote:
> @@ -134,25 +140,26 @@ static void xenoprof_reset_buf(struct domain *d)
> }
>
> static int
> -share_xenoprof_page_with_guest(struct domain *d, unsigned long mfn, int npages)
> +share_xenoprof_page_with_guest(struct domain *d, mfn_t mfn, int npages)
> {
> int i;
>
> /* Check if previous page owner has released the page. */
> for ( i = 0; i < npages; i++ )
> {
> - struct page_info *page = mfn_to_page(mfn + i);
> + struct page_info *page = mfn_to_page(mfn_add(mfn, i));
A newline would be nice here...
> if ( (page->count_info & (PGC_allocated|PGC_count_mask)) != 0 )
> {
> printk(XENLOG_G_INFO "dom%d mfn %#lx page->count_info %#lx\n",
> - d->domain_id, mfn + i, page->count_info);
> + d->domain_id, mfn_x(mfn_add(mfn, i)), page->count_info);
> return -EBUSY;
> }
> page_set_owner(page, NULL);
> }
>
> for ( i = 0; i < npages; i++ )
> - share_xen_page_with_guest(mfn_to_page(mfn + i), d, XENSHARE_writable);
> + share_xen_page_with_guest(mfn_to_page(mfn_add(mfn, i)),
> + d, XENSHARE_writable);
>
> return 0;
> }
> @@ -161,11 +168,11 @@ static void
> unshare_xenoprof_page_with_guest(struct xenoprof *x)
> {
> int i, npages = x->npages;
> - unsigned long mfn = virt_to_mfn(x->rawbuf);
> + mfn_t mfn = virt_to_mfn(x->rawbuf);
>
> for ( i = 0; i < npages; i++ )
> {
> - struct page_info *page = mfn_to_page(mfn + i);
> + struct page_info *page = mfn_to_page(mfn_add(mfn, i));
... and here. This can easily be fixed on commit, so Reviewed-by:
Andrew Cooper <andrew.cooper3@citrix.com>
> BUG_ON(page_get_owner(page) != current->domain);
> if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
> put_page(page);
* Re: [PATCH 6/7] xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
2017-10-04 18:15 ` [PATCH 6/7] xen/tmem: Convert the file common/tmem_xen.c " Julien Grall
@ 2017-10-04 22:46 ` Andrew Cooper
0 siblings, 0 replies; 20+ messages in thread
From: Andrew Cooper @ 2017-10-04 22:46 UTC (permalink / raw)
To: Julien Grall, xen-devel; +Cc: Konrad Rzeszutek Wilk
On 04/10/2017 19:15, Julien Grall wrote:
> @@ -68,16 +72,16 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
>
> *pcli_mfn = page_to_mfn(page);
> *pcli_pfp = page;
Newline.
> - return map_domain_page(_mfn(*pcli_mfn));
> + return map_domain_page(*pcli_mfn);
> }
>
> static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
> - unsigned long cli_mfn, bool mark_dirty)
> + mfn_t cli_mfn, bool mark_dirty)
> {
> if ( mark_dirty )
> {
> put_page_and_type(cli_pfp);
> - paging_mark_dirty(current->domain, _mfn(cli_mfn));
> + paging_mark_dirty(current->domain, cli_mfn);
> }
> else
> put_page(cli_pfp);
> @@ -165,7 +169,7 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
> return -EFAULT;
> }
> tmem_mfn = page_to_mfn(pfp);
> - tmem_va = map_domain_page(_mfn(tmem_mfn));
> + tmem_va = map_domain_page(tmem_mfn);
Newline.
Otherwise, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> if ( cli_va )
> {
> memcpy(cli_va, tmem_va, PAGE_SIZE);
>
* Re: [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
2017-10-04 18:15 ` [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
2017-10-04 19:38 ` Razvan Cojocaru
2017-10-04 22:10 ` Tamas K Lengyel
@ 2017-10-04 23:27 ` Andrew Cooper
2017-10-05 9:34 ` Jan Beulich
2017-10-05 14:54 ` Julien Grall
2 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2017-10-04 23:27 UTC (permalink / raw)
To: Julien Grall, xen-devel
Cc: Jun Nakajima, Kevin Tian, Stefano Stabellini, Wei Liu,
Suravee Suthikulpanit, Razvan Cojocaru, Konrad Rzeszutek Wilk,
George Dunlap, Tim Deegan, Ian Jackson, Julien Grall,
Tamas K Lengyel, Jan Beulich, Shane Wang, Boris Ostrovsky,
Gang Wei, Paul Durrant
On 04/10/2017 19:15, Julien Grall wrote:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 093ebf1a8e..0753d03aac 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -104,11 +104,11 @@ static bool insert_11_bank(struct domain *d,
> unsigned int order)
> {
> int res, i;
> - paddr_t spfn;
> + mfn_t smfn;
> paddr_t start, size;
>
> - spfn = page_to_mfn(pg);
> - start = pfn_to_paddr(spfn);
> + smfn = page_to_mfn(pg);
> + start = mfn_to_maddr(smfn);
> size = pfn_to_paddr(1UL << order);
Wouldn't it be cleaner to move this renaming into patch 1, along with an
extra set of undef/override, to be taken out here? (perhaps not given
the rework effort?)
>
> D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
> @@ -678,8 +678,8 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
>
> if ( xenpmu_data )
> {
> - mfn = domain_page_map_to_mfn(xenpmu_data);
> - ASSERT(mfn_valid(_mfn(mfn)));
> + mfn = _mfn(domain_page_map_to_mfn(xenpmu_data));
Seeing as you convert every(?) call to domain_page_map_to_mfn(), it
would be cleaner to change the return type while making the change.
I'd be happy for such a change being folded into this patch, because
doing so would be by far the least disruptive way of making the change.
> + ASSERT(mfn_valid(mfn));
> unmap_domain_page_global(xenpmu_data);
> put_page_and_type(mfn_to_page(mfn));
> }
> @@ -1185,8 +1180,8 @@ int __mem_sharing_unshare_page(struct domain *d,
> return -ENOMEM;
> }
>
> - s = map_domain_page(_mfn(__page_to_mfn(old_page)));
> - t = map_domain_page(_mfn(__page_to_mfn(page)));
> + s = map_domain_page(page_to_mfn(old_page));
> + t = map_domain_page(page_to_mfn(page));
> memcpy(t, s, PAGE_SIZE);
> unmap_domain_page(s);
> unmap_domain_page(t);
This whole lot could turn into copy_domain_page()
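[Editorial aside: a standalone mock of the refactor being suggested — the map/map/memcpy/unmap sequence above is exactly the body of a copy_domain_page()-style helper. The "mapping" here is plain array indexing for illustration; the real map_domain_page() inserts a frame into the hypervisor's address space.]

```c
#include <string.h>

#define MOCK_PAGE_SIZE 4096

/* Two mock page frames standing in for domheap pages. */
static unsigned char pages[2][MOCK_PAGE_SIZE];

/* Mock mapping primitives: identify a "frame" by index. */
static void *map_domain_page_mock(int mfn) { return pages[mfn]; }
static void unmap_domain_page_mock(void *va) { (void)va; }

/*
 * The helper wraps the open-coded sequence from the quoted hunk:
 * map source, map destination, copy a page, unmap both.
 */
static void copy_domain_page_mock(int dst, int src)
{
    void *s = map_domain_page_mock(src);
    void *t = map_domain_page_mock(dst);

    memcpy(t, s, MOCK_PAGE_SIZE);
    unmap_domain_page_mock(s);
    unmap_domain_page_mock(t);
}
```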
> diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
> index 81973af124..371221a302 100644
> --- a/xen/arch/x86/pv/descriptor-tables.c
> +++ b/xen/arch/x86/pv/descriptor-tables.c
> @@ -25,12 +25,6 @@
> #include <asm/p2m.h>
> #include <asm/pv/mm.h>
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> /*******************
> * Descriptor Tables
> */
If you're making this change, please take out the Descriptor Tables
comment like you do with I/O below, because the entire file is dedicated
to descriptor table support and it will save me one item on a cleanup
patch :).
> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> index dd90713acf..9ccbd021ef 100644
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -43,16 +43,6 @@
> #include "emulate.h"
> #include "mm.h"
>
> -/* Override macros from asm/page.h to make them work with mfn_t */
> -#undef mfn_to_page
> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
> -#undef page_to_mfn
> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
> -
> -/***********************
> - * I/O emulation support
> - */
> -
> struct priv_op_ctxt {
> struct x86_emulate_ctxt ctxt;
> struct {
> @@ -873,22 +873,22 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
> for ( entry = page; ; )
> {
> unsigned long ind;
> - unsigned long mfn;
> + mfn_t mfn;
>
> ind = kimage_entry_ind(entry, compat);
> - mfn = kimage_entry_mfn(entry, compat);
> + mfn = _mfn(kimage_entry_mfn(entry, compat));
Again, modify the return type of kimage_entry_mfn() ?
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 45ca742678..8737ef16ff 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -273,8 +273,8 @@ void copy_page_sse2(void *, const void *);
> #define pfn_to_paddr(pfn) __pfn_to_paddr(pfn)
> #define paddr_to_pfn(pa) __paddr_to_pfn(pa)
> #define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa))
> -#define vmap_to_mfn(va) l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
> -#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
> +#define vmap_to_mfn(va) _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
> +#define vmap_to_page(va) __mfn_to_page(vmap_to_mfn(va))
>
> #endif /* !defined(__ASSEMBLY__) */
>
> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
> index 542c0b3f20..8516a0b131 100644
> --- a/xen/include/xen/tmem_xen.h
> +++ b/xen/include/xen/tmem_xen.h
> @@ -25,7 +25,7 @@
> typedef uint32_t pagesize_t; /* like size_t, must handle largest PAGE_SIZE */
>
> #define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
> -#define IS_VALID_PAGE(_pi) mfn_valid(_mfn(page_to_mfn(_pi)))
> +#define IS_VALID_PAGE(_pi) mfn_valid(page_to_mfn(_pi))
/sigh This is tautological. The definition of a "valid mfn" in this
case is one for which we have frametable entry, and by having a struct
page_info in our hands, this is by definition true (unless you have a
wild pointer, at which point your bug is elsewhere).
IS_VALID_PAGE() is only ever used in assertions and never usefully, so
instead I would remove it entirely rather than trying to fix it up.
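[Editorial aside: the tautology can be seen in a toy model of the frame table — hypothetical code, ignoring Xen's pdx compression and frame-table holes. Pages are entries of one array, page_to_mfn() is pointer arithmetic on it, and mfn_valid() range-checks the result, so any non-wild page pointer passes by construction.]

```c
#include <stdbool.h>

struct page_info { unsigned long count_info; };

#define MAX_MFN 128
static struct page_info frame_table[MAX_MFN];

/* Pointer arithmetic on the frame table, as in __page_to_mfn(). */
static unsigned long page_to_mfn_mock(const struct page_info *pg)
{
    return (unsigned long)(pg - frame_table);
}

/* A "valid mfn" is one with a frame-table entry, as in mfn_valid(). */
static bool mfn_valid_mock(unsigned long mfn)
{
    return mfn < MAX_MFN;
}

/*
 * For every pg inside frame_table, mfn_valid_mock(page_to_mfn_mock(pg))
 * is trivially true — which is the point being made about
 * IS_VALID_PAGE() above.
 */
```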
As for TMEM itself (Julien: This by no means blocks the patch. It is
more an observation for Konrad to see about fixing), I see that TMEM is
broken on x86 machines with more than 5TB of RAM, because it is not
legal to call page_to_virt() on a struct page_info allocated from the
domheap (which is why alloc_xenheap_page() returns a void *, and
alloc_domheap_page() specifically doesn't). The easy fix for this is to
swap the allocation primitives over to using xenheap allocations, which
would remove the need for page_to_virt() and back, or a better fix would
be to not pass everything by virtual address (at which point
retaining use of the domheap is fine).
~Andrew
* Re: [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
2017-10-04 23:27 ` Andrew Cooper
@ 2017-10-05 9:34 ` Jan Beulich
2017-10-05 14:54 ` Julien Grall
1 sibling, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2017-10-05 9:34 UTC (permalink / raw)
To: Andrew Cooper
Cc: Tim Deegan, Kevin Tian, Stefano Stabellini, Wei Liu,
Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
George Dunlap, Shane Wang, Julien Grall, Ian Jackson, xen-devel,
Julien Grall, Paul Durrant, Tamas K Lengyel,
Suravee Suthikulpanit, Boris Ostrovsky, Gang Wei
>>> On 05.10.17 at 01:27, <andrew.cooper3@citrix.com> wrote:
> As for TMEM itself (Julien: This by no means blocks the patch. It is
> more an observation for Konrad to see about fixing), I see that TMEM is
> broken on x86 machines with more than 5TB of RAM, because it is not
> legal to call page_to_virt() on a struct page_info allocated from the
> domheap (which is why alloc_xenheap_page() returns a void *, and
> alloc_domheap_page() specifically doesn't). The easy fix for this is to
> swap the allocation primitives over to using xenheap allocations, which
> would remove the need for page_to_virt() and back, or a better fix would
> be to not pass everything by virtual address (at which point
> retaining use of the domheap is fine).
For this reason we have
    if ( tmem_enabled() )
    {
        printk(XENLOG_WARNING
               "TMEM physical RAM limit exceeded, disabling TMEM\n");
        tmem_disable();
    }
in setup.c.
Jan
* Re: [PATCH 1/7] xen/arm: domain_build: Clean-up insert_11_bank
2017-10-04 22:39 ` Andrew Cooper
@ 2017-10-05 11:08 ` Julien Grall
0 siblings, 0 replies; 20+ messages in thread
From: Julien Grall @ 2017-10-05 11:08 UTC (permalink / raw)
To: Andrew Cooper, xen-devel; +Cc: Stefano Stabellini
Hi Andrew,
On 04/10/17 23:39, Andrew Cooper wrote:
> On 04/10/2017 19:15, Julien Grall wrote:
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 3723dc3f78..093ebf1a8e 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -109,11 +109,11 @@ static bool insert_11_bank(struct domain *d,
>>
>> spfn = page_to_mfn(pg);
>> start = pfn_to_paddr(spfn);
>> - size = pfn_to_paddr((1 << order));
>> + size = pfn_to_paddr(1UL << order);
>>
>> D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
>> start, start + size,
>> - 1UL << (order+PAGE_SHIFT-20),
>> + 1UL << (order + PAGE_SHIFT - 20),
>
> If you are looking to be picky, you've got a double space between the
> minus and the 20. I'm sure this would be trivial to fix on commit.
Argh, on my original patch I had 2 spaces before the minus. Dropped one
before sending on xen-devel and didn't spot the one after the minus.
I will resend the series with your comments addressed.
Cheers,
--
Julien Grall
* Re: [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
2017-10-04 23:27 ` Andrew Cooper
2017-10-05 9:34 ` Jan Beulich
@ 2017-10-05 14:54 ` Julien Grall
1 sibling, 0 replies; 20+ messages in thread
From: Julien Grall @ 2017-10-05 14:54 UTC (permalink / raw)
To: Andrew Cooper, xen-devel
Cc: Jun Nakajima, Kevin Tian, Stefano Stabellini, Wei Liu,
Suravee Suthikulpanit, Razvan Cojocaru, Konrad Rzeszutek Wilk,
George Dunlap, Tim Deegan, Ian Jackson, Julien Grall,
Tamas K Lengyel, Jan Beulich, Shane Wang, Boris Ostrovsky,
Gang Wei, Paul Durrant
Hi Andrew,
On 05/10/17 00:27, Andrew Cooper wrote:
> On 04/10/2017 19:15, Julien Grall wrote:
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 093ebf1a8e..0753d03aac 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -104,11 +104,11 @@ static bool insert_11_bank(struct domain *d,
>> unsigned int order)
>> {
>> int res, i;
>> - paddr_t spfn;
>> + mfn_t smfn;
>> paddr_t start, size;
>>
>> - spfn = page_to_mfn(pg);
>> - start = pfn_to_paddr(spfn);
>> + smfn = page_to_mfn(pg);
>> + start = mfn_to_maddr(smfn);
>> size = pfn_to_paddr(1UL << order);
>
> Wouldn't it be cleaner to move this renaming into patch 1, along with an
> extra set of undef/override, to be taken out here? (perhaps not given
> the rework effort?)
I moved the clean-up to patch #1 and added a temporary override that will
be dropped in this patch.
>
>>
>> D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
>> @@ -678,8 +678,8 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
>>
>> if ( xenpmu_data )
>> {
>> - mfn = domain_page_map_to_mfn(xenpmu_data);
>> - ASSERT(mfn_valid(_mfn(mfn)));
>> + mfn = _mfn(domain_page_map_to_mfn(xenpmu_data));
>
> Seeing as you convert every(?) call to domain_page_map_to_mfn(), it
> would be cleaner to change the return type while making the change.
>
> I'd be happy for such a change being folded into this patch, because
> doing so would be by far the least disruptive way of making the change.
All of them but one are converted to _mfn(domain_page_map_to_mfn()). I have
folded the conversion into this patch.
>
>> + ASSERT(mfn_valid(mfn));
>> unmap_domain_page_global(xenpmu_data);
>> put_page_and_type(mfn_to_page(mfn));
>> }
>> @@ -1185,8 +1180,8 @@ int __mem_sharing_unshare_page(struct domain *d,
>> return -ENOMEM;
>> }
>>
>> - s = map_domain_page(_mfn(__page_to_mfn(old_page)));
>> - t = map_domain_page(_mfn(__page_to_mfn(page)));
>> + s = map_domain_page(page_to_mfn(old_page));
>> + t = map_domain_page(page_to_mfn(page));
>> memcpy(t, s, PAGE_SIZE);
>> unmap_domain_page(s);
>> unmap_domain_page(t);
>
> This whole lot could turn into copy_domain_page()
I have added a patch at the beginning of the series to use copy_domain_page.
>
>> diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
>> index 81973af124..371221a302 100644
>> --- a/xen/arch/x86/pv/descriptor-tables.c
>> +++ b/xen/arch/x86/pv/descriptor-tables.c
>> @@ -25,12 +25,6 @@
>> #include <asm/p2m.h>
>> #include <asm/pv/mm.h>
>>
>> -/* Override macros from asm/page.h to make them work with mfn_t */
>> -#undef mfn_to_page
>> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
>> -#undef page_to_mfn
>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>> -
>> /*******************
>> * Descriptor Tables
>> */
>
> If you're making this change, please take out the Descriptor Tables
> comment like you do with I/O below, because the entire file is dedicated
> to descriptor table support and it will save me one item on a cleanup
> patch :).
It is dropped now.
>
>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
>> index dd90713acf..9ccbd021ef 100644
>> --- a/xen/arch/x86/pv/emul-priv-op.c
>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>> @@ -43,16 +43,6 @@
>> #include "emulate.h"
>> #include "mm.h"
>>
>> -/* Override macros from asm/page.h to make them work with mfn_t */
>> -#undef mfn_to_page
>> -#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
>> -#undef page_to_mfn
>> -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
>> -
>> -/***********************
>> - * I/O emulation support
>> - */
>> -
>> struct priv_op_ctxt {
>> struct x86_emulate_ctxt ctxt;
>> struct {
>> @@ -873,22 +873,22 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
>> for ( entry = page; ; )
>> {
>> unsigned long ind;
>> - unsigned long mfn;
>> + mfn_t mfn;
>>
>> ind = kimage_entry_ind(entry, compat);
>> - mfn = kimage_entry_mfn(entry, compat);
>> + mfn = _mfn(kimage_entry_mfn(entry, compat));
>
> Again, modify the return type of kimage_entry_mfn() ?
I have added a patch at the beginning of the series to switch
kimage/kexec to mfn_t.
>
>> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
>> index 45ca742678..8737ef16ff 100644
>> --- a/xen/include/asm-x86/page.h
>> +++ b/xen/include/asm-x86/page.h
>> @@ -273,8 +273,8 @@ void copy_page_sse2(void *, const void *);
>> #define pfn_to_paddr(pfn) __pfn_to_paddr(pfn)
>> #define paddr_to_pfn(pa) __paddr_to_pfn(pa)
>> #define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa))
>> -#define vmap_to_mfn(va) l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
>> -#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
>> +#define vmap_to_mfn(va) _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
>
> l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
>
>> +#define vmap_to_page(va) __mfn_to_page(vmap_to_mfn(va))
>>
>> #endif /* !defined(__ASSEMBLY__) */
>>
>> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
>> index 542c0b3f20..8516a0b131 100644
>> --- a/xen/include/xen/tmem_xen.h
>> +++ b/xen/include/xen/tmem_xen.h
>> @@ -25,7 +25,7 @@
>> typedef uint32_t pagesize_t; /* like size_t, must handle largest PAGE_SIZE */
>>
>> #define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
>> -#define IS_VALID_PAGE(_pi) mfn_valid(_mfn(page_to_mfn(_pi)))
>> +#define IS_VALID_PAGE(_pi) mfn_valid(page_to_mfn(_pi))
>
> /sigh This is tautological. The definition of a "valid mfn" in this
> case is one for which we have a frametable entry, and by having a struct
> page_info in our hands, this is by definition true (unless you have a
> wild pointer, at which point your bug is elsewhere).
>
> IS_VALID_PAGE() is only ever used in assertions and never usefully, so
> instead I would remove it entirely rather than trying to fix it up.
I would be happy to remove IS_VALID_PAGE in a patch at the beginning of
the series if Konrad is happy with it.
I will probably send a new version today without dropping IS_VALID_PAGE,
though I will mention it in the patch to get feedback.
>
> As for TMEM itself (Julien: This by no means blocks the patch. It is
> more an observation for Konrad to see about fixing), I see that TMEM is
> broken on x86 machines with more than 5TB of RAM, because it is not
> legal to call page_to_virt() on a struct page_info allocated from the
> domheap (which is why alloc_xenheap_page() returns a void *, and
> alloc_domheap_page() specifically doesn't). The easy fix for this is to
> swap the allocation primitives over to using xenheap allocations, which
> would remove the need for page_to_virt() and back, or a better fix would
> be to not pass everything by virtual address (at which point
> retaining use of the domheap is fine).
Cheers,
--
Julien Grall
* Re: [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
2017-10-04 18:15 [PATCH 0/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN Julien Grall
` (6 preceding siblings ...)
2017-10-04 18:15 ` [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page " Julien Grall
@ 2017-10-06 17:31 ` Tim Deegan
7 siblings, 0 replies; 20+ messages in thread
From: Tim Deegan @ 2017-10-06 17:31 UTC (permalink / raw)
To: Julien Grall
Cc: Elena Ufimtseva, Kevin Tian, Stefano Stabellini, Wei Liu,
Jun Nakajima, Razvan Cojocaru, Konrad Rzeszutek Wilk,
George Dunlap, Andrew Cooper, Ian Jackson, xen-devel,
Julien Grall, Paul Durrant, Tamas K Lengyel, Jan Beulich,
Shane Wang, Suravee Suthikulpanit, Boris Ostrovsky, Gang Wei
At 19:15 +0100 on 04 Oct (1507144519), Julien Grall wrote:
> Hi all,
>
> Most of the users of page_to_mfn and mfn_to_page are either overriding
> the macros to make them work with mfn_t or use mfn_x/_mfn because the
> rest of the function uses mfn_t.
>
> So I think it is time to make __page_to_mfn and __mfn_to_page use
> typesafe MFN.
>
> The first 6 patches will convert some of the code to use typesafe MFN,
> easing the tree-wide conversion in patch 7.
x86 shadow code changes Acked-by: Tim Deegan <tim@xen.org>