From: Elena Ufimtseva <elena.ufimtseva@oracle.com>
To: Julien Grall <julien.grall@arm.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
sstabellini@kernel.org, Feng Wu <feng.wu@intel.com>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
Jun Nakajima <jun.nakajima@intel.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Tim Deegan <tim@xen.org>,
xen-devel@lists.xen.org,
George Dunlap <george.dunlap@eu.citrix.com>,
Paul Durrant <paul.durrant@citrix.com>,
Jan Beulich <jbeulich@suse.com>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v6 04/14] xen: Use a typesafe to define INVALID_GFN
Date: Fri, 8 Jul 2016 15:05:07 -0700 [thread overview]
Message-ID: <20160708220507.GB12597@localhost.localdomain> (raw)
In-Reply-To: <577D0214.8000205@arm.com>
On Wed, Jul 06, 2016 at 02:05:24PM +0100, Julien Grall wrote:
> (CC Elena)
>
Thanks Julien!
> On 06/07/16 14:01, Julien Grall wrote:
> >Also take the opportunity to convert arch/x86/debug.c to the typesafe gfn.
> >
> >Signed-off-by: Julien Grall <julien.grall@arm.com>
> >Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> >
> >---
> >Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
>
> I forgot to update the CC list since GDSX maintainership was taken over by
> Elena. Sorry for that.
>
> Regards,
>
> >Cc: Jan Beulich <jbeulich@suse.com>
> >Cc: Paul Durrant <paul.durrant@citrix.com>
> >Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> >Cc: Jun Nakajima <jun.nakajima@intel.com>
> >Cc: Kevin Tian <kevin.tian@intel.com>
> >Cc: George Dunlap <george.dunlap@eu.citrix.com>
> >Cc: Tim Deegan <tim@xen.org>
> >Cc: Feng Wu <feng.wu@intel.com>
> >
> > Changes in v6:
> > - Add Stefano's acked-by for ARM bits
> > - Remove set of brackets when it is not necessary
> > - Add Andrew's reviewed-by
> >
> > Changes in v5:
> > - Patch added
> >---
> > xen/arch/arm/p2m.c | 4 ++--
> > xen/arch/x86/debug.c | 18 +++++++++---------
> > xen/arch/x86/domain.c | 2 +-
> > xen/arch/x86/hvm/emulate.c | 7 ++++---
> > xen/arch/x86/hvm/hvm.c | 6 +++---
> > xen/arch/x86/hvm/ioreq.c | 8 ++++----
> > xen/arch/x86/hvm/svm/nestedsvm.c | 2 +-
> > xen/arch/x86/hvm/vmx/vmx.c | 6 +++---
> > xen/arch/x86/mm/altp2m.c | 2 +-
> > xen/arch/x86/mm/hap/guest_walk.c | 10 +++++-----
> > xen/arch/x86/mm/hap/nested_ept.c | 2 +-
> > xen/arch/x86/mm/p2m-pod.c | 6 +++---
> > xen/arch/x86/mm/p2m.c | 18 +++++++++---------
> > xen/arch/x86/mm/shadow/common.c | 2 +-
> > xen/arch/x86/mm/shadow/multi.c | 2 +-
> > xen/arch/x86/mm/shadow/private.h | 2 +-
> > xen/drivers/passthrough/amd/iommu_map.c | 2 +-
> > xen/drivers/passthrough/vtd/iommu.c | 4 ++--
> > xen/drivers/passthrough/x86/iommu.c | 2 +-
> > xen/include/asm-x86/guest_pt.h | 4 ++--
> > xen/include/asm-x86/p2m.h | 2 +-
> > xen/include/xen/mm.h | 2 +-
> > 22 files changed, 57 insertions(+), 56 deletions(-)
> >
> >diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> >index d690602..c938dde 100644
> >--- a/xen/arch/arm/p2m.c
> >+++ b/xen/arch/arm/p2m.c
> >@@ -479,7 +479,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> > }
> >
> > /* If request to get default access. */
> >- if ( gfn_x(gfn) == INVALID_GFN )
> >+ if ( gfn_eq(gfn, INVALID_GFN) )
> > {
> > *access = memaccess[p2m->default_access];
> > return 0;
> >@@ -1879,7 +1879,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> > p2m->mem_access_enabled = true;
> >
> > /* If request to set default access. */
> >- if ( gfn_x(gfn) == INVALID_GFN )
> >+ if ( gfn_eq(gfn, INVALID_GFN) )
> > {
> > p2m->default_access = a;
> > return 0;
> >diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> >index 9213ea7..3030022 100644
> >--- a/xen/arch/x86/debug.c
> >+++ b/xen/arch/x86/debug.c
> >@@ -44,8 +44,7 @@ typedef unsigned char dbgbyte_t;
> >
> > /* Returns: mfn for the given (hvm guest) vaddr */
> > static mfn_t
> >-dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >- unsigned long *gfn)
> >+dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr, gfn_t *gfn)
> > {
> > mfn_t mfn;
> > uint32_t pfec = PFEC_page_present;
> >@@ -53,14 +52,14 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >
> > DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
> >
> >- *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
> >- if ( *gfn == INVALID_GFN )
> >+ *gfn = _gfn(paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec));
> >+ if ( gfn_eq(*gfn, INVALID_GFN) )
> > {
> > DBGP2("kdb:bad gfn from gva_to_gfn\n");
> > return INVALID_MFN;
> > }
> >
> >- mfn = get_gfn(dp, *gfn, &gfntype);
> >+ mfn = get_gfn(dp, gfn_x(*gfn), &gfntype);
> > if ( p2m_is_readonly(gfntype) && toaddr )
> > {
> > DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> >@@ -72,7 +71,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
> >
> > if ( mfn_eq(mfn, INVALID_MFN) )
> > {
> >- put_gfn(dp, *gfn);
> >+ put_gfn(dp, gfn_x(*gfn));
> > *gfn = INVALID_GFN;
> > }
> >
> >@@ -165,7 +164,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
> > char *va;
> > unsigned long addr = (unsigned long)gaddr;
> > mfn_t mfn;
> >- unsigned long gfn = INVALID_GFN, pagecnt;
> >+ gfn_t gfn = INVALID_GFN;
> >+ unsigned long pagecnt;
> >
> > pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
> >
> >@@ -189,8 +189,8 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
> > }
> >
> > unmap_domain_page(va);
> >- if ( gfn != INVALID_GFN )
> >- put_gfn(dp, gfn);
> >+ if ( !gfn_eq(gfn, INVALID_GFN) )
> >+ put_gfn(dp, gfn_x(gfn));
> >
> > addr += pagecnt;
> > buf += pagecnt;
> >diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> >index bb59247..c8c7e2d 100644
> >--- a/xen/arch/x86/domain.c
> >+++ b/xen/arch/x86/domain.c
> >@@ -783,7 +783,7 @@ int arch_domain_soft_reset(struct domain *d)
> > * gfn == INVALID_GFN indicates that the shared_info page was never mapped
> > * to the domain's address space and there is nothing to replace.
> > */
> >- if ( gfn == INVALID_GFN )
> >+ if ( gfn == gfn_x(INVALID_GFN) )
> > goto exit_put_page;
> >
> > if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
> >diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> >index 855af4d..c55ad7b 100644
> >--- a/xen/arch/x86/hvm/emulate.c
> >+++ b/xen/arch/x86/hvm/emulate.c
> >@@ -455,7 +455,7 @@ static int hvmemul_linear_to_phys(
> > return rc;
> > pfn = _paddr >> PAGE_SHIFT;
> > }
> >- else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == INVALID_GFN )
> >+ else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
> > {
> > if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
> > return X86EMUL_RETRY;
> >@@ -472,7 +472,8 @@ static int hvmemul_linear_to_phys(
> > npfn = paging_gva_to_gfn(curr, addr, &pfec);
> >
> > /* Is it contiguous with the preceding PFNs? If not then we're done. */
> >- if ( (npfn == INVALID_GFN) || (npfn != (pfn + (reverse ? -i : i))) )
> >+ if ( (npfn == gfn_x(INVALID_GFN)) ||
> >+ (npfn != (pfn + (reverse ? -i : i))) )
> > {
> > if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
> > return X86EMUL_RETRY;
> >@@ -480,7 +481,7 @@ static int hvmemul_linear_to_phys(
> > if ( done == 0 )
> > {
> > ASSERT(!reverse);
> >- if ( npfn != INVALID_GFN )
> >+ if ( npfn != gfn_x(INVALID_GFN) )
> > return X86EMUL_UNHANDLEABLE;
> > hvm_inject_page_fault(pfec, addr & PAGE_MASK);
> > return X86EMUL_EXCEPTION;
> >diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >index f3faf2e..bb39d5f 100644
> >--- a/xen/arch/x86/hvm/hvm.c
> >+++ b/xen/arch/x86/hvm/hvm.c
> >@@ -3039,7 +3039,7 @@ static enum hvm_copy_result __hvm_copy(
> > if ( flags & HVMCOPY_virt )
> > {
> > gfn = paging_gva_to_gfn(curr, addr, &pfec);
> >- if ( gfn == INVALID_GFN )
> >+ if ( gfn == gfn_x(INVALID_GFN) )
> > {
> > if ( pfec & PFEC_page_paged )
> > return HVMCOPY_gfn_paged_out;
> >@@ -3154,7 +3154,7 @@ static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
> > count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
> >
> > gfn = paging_gva_to_gfn(curr, addr, &pfec);
> >- if ( gfn == INVALID_GFN )
> >+ if ( gfn == gfn_x(INVALID_GFN) )
> > {
> > if ( pfec & PFEC_page_paged )
> > return HVMCOPY_gfn_paged_out;
> >@@ -5298,7 +5298,7 @@ static int do_altp2m_op(
> > a.u.enable_notify.vcpu_id != curr->vcpu_id )
> > rc = -EINVAL;
> >
> >- if ( (gfn_x(vcpu_altp2m(curr).veinfo_gfn) != INVALID_GFN) ||
> >+ if ( !gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN) ||
> > mfn_eq(get_gfn_query_unlocked(curr->domain,
> > a.u.enable_notify.gfn, &p2mt), INVALID_MFN) )
> > return -EINVAL;
> >diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> >index 7148ac4..d2245e2 100644
> >--- a/xen/arch/x86/hvm/ioreq.c
> >+++ b/xen/arch/x86/hvm/ioreq.c
> >@@ -204,7 +204,7 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
> > {
> > unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
> >
> >- if ( gmfn != INVALID_GFN )
> >+ if ( gmfn != gfn_x(INVALID_GFN) )
> > set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> > }
> >
> >@@ -420,7 +420,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
> > if ( rc )
> > return rc;
> >
> >- if ( bufioreq_pfn != INVALID_GFN )
> >+ if ( bufioreq_pfn != gfn_x(INVALID_GFN) )
> > rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
> >
> > if ( rc )
> >@@ -434,8 +434,8 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
> > bool_t handle_bufioreq)
> > {
> > struct domain *d = s->domain;
> >- unsigned long ioreq_pfn = INVALID_GFN;
> >- unsigned long bufioreq_pfn = INVALID_GFN;
> >+ unsigned long ioreq_pfn = gfn_x(INVALID_GFN);
> >+ unsigned long bufioreq_pfn = gfn_x(INVALID_GFN);
> > int rc;
> >
> > if ( is_default )
> >diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
> >index 9d2ac09..f9b38ab 100644
> >--- a/xen/arch/x86/hvm/svm/nestedsvm.c
> >+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
> >@@ -1200,7 +1200,7 @@ nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
> > /* Walk the guest-supplied NPT table, just as if it were a pagetable */
> > gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
> >
> >- if ( gfn == INVALID_GFN )
> >+ if ( gfn == gfn_x(INVALID_GFN) )
> > return NESTEDHVM_PAGEFAULT_INJECT;
> >
> > *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
> >diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> >index a061420..088f454 100644
> >--- a/xen/arch/x86/hvm/vmx/vmx.c
> >+++ b/xen/arch/x86/hvm/vmx/vmx.c
> >@@ -2057,13 +2057,13 @@ static int vmx_vcpu_emulate_vmfunc(struct cpu_user_regs *regs)
> > static bool_t vmx_vcpu_emulate_ve(struct vcpu *v)
> > {
> > bool_t rc = 0, writable;
> >- unsigned long gfn = gfn_x(vcpu_altp2m(v).veinfo_gfn);
> >+ gfn_t gfn = vcpu_altp2m(v).veinfo_gfn;
> > ve_info_t *veinfo;
> >
> >- if ( gfn == INVALID_GFN )
> >+ if ( gfn_eq(gfn, INVALID_GFN) )
> > return 0;
> >
> >- veinfo = hvm_map_guest_frame_rw(gfn, 0, &writable);
> >+ veinfo = hvm_map_guest_frame_rw(gfn_x(gfn), 0, &writable);
> > if ( !veinfo )
> > return 0;
> > if ( !writable || veinfo->semaphore != 0 )
> >diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
> >index 10605c8..930bdc2 100644
> >--- a/xen/arch/x86/mm/altp2m.c
> >+++ b/xen/arch/x86/mm/altp2m.c
> >@@ -26,7 +26,7 @@ altp2m_vcpu_reset(struct vcpu *v)
> > struct altp2mvcpu *av = &vcpu_altp2m(v);
> >
> > av->p2midx = INVALID_ALTP2M;
> >- av->veinfo_gfn = _gfn(INVALID_GFN);
> >+ av->veinfo_gfn = INVALID_GFN;
> > }
> >
> > void
> >diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
> >index d2716f9..1b1a15d 100644
> >--- a/xen/arch/x86/mm/hap/guest_walk.c
> >+++ b/xen/arch/x86/mm/hap/guest_walk.c
> >@@ -70,14 +70,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
> > if ( top_page )
> > put_page(top_page);
> > p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
> >- return INVALID_GFN;
> >+ return gfn_x(INVALID_GFN);
> > }
> > if ( p2m_is_shared(p2mt) )
> > {
> > pfec[0] = PFEC_page_shared;
> > if ( top_page )
> > put_page(top_page);
> >- return INVALID_GFN;
> >+ return gfn_x(INVALID_GFN);
> > }
> > if ( !top_page )
> > {
> >@@ -110,12 +110,12 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
> > ASSERT(p2m_is_hostp2m(p2m));
> > pfec[0] = PFEC_page_paged;
> > p2m_mem_paging_populate(p2m->domain, gfn_x(gfn));
> >- return INVALID_GFN;
> >+ return gfn_x(INVALID_GFN);
> > }
> > if ( p2m_is_shared(p2mt) )
> > {
> > pfec[0] = PFEC_page_shared;
> >- return INVALID_GFN;
> >+ return gfn_x(INVALID_GFN);
> > }
> >
> > if ( page_order )
> >@@ -147,7 +147,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
> > if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
> > pfec[0] &= ~PFEC_insn_fetch;
> >
> >- return INVALID_GFN;
> >+ return gfn_x(INVALID_GFN);
> > }
> >
> >
> >diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
> >index 94cf832..02b27b1 100644
> >--- a/xen/arch/x86/mm/hap/nested_ept.c
> >+++ b/xen/arch/x86/mm/hap/nested_ept.c
> >@@ -236,7 +236,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
> > ept_walk_t gw;
> > rwx_acc &= EPTE_RWX_MASK;
> >
> >- *l1gfn = INVALID_GFN;
> >+ *l1gfn = gfn_x(INVALID_GFN);
> >
> > rc = nept_walk_tables(v, l2ga, &gw);
> > switch ( rc )
> >diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> >index f384589..149f529 100644
> >--- a/xen/arch/x86/mm/p2m-pod.c
> >+++ b/xen/arch/x86/mm/p2m-pod.c
> >@@ -1003,7 +1003,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
> > unsigned int idx = (mrp->idx + i++) % ARRAY_SIZE(mrp->list);
> > unsigned long gfn = mrp->list[idx];
> >
> >- if ( gfn != INVALID_GFN )
> >+ if ( gfn != gfn_x(INVALID_GFN) )
> > {
> > if ( gfn & POD_LAST_SUPERPAGE )
> > {
> >@@ -1020,7 +1020,7 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
> > else
> > p2m_pod_zero_check(p2m, &gfn, 1);
> >
> >- mrp->list[idx] = INVALID_GFN;
> >+ mrp->list[idx] = gfn_x(INVALID_GFN);
> > }
> >
> > } while ( (p2m->pod.count == 0) && (i < ARRAY_SIZE(mrp->list)) );
> >@@ -1031,7 +1031,7 @@ static void pod_eager_record(struct p2m_domain *p2m,
> > {
> > struct pod_mrp_list *mrp = &p2m->pod.mrp;
> >
> >- ASSERT(gfn != INVALID_GFN);
> >+ ASSERT(gfn != gfn_x(INVALID_GFN));
> >
> > mrp->list[mrp->idx++] =
> > gfn | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
> >diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> >index b93c8a2..ff0cce8 100644
> >--- a/xen/arch/x86/mm/p2m.c
> >+++ b/xen/arch/x86/mm/p2m.c
> >@@ -76,7 +76,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
> > p2m->np2m_base = P2M_BASE_EADDR;
> >
> > for ( i = 0; i < ARRAY_SIZE(p2m->pod.mrp.list); ++i )
> >- p2m->pod.mrp.list[i] = INVALID_GFN;
> >+ p2m->pod.mrp.list[i] = gfn_x(INVALID_GFN);
> >
> > if ( hap_enabled(d) && cpu_has_vmx )
> > ret = ept_p2m_init(p2m);
> >@@ -1863,7 +1863,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> > }
> >
> > /* If request to set default access. */
> >- if ( gfn_x(gfn) == INVALID_GFN )
> >+ if ( gfn_eq(gfn, INVALID_GFN) )
> > {
> > p2m->default_access = a;
> > return 0;
> >@@ -1932,7 +1932,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> > };
> >
> > /* If request to get default access. */
> >- if ( gfn_x(gfn) == INVALID_GFN )
> >+ if ( gfn_eq(gfn, INVALID_GFN) )
> > {
> > *access = memaccess[p2m->default_access];
> > return 0;
> >@@ -2113,8 +2113,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
> > mode = paging_get_nestedmode(v);
> > l2_gfn = mode->gva_to_gfn(v, p2m, va, pfec);
> >
> >- if ( l2_gfn == INVALID_GFN )
> >- return INVALID_GFN;
> >+ if ( l2_gfn == gfn_x(INVALID_GFN) )
> >+ return gfn_x(INVALID_GFN);
> >
> > /* translate l2 guest gfn into l1 guest gfn */
> > rv = nestedhap_walk_L1_p2m(v, l2_gfn, &l1_gfn, &l1_page_order, &l1_p2ma,
> >@@ -2123,7 +2123,7 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
> > !!(*pfec & PFEC_insn_fetch));
> >
> > if ( rv != NESTEDHVM_PAGEFAULT_DONE )
> >- return INVALID_GFN;
> >+ return gfn_x(INVALID_GFN);
> >
> > /*
> > * Sanity check that l1_gfn can be used properly as a 4K mapping, even
> >@@ -2415,7 +2415,7 @@ static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
> > struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
> > struct ept_data *ept;
> >
> >- p2m->min_remapped_gfn = INVALID_GFN;
> >+ p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
> > p2m->max_remapped_gfn = 0;
> > ept = &p2m->ept;
> > ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> >@@ -2551,7 +2551,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
> >
> > mfn = ap2m->get_entry(ap2m, gfn_x(old_gfn), &t, &a, 0, NULL, NULL);
> >
> >- if ( gfn_x(new_gfn) == INVALID_GFN )
> >+ if ( gfn_eq(new_gfn, INVALID_GFN) )
> > {
> > if ( mfn_valid(mfn) )
> > p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
> >@@ -2613,7 +2613,7 @@ static void p2m_reset_altp2m(struct p2m_domain *p2m)
> > /* Uninit and reinit ept to force TLB shootdown */
> > ept_p2m_uninit(p2m);
> > ept_p2m_init(p2m);
> >- p2m->min_remapped_gfn = INVALID_GFN;
> >+ p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
> > p2m->max_remapped_gfn = 0;
> > }
> >
> >diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> >index 1c0b6cd..61ccddf 100644
> >--- a/xen/arch/x86/mm/shadow/common.c
> >+++ b/xen/arch/x86/mm/shadow/common.c
> >@@ -1707,7 +1707,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
> >
> > /* Translate the VA to a GFN. */
> > gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
> >- if ( gfn == INVALID_GFN )
> >+ if ( gfn == gfn_x(INVALID_GFN) )
> > {
> > if ( is_hvm_vcpu(v) )
> > hvm_inject_page_fault(pfec, vaddr);
> >diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> >index f892e2f..e54c8b7 100644
> >--- a/xen/arch/x86/mm/shadow/multi.c
> >+++ b/xen/arch/x86/mm/shadow/multi.c
> >@@ -3660,7 +3660,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
> > */
> > if ( is_hvm_vcpu(v) && !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
> > pfec[0] &= ~PFEC_insn_fetch;
> >- return INVALID_GFN;
> >+ return gfn_x(INVALID_GFN);
> > }
> > gfn = guest_walk_to_gfn(&gw);
> >
> >diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
> >index c424ad6..824796f 100644
> >--- a/xen/arch/x86/mm/shadow/private.h
> >+++ b/xen/arch/x86/mm/shadow/private.h
> >@@ -796,7 +796,7 @@ static inline unsigned long vtlb_lookup(struct vcpu *v,
> > unsigned long va, uint32_t pfec)
> > {
> > unsigned long page_number = va >> PAGE_SHIFT;
> >- unsigned long frame_number = INVALID_GFN;
> >+ unsigned long frame_number = gfn_x(INVALID_GFN);
> > int i = vtlb_hash(page_number);
> >
> > spin_lock(&v->arch.paging.vtlb_lock);
> >diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> >index c758459..b8c0a48 100644
> >--- a/xen/drivers/passthrough/amd/iommu_map.c
> >+++ b/xen/drivers/passthrough/amd/iommu_map.c
> >@@ -555,7 +555,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
> > unsigned long old_root_mfn;
> > struct domain_iommu *hd = dom_iommu(d);
> >
> >- if ( gfn == INVALID_GFN )
> >+ if ( gfn == gfn_x(INVALID_GFN) )
> > return -EADDRNOTAVAIL;
> > ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> >
> >diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> >index f010612..c322b9f 100644
> >--- a/xen/drivers/passthrough/vtd/iommu.c
> >+++ b/xen/drivers/passthrough/vtd/iommu.c
> >@@ -611,7 +611,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
> > if ( iommu_domid == -1 )
> > continue;
> >
> >- if ( page_count != 1 || gfn == INVALID_GFN )
> >+ if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
> > rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
> > 0, flush_dev_iotlb);
> > else
> >@@ -640,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
> >
> > static int __must_check iommu_flush_iotlb_all(struct domain *d)
> > {
> >- return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
> >+ return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
> > }
> >
> > /* clear one page's page table */
> >diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> >index cd435d7..69cd6c5 100644
> >--- a/xen/drivers/passthrough/x86/iommu.c
> >+++ b/xen/drivers/passthrough/x86/iommu.c
> >@@ -61,7 +61,7 @@ int arch_iommu_populate_page_table(struct domain *d)
> > unsigned long mfn = page_to_mfn(page);
> > unsigned long gfn = mfn_to_gmfn(d, mfn);
> >
> >- if ( gfn != INVALID_GFN )
> >+ if ( gfn != gfn_x(INVALID_GFN) )
> > {
> > ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> > BUG_ON(SHARED_M2P(gfn));
> >diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
> >index a8d980c..79ed4ff 100644
> >--- a/xen/include/asm-x86/guest_pt.h
> >+++ b/xen/include/asm-x86/guest_pt.h
> >@@ -32,7 +32,7 @@
> > #error GUEST_PAGING_LEVELS not defined
> > #endif
> >
> >-#define VALID_GFN(m) (m != INVALID_GFN)
> >+#define VALID_GFN(m) (m != gfn_x(INVALID_GFN))
> >
> > static inline int
> > valid_gfn(gfn_t m)
> >@@ -251,7 +251,7 @@ static inline gfn_t
> > guest_walk_to_gfn(walk_t *gw)
> > {
> > if ( !(guest_l1e_get_flags(gw->l1e) & _PAGE_PRESENT) )
> >- return _gfn(INVALID_GFN);
> >+ return INVALID_GFN;
> > return guest_l1e_get_gfn(gw->l1e);
> > }
> >
> >diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> >index 4ab3574..194020e 100644
> >--- a/xen/include/asm-x86/p2m.h
> >+++ b/xen/include/asm-x86/p2m.h
> >@@ -324,7 +324,7 @@ struct p2m_domain {
> > #define NR_POD_MRP_ENTRIES 32
> >
> > /* Encode ORDER_2M superpage in top bit of GFN */
> >-#define POD_LAST_SUPERPAGE (INVALID_GFN & ~(INVALID_GFN >> 1))
> >+#define POD_LAST_SUPERPAGE (gfn_x(INVALID_GFN) & ~(gfn_x(INVALID_GFN) >> 1))
> >
> > unsigned long list[NR_POD_MRP_ENTRIES];
> > unsigned int idx;
> >diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> >index 7f207ec..58bc0b8 100644
> >--- a/xen/include/xen/mm.h
> >+++ b/xen/include/xen/mm.h
> >@@ -84,7 +84,7 @@ static inline bool_t mfn_eq(mfn_t x, mfn_t y)
> >
> > TYPE_SAFE(unsigned long, gfn);
> > #define PRI_gfn "05lx"
> >-#define INVALID_GFN (~0UL)
> >+#define INVALID_GFN _gfn(~0UL)
> >
> > #ifndef gfn_t
> > #define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
> >
>
> --
> Julien Grall
Acked-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel