From: Stefano Stabellini
Subject: Re: [RFC 23/23] arm/xen: Add support for 64KB page granularity
Date: Tue, 23 Jun 2015 15:19:09 +0100
In-Reply-To: <1431622863-28575-24-git-send-email-julien.grall@citrix.com>
To: Julien Grall
Cc: Russell King , ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com, tim@xen.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
List-Id: xen-devel@lists.xenproject.org

On Thu, 14 May 2015, Julien Grall wrote:
> The hypercall interface always uses 4KB page granularity. This
> requires using the Xen page definition macros when we deal with
> hypercalls.
>
> Note that pfn_to_mfn works with a Xen pfn (i.e. 4KB). We may want to
> rename pfn_to_mfn to make this explicit.
>
> We also allocate a 64KB page for the shared page even though only the
> first 4KB is used. I don't think this is really important for now, as it
> helps to have the pointer 4KB aligned (XENMEM_add_to_physmap takes a
> Xen PFN).
>
> Signed-off-by: Julien Grall
> Cc: Stefano Stabellini
> Cc: Russell King
>
>  arch/arm/include/asm/xen/page.h | 12 ++++++------
>  arch/arm/xen/enlighten.c        |  6 +++---
>  2 files changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> index 1bee8ca..ab6eb9a 100644
> --- a/arch/arm/include/asm/xen/page.h
> +++ b/arch/arm/include/asm/xen/page.h
> @@ -56,19 +56,19 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>
>  static inline xmaddr_t phys_to_machine(xpaddr_t phys)
>  {
> -	unsigned offset = phys.paddr & ~PAGE_MASK;
> -	return XMADDR(PFN_PHYS(pfn_to_mfn(PFN_DOWN(phys.paddr))) | offset);
> +	unsigned offset = phys.paddr & ~XEN_PAGE_MASK;
> +	return XMADDR(XEN_PFN_PHYS(pfn_to_mfn(XEN_PFN_DOWN(phys.paddr))) | offset);
>  }
>
>  static inline xpaddr_t machine_to_phys(xmaddr_t machine)
>  {
> -	unsigned offset = machine.maddr & ~PAGE_MASK;
> -	return XPADDR(PFN_PHYS(mfn_to_pfn(PFN_DOWN(machine.maddr))) | offset);
> +	unsigned offset = machine.maddr & ~XEN_PAGE_MASK;
> +	return XPADDR(XEN_PFN_PHYS(mfn_to_pfn(XEN_PFN_DOWN(machine.maddr))) | offset);
>  }
>  /* VIRT <-> MACHINE conversion */
>  #define virt_to_machine(v)	(phys_to_machine(XPADDR(__pa(v))))
> -#define virt_to_mfn(v)		(pfn_to_mfn(virt_to_pfn(v)))
> -#define mfn_to_virt(m)		(__va(mfn_to_pfn(m) << PAGE_SHIFT))
> +#define virt_to_mfn(v)		(pfn_to_mfn(virt_to_phys(v) >> XEN_PAGE_SHIFT))
> +#define mfn_to_virt(m)		(__va(mfn_to_pfn(m) << XEN_PAGE_SHIFT))
>
>  static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
>  {
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 224081c..dcfe251 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -93,8 +93,8 @@ static void xen_percpu_init(void)
>  	pr_info("Xen: initializing cpu%d\n", cpu);
>  	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
>
> -	info.mfn = __pa(vcpup) >> PAGE_SHIFT;
> -	info.offset = offset_in_page(vcpup);
> +	info.mfn = __pa(vcpup) >> XEN_PAGE_SHIFT;
> +	info.offset = xen_offset_in_page(vcpup);
>
>  	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
>  	BUG_ON(err);
> @@ -204,7 +204,7 @@ static int __init xen_guest_init(void)
>  	xatp.domid = DOMID_SELF;
>  	xatp.idx = 0;
>  	xatp.space = XENMAPSPACE_shared_info;
> -	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
> +	xatp.gpfn = __pa(shared_info_page) >> XEN_PAGE_SHIFT;
>  	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
>  		BUG();

What about xen_remap_domain_mfn_range? I guess we don't support that
use case on 64K guests? If so, I would appreciate an assert and/or an
error message.