From mboxrd@z Thu Jan 1 00:00:00 1970
From: Boris Ostrovsky
Subject: Re: [PATCH v2 19/20] xen/privcmd: Add support for Linux 64KB page granularity
Date: Mon, 13 Jul 2015 16:13:24 -0400
Message-ID: <55A41BE4.3080104__40742.1400108889$1436818563$gmane$org@oracle.com>
References: <1436474552-31789-1-git-send-email-julien.grall@citrix.com>
 <1436474552-31789-20-git-send-email-julien.grall@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Return-path:
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
 by lists.xen.org with esmtp (Exim 4.72) (envelope-from )
 id 1ZEk7H-00053a-TQ
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2015 20:14:20 +0000
In-Reply-To: <1436474552-31789-20-git-send-email-julien.grall@citrix.com>
List-Unsubscribe: ,
List-Post:
List-Help:
List-Subscribe: ,
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Julien Grall , xen-devel@lists.xenproject.org
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
 linux-kernel@vger.kernel.org, David Vrabel ,
 linux-arm-kernel@lists.infradead.org
List-Id: xen-devel@lists.xenproject.org

On 07/09/2015 04:42 PM, Julien Grall wrote:
> -
>   struct remap_data {
>   	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
> +	xen_pfn_t *efgmfn; /* pointer to the end of the fgmfn array */

It might be better to keep the size of the fgmfn array instead.
>
> +static int unmap_gfn(struct page *page, unsigned long pfn, void *data)
> +{
> +	int *nr = data;
> +	struct xen_remove_from_physmap xrp;
> +
> +	/* The Linux Page may not have been fully mapped to Xen */
> +	if (!*nr)
> +		return 0;
> +
> +	xrp.domid = DOMID_SELF;
> +	xrp.gpfn = pfn;
> +	(void)HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
> +
> +	(*nr)--;
> +
> +	return 0;
> +}
> +
>   int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
>   			      int nr, struct page **pages)
>   {
>   	int i;
> +	int nr_page = round_up(nr, XEN_PFN_PER_PAGE);
>
> -	for (i = 0; i < nr; i++) {
> -		struct xen_remove_from_physmap xrp;
> -		unsigned long pfn;
> +	for (i = 0; i < nr_page; i++) {
> +		/* unmap_gfn guarantees ret == 0 */
> +		BUG_ON(xen_apply_to_page(pages[i], unmap_gfn, &nr));

TBH, I am not sure how useful the xen_apply_to_page() routine is, in this
patch especially, but also in others.

-boris