From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Vrabel
Subject: Re: [RFC 22/23] xen/privcmd: Add support for Linux 64KB page granularity
Date: Tue, 19 May 2015 16:39:31 +0100
Message-ID: <555B5933.9040405__16119.5697416135$1432050129$gmane$org@citrix.com>
References: <1431622863-28575-1-git-send-email-julien.grall@citrix.com>
 <1431622863-28575-23-git-send-email-julien.grall@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1431622863-28575-23-git-send-email-julien.grall@citrix.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Julien Grall, xen-devel@lists.xenproject.org
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com, tim@xen.org,
 linux-kernel@vger.kernel.org, David Vrabel, Boris Ostrovsky,
 linux-arm-kernel@lists.infradead.org
List-Id: xen-devel@lists.xenproject.org

On 14/05/15 18:01, Julien Grall wrote:
> The hypercall interface (as well as the toolstack) always uses 4KB
> page granularity. When the toolstack asks for a series of guest PFNs
> to be mapped in a batch, it expects the pages to be mapped
> contiguously in its virtual memory.
>
> When Linux is using 64KB page granularity, the privcmd driver will
> have to map multiple Xen PFNs into a single Linux page.
>
> Note that this solution works for any page granularity which is a
> multiple of 4KB.
[...]
> --- a/drivers/xen/xlate_mmu.c
> +++ b/drivers/xen/xlate_mmu.c
> @@ -63,6 +63,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
>  
>  struct remap_data {
>  	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
> +	xen_pfn_t *egmfn; /* end foreign domain's gmfn */

I don't know what you mean by "end foreign domain".

>  	pgprot_t prot;
>  	domid_t domid;
>  	struct vm_area_struct *vma;
> @@ -78,17 +79,23 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>  {
>  	struct remap_data *info = data;
>  	struct page *page = info->pages[info->index++];
> -	unsigned long pfn = page_to_pfn(page);
> -	pte_t pte = pte_mkspecial(pfn_pte(pfn, info->prot));
> +	unsigned long pfn = xen_page_to_pfn(page);
> +	pte_t pte = pte_mkspecial(pfn_pte(page_to_pfn(page), info->prot));
>  	int rc;
> -
> -	rc = map_foreign_page(pfn, *info->fgmfn, info->domid);
> -	*info->err_ptr++ = rc;
> -	if (!rc) {
> -		set_pte_at(info->vma->vm_mm, addr, ptep, pte);
> -		info->mapped++;
> +	uint32_t i;
> +
> +	for (i = 0; i < XEN_PFN_PER_PAGE; i++) {
> +		if (info->fgmfn == info->egmfn)
> +			break;
> +
> +		rc = map_foreign_page(pfn++, *info->fgmfn, info->domid);
> +		*info->err_ptr++ = rc;
> +		if (!rc) {
> +			set_pte_at(info->vma->vm_mm, addr, ptep, pte);
> +			info->mapped++;
> +		}
> +		info->fgmfn++;

This doesn't make any sense to me.

Don't you need to gather the foreign GFNs into batches of
PAGE_SIZE / XEN_PAGE_SIZE and map these all at once into a 64 KiB
page?

I don't see how you can have a set_pte_at() for each foreign GFN.

David
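
Roughly the shape I have in mind is sketched below. This is untested,
it reuses the fgmfn/egmfn fields and helpers from your patch, and the
unwinding of a partially mapped 64 KiB page is left out. Ideally the
per-frame map_foreign_page() calls would also be folded into a single
batched XENMEM_add_to_physmap_range hypercall per Linux page; the
point here is only that there is one set_pte_at() per Linux page
rather than one per foreign GFN:

static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
			void *data)
{
	struct remap_data *info = data;
	struct page *page = info->pages[info->index++];
	/* First 4KB Xen frame backing this (possibly 64KB) Linux page. */
	unsigned long pfn = xen_page_to_pfn(page);
	pte_t pte = pte_mkspecial(pfn_pte(page_to_pfn(page), info->prot));
	unsigned int i, nr = 0;
	bool failed = false;
	int rc;

	/* Gather the PAGE_SIZE / XEN_PAGE_SIZE foreign GFNs for this page. */
	for (i = 0; i < XEN_PFN_PER_PAGE && info->fgmfn != info->egmfn; i++) {
		rc = map_foreign_page(pfn + i, *info->fgmfn++, info->domid);
		*info->err_ptr++ = rc;
		if (rc)
			failed = true;
		nr++;
	}

	/*
	 * Only write the PTE once every frame behind this Linux page has
	 * been mapped, so there is exactly one set_pte_at() per page.
	 */
	if (nr && !failed) {
		set_pte_at(info->vma->vm_mm, addr, ptep, pte);
		info->mapped++;
	}
	return 0;
}

(Treating a partially backed or partially failed page as unmapped here
is just to keep the sketch simple; the error reporting via err_ptr
needs more thought.)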