Date: Tue, 19 Jan 2016 23:10:12 +0800
From: Shannon Zhao
To: Stefano Stabellini, Shannon Zhao
CC: linux-arm-kernel@lists.infradead.org, ard.biesheuvel@linaro.org, stefano.stabellini@citrix.com, david.vrabel@citrix.com, mark.rutland@arm.com, devicetree@vger.kernel.org, linux-efi@vger.kernel.org, catalin.marinas@arm.com, will.deacon@arm.com, linux-kernel@vger.kernel.org, xen-devel@lists.xen.org, julien.grall@citrix.com, peter.huangpeng@huawei.com
Subject: Re: [Xen-devel] [PATCH v2 03/16] Xen: xlate: Use page_to_xen_pfn instead of page_to_pfn

On 2016/1/19 22:59, Stefano Stabellini wrote:
> On Mon, 18 Jan 2016, Shannon Zhao wrote:
>> On 2016/1/16 1:08, Stefano Stabellini wrote:
>>> On Fri, 15 Jan 2016, Shannon Zhao wrote:
>>>> From: Shannon Zhao
>>>>
>>>> Use page_to_xen_pfn in case of 64KB pages.
>>>>
>>>> Signed-off-by: Shannon Zhao
>>>> ---
>>>>  drivers/xen/xlate_mmu.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
>>>> index 9692656..b9fcc2c 100644
>>>> --- a/drivers/xen/xlate_mmu.c
>>>> +++ b/drivers/xen/xlate_mmu.c
>>>> @@ -227,7 +227,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>>>>  		return rc;
>>>>  	}
>>>>  	for (i = 0; i < nr_grant_frames; i++)
>>>> -		pfns[i] = page_to_pfn(pages[i]);
>>>> +		pfns[i] = page_to_xen_pfn(pages[i]);
>>>
>>> Shannon, thanks for the patch.
>>>
>>> Keeping in mind that in the 64K case, kernel pages are 64K but Xen pages
>>> are still 4K, I think you also need to allocate
>>> (nr_grant_frames/XEN_PFN_PER_PAGE) kernel pages (assuming that they are
>>> allocated contiguously): nr_grant_frames refers to 4K pages, while
>>> xen_xlate_map_ballooned_pages is allocating pages at a 64K granularity
>>> (sizeof(pages[0]) == 64K).
>>>
>>> Be careful that alloc_xenballooned_pages deals with 64K pages (because
>>> it deals with kernel pages), while xen_pfn_t is always 4K based (because
>>> it deals with Xen pfns).
>>>
>>> Please test it with and without CONFIG_ARM64_64K_PAGES. Thanks!
>>>
>> Stefano, thanks for your explanation. How about the patch below?
>
> Good work, it looks like you covered all the bases. I think it should work,
> but I haven't tested it myself. Just one note below.
>
>> diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
>> index 9692656..e1f7c95 100644
>> --- a/drivers/xen/xlate_mmu.c
>> +++ b/drivers/xen/xlate_mmu.c
>> @@ -207,9 +207,12 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>>  	void *vaddr;
>>  	int rc;
>>  	unsigned int i;
>> +	unsigned long nr_pages;
>> +	xen_pfn_t xen_pfn = 0;
>>
>>  	BUG_ON(nr_grant_frames == 0);
>> -	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
>> +	nr_pages = DIV_ROUND_UP(nr_grant_frames, XEN_PFN_PER_PAGE);
>> +	pages = kcalloc(nr_pages, sizeof(pages[0]), GFP_KERNEL);
>>  	if (!pages)
>>  		return -ENOMEM;
>>
>> @@ -218,22 +221,25 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>>  		kfree(pages);
>>  		return -ENOMEM;
>>  	}
>> -	rc = alloc_xenballooned_pages(nr_grant_frames, pages);
>> +	rc = alloc_xenballooned_pages(nr_pages, pages);
>>  	if (rc) {
>> -		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
>> -			nr_grant_frames, rc);
>> +		pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
>> +			nr_pages, rc);
>>  		kfree(pages);
>>  		kfree(pfns);
>>  		return rc;
>>  	}
>> -	for (i = 0; i < nr_grant_frames; i++)
>> -		pfns[i] = page_to_pfn(pages[i]);
>> +	for (i = 0; i < nr_grant_frames; i++) {
>> +		if ((i % XEN_PFN_PER_PAGE) == 0)
>> +			xen_pfn = page_to_xen_pfn(pages[i / XEN_PFN_PER_PAGE]);
>> +		pfns[i] = xen_pfn++;
>> +	}
>
> We might want to:
>
> 	pfns[i] = pfn_to_gfn(xen_pfn++);
>
> for consistency, even though for autotranslate guests pfn_to_gfn always
> returns pfn.
>
Ok, will add. Thanks.
>
>> -	vaddr = vmap(pages, nr_grant_frames, 0, PAGE_KERNEL);
>> +	vaddr = vmap(pages, nr_pages, 0, PAGE_KERNEL);
>>  	if (!vaddr) {
>> -		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
>> -			nr_grant_frames, rc);
>> -		free_xenballooned_pages(nr_grant_frames, pages);
>> +		pr_warn("%s Couldn't map %ld pages rc:%d\n", __func__,
>> +			nr_pages, rc);
>> +		free_xenballooned_pages(nr_pages, pages);
>>  		kfree(pages);
>>  		kfree(pfns);
>>  		return -ENOMEM;
>>
>> --
>> Shannon
>>

-- 
Shannon