Subject: Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
From: Julien Grall
To: Boris Ostrovsky, xen-devel@lists.xen.org
Cc: sstabellini@kernel.org, jgross@suse.com, linux-kernel@vger.kernel.org, stable@vger.kernel.org, Feng Kan
Date: Tue, 6 Jun 2017 17:17:54 +0100
In-Reply-To: <0e7b0066-6219-79f8-5b17-b997a400d3f6@oracle.com>

Hi,

It has been reviewed by Boris, but I don't see the patch queued. Would it
be possible to queue it for 4.12?
Cheers,

On 01/06/17 21:41, Boris Ostrovsky wrote:
> On 06/01/2017 11:38 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 01/06/17 16:16, Boris Ostrovsky wrote:
>>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>>> Hi Boris,
>>>>
>>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>>> Hi Boris,
>>>>>>
>>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity" did not go far enough to support 64KB in mmap_batch_fn.
>>>>>>>>
>>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>>>>>> Linux is using 64KB page granularity, the array of pages
>>>>>>>> (vma->vm_private_data) contains one page per 64KB. Fix it by
>>>>>>>> incrementing st->index correctly.
>>>>>>>>
>>>>>>>> Furthermore, st->va is not correctly incremented, as PAGE_SIZE !=
>>>>>>>> XEN_PAGE_SIZE.
>>>>>>>>
>>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
>>>>>>>> CC: stable@vger.kernel.org
>>>>>>>> Reported-by: Feng Kan
>>>>>>>> Signed-off-by: Julien Grall
>>>>>>>> ---
>>>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>>>>>>>>  			st->global_error = 1;
>>>>>>>>  		}
>>>>>>>>  	}
>>>>>>>> -	st->va += PAGE_SIZE * nr;
>>>>>>>> -	st->index += nr;
>>>>>>>> +	st->va += XEN_PAGE_SIZE * nr;
>>>>>>>> +	st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>>
>>>>>>>>  	return 0;
>>>>>>>>  }
>>>>>>>
>>>>>>>
>>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>>
>>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so,
>>>>>> there is no use of PAGE_MASK, as the code has been converted to
>>>>>> support 64KB page granularity.
>>>>>>
>>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>>> don't really care, as x86 only uses 4KB page granularity.
>>>>>
>>>>>
>>>>> I meant right above the change that you made. Should it also be
>>>>> replaced with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>>
>>>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va being
>>>> page aligned. So I think we want to keep PAGE_MASK here.
>>>
>>> Does this imply then that 'nr' 4K pages is an integral number of
>>> PAGE_SIZE (i.e. (nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0), and if yes ---
>>> do we test this somewhere? I don't see it.
>>
>
> I now see that this should (obviously) stay as PAGE_MASK, so
>
> Reviewed-by: Boris Ostrovsky
>
> but
>
>> nr might be smaller for the last batch. But all the intermediate
>> batches should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).
>
> how can we have nr not covering full PAGE_SIZEs? If you are using 64K
> pages, how can you map, say, only 4K (if nr == 1)?
>
> -boris
>
>>
>> I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that all
>> the intermediate batches will always be an integral number of
>> PAGE_SIZE.
>>
>> Cheers,
>>

-- 
Julien Grall