Subject: Re: [PATCH] stubdom: foreignmemory: Fix build after 0dbb4be739c5
To: Jan Beulich
Cc: Julien Grall, Ian Jackson, Wei Liu, xen-devel@lists.xenproject.org,
 Andrew Cooper, Costin Lupu, Juergen Gross
From: Julien Grall
Date: Tue, 13 Jul 2021 17:15:54 +0100

Hi Jan,

On 13/07/2021 16:52, Jan Beulich wrote:
> On 13.07.2021 16:33, Julien Grall wrote:
>> On 13/07/2021 15:23, Jan Beulich wrote:
>>> On 13.07.2021 16:19, Julien Grall wrote:
>>>> On 13/07/2021 15:14, Jan Beulich wrote:
>>>>>> And I don't think it should be named XC_PAGE_*, but rather
>>>>>> XEN_PAGE_*.
>>>>>
>>>>> Even that doesn't seem right to me, at least in principle. There
>>>>> shouldn't be a build time setting when it may vary at runtime. IOW
>>>>> on Arm I think a runtime query to the hypervisor would be needed
>>>>> instead.
>>>>
>>>> Yes, we want to be able to use the same userspace/OS without
>>>> rebuilding for a specific hypervisor page size.
>>>>
>>>>> And thinking even more generally, perhaps there could also be
>>>>> mixed (base) page sizes in use at run time, so it may need to be a
>>>>> bit mask which gets returned.
>>>>
>>>> I am not sure I understand this. Are you saying the hypervisor may
>>>> use different page sizes at the same time?
>>>
>>> I think so, yes. And I further think the hypervisor could even allow
>>> its guests to do so.
>>
>> This is already the case on Arm. We need to differentiate between the
>> page size used by the guest and the one used by Xen for the stage-2
>> page table (what you call EPT on x86).
>>
>> In this case, we are talking about the page size used by the
>> hypervisor to configure the stage-2 page table.
>>
>>> There would be a distinction between the granularity at which RAM
>>> gets allocated and the granularity at which page mappings (RAM or
>>> other) can be established. Which yields an environment which I'd say
>>> has no clear "system page size".
>>
>> I don't quite understand why you would allocate and establish the
>> memory with different page sizes in the hypervisor. Can you give an
>> example?
> 
> Pages may get allocated in 16k chunks, but there may be ways to map
> 4k MMIO regions, 4k grants, etc. Due to the 16k allocation granularity
> you'd e.g. still balloon pages in and out at 16k granularity.

Right, 16KB is a multiple of 4KB, so a guest could say "Please allocate
a contiguous chunk of 4 4KB pages".

From my understanding, you are suggesting to tell the guest that we
"support 4KB, 16KB, 64KB...". However, it should be sufficient to say
"we support 4KB and all its multiples". For a hypervisor configured
with 16KB (or 64KB) as the smallest page granularity, we would say "we
support 16KB (resp. 64KB) and all its multiples".

So the only thing we need is a way to query the smallest page
granularity supported. This could be a shift, a size, whatever... (see
the sketch at the end of this message).

If the guest supports a smaller page granularity than the hypervisor,
then the guest would need to make sure to adapt the ballooning,
grants... so they are at least a multiple of the page granularity
supported by the hypervisor.

Cheers,

-- 
Julien Grall
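
A minimal C sketch of the query-and-round-up scheme discussed above.
xen_min_page_shift() is a hypothetical stand-in for whatever runtime
query the hypervisor would expose (a shift, as suggested in the
message); it is not an existing hypercall or libxc interface:

#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical runtime query: returns the smallest page granularity
 * supported by the hypervisor, expressed as a shift (12 for 4KB,
 * 14 for 16KB, 16 for 64KB).  Assumed for illustration only.
 */
extern unsigned int xen_min_page_shift(void);

/*
 * Round a guest-chosen size up to the next multiple of the
 * hypervisor's granularity, as a guest with a smaller page size
 * would have to do for ballooning or grant operations.
 */
static uint64_t round_to_xen_granularity(uint64_t size)
{
    uint64_t gran = UINT64_C(1) << xen_min_page_shift();

    return (size + gran - 1) & ~(gran - 1);
}

/*
 * Check whether a size is usable as-is, i.e. a non-zero multiple of
 * the hypervisor's smallest granularity.  This is all the guest needs
 * under the "smallest granularity and all its multiples" model, with
 * no bit mask of individual page sizes required.
 */
static bool is_xen_granularity_multiple(uint64_t size)
{
    uint64_t gran = UINT64_C(1) << xen_min_page_shift();

    return size != 0 && (size & (gran - 1)) == 0;
}

For example, with a hypervisor reporting a 16KB granularity,
round_to_xen_granularity(4096) yields 16384: a 4KB guest would balloon
in and out four of its own pages at a time, matching the 16k behaviour
described above.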