From: Julien Grall <julien@xen.org>
To: Henry Wang <xin.wang2@amd.com>, xen-devel@lists.xenproject.org
Cc: "Anthony PERARD" <anthony.perard@citrix.com>,
	"Juergen Gross" <jgross@suse.com>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Jan Beulich" <jbeulich@suse.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Bertrand Marquis" <bertrand.marquis@arm.com>,
	"Michal Orzel" <michal.orzel@amd.com>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>,
	"Alec Kwapis" <alec.kwapis@medtronic.com>
Subject: Re: [PATCH] xen/common: Do not allocate magic pages 1:1 for direct mapped domains
Date: Wed, 28 Feb 2024 10:35:37 +0000	[thread overview]
Message-ID: <3982ba47-6709-47e3-a9c2-e2d3b4a2d8e3@xen.org> (raw)
In-Reply-To: <a84aeb87-17e8-4195-90cb-7b0123064106@amd.com>

Hi Henry,

On 27/02/2024 13:17, Henry Wang wrote:
> (-RISC-V and PPC people to avoid spamming their inboxes, as this is 
> quite Arm-specific)
> 
> Hi Julien,
> 
> On 2/26/2024 5:13 PM, Julien Grall wrote:
>> Hi Henry,
>>
>> Welcome back!
> 
> Thanks!
> 
>> On 26/02/2024 01:19, Henry Wang wrote:
>>> An error message can be seen from the init-dom0less application on
>>> direct-mapped 1:1 domains:
>>> ```
>>> Allocating magic pages
>>> memory.c:238:d0v0 mfn 0x39000 doesn't belong to d1
>>> Error on alloc magic pages
>>> ```
>>>
>>> This is because populate_physmap() automatically assumes gfn == mfn
>>> for direct mapped domains. This cannot be true for the magic pages
>>> that are allocated later for Dom0less DomUs from the init-dom0less
>>> helper application executed in Dom0.
>>>
>>> Force populate_physmap to take the "normal" memory allocation route for
>>> the magic pages even for 1:1 Dom0less DomUs. This should work as long
>>> as the 1:1 Dom0less DomU doesn't have anything else mapped at the same
>>> guest address as the magic pages:
>>> - gfn 0x39000 address 0x39000000
>>> - gfn 0x39001 address 0x39001000
>>> - gfn 0x39002 address 0x39002000
>>> - gfn 0x39003 address 0x39003000
>>
>> This is very fragile. You are making the assumption that the magic 
>> pages are not clashing with any RAM region. The layout defined in 
>> arch-arm.h has been designed for guests where Xen is in full control of 
>> the layout. This is not the case for direct-mapped domains. I don't 
>> think it is correct to try to re-use part of the layout.
> 
> Apologies for the slightly late reply; it took me a while to understand 
> the directmap story. Yes, I now agree with you: for those direct-mapped 
> domains we should not reuse the guest layout directly.
> 
>> If you want to use 1:1 dom0less with xenstore & co, then you should 
>> find a different place in memory for the magic pages (TBD how to find 
>> that area). 
> 
> Yes, and maybe we can use a strategy similar to find_unallocated_memory() 
> or find_domU_holes() to do that.
> 
>> You will still have the problem of the 1:1 allocation, but I think 
>> this could be solved by adding a flag to force a non-1:1 allocation.
> 
> After checking the code flow, a rough plan came to mind. I think 
> what we need to do is:
> 
> (1) Find a range of unused memory using a method similar to 
> find_unallocated_memory()/find_domU_holes()

AFAIK, the toolstack doesn't have any knowledge of the memory layout 
for dom0less domUs today. We would need to expose it first.

Then the region could either be statically allocated (i.e. the admin 
provides it in the DTB) or dynamically.

> 
> (2) Change the base address, i.e. GUEST_MAGIC_BASE in alloc_xs_page() in 
> init-dom0less.c, to point to the address from (1) if the domain uses 
> static mem or the 1:1 directmap. (I think this is a bit tricky though, 
> do you have any method in mind?)

AFAIK, the toolstack doesn't know whether a domain is direct mapped or 
using static mem.

I think this ties in with what I just wrote above. For dom0less domUs, we 
probably want Xen to prepare a memory layout (similar to the e820) that 
will then be retrieved by the toolstack.

> 
> (3) Use a flag or a combination of existing flags (CDF_staticmem + 
> CDF_directmap) in populate_physmap() to force the allocation of these 
> magic pages via alloc_domheap_pages() - i.e. the "else" branch at 
> the bottom

If I am not mistaken, CDF_* are per-domain. So we would want to use MEMF_*.

>>> +
>>>   #endif /* __ASM_X86_MM_H__ */
>>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>>> index b3b05c2ec0..ab4bad79e2 100644
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -219,7 +219,7 @@ static void populate_physmap(struct memop_args *a)
>>>           }
>>>           else
>>>           {
>>> -            if ( is_domain_direct_mapped(d) )
>>> +            if ( is_domain_direct_mapped(d) && !is_magic_gpfn(gpfn) )
>>
>> This path will also be reached by dom0. Effectively, this will prevent 
>> dom0 from allocating the memory 1:1 for the magic GPFN (which is guest 
>> specific).
> 
> I think above proposal will solve this issue.
> 
>> Also, why are you only checking the first GFN? What if the caller pass 
>> an overlapped region?
> 
> I am a bit confused. My understanding is that at this point we are 
> handling one page at a time.

We are handling one "extent" at a time. This could cover one or multiple 
pages (see extent_order).

Cheers,

-- 
Julien Grall

