xen-devel.lists.xenproject.org archive mirror
* Questions about PVH memory layout
@ 2020-06-28  6:58 joshua_peter
  2020-06-29  8:57 ` Roger Pau Monné
  0 siblings, 1 reply; 3+ messages in thread
From: joshua_peter @ 2020-06-28  6:58 UTC (permalink / raw)
  To: xen-devel

Hello everyone,

I hope this is the right forum for these kinds of questions (the website
states no "technical support queries"; I'm not sure if this qualifies).
If not, apologies for the noise; please just direct me elsewhere.

Anyway, I'm currently trying to get into how Xen works in detail, so
on the one hand I've been reading a lot of code, but I also dumped the
P2M table of my PVH guest to get a feel for how things are laid out in
memory. There is the usual stuff, such as lots of RAM, the APIC mapped
at 0xFEE00000, and the ACPI tables at 0xFC000000 onwards. But two
things stuck out to me which, for the life of me, I couldn't figure
out from just reading the code. The first is that there are a few
pages at the end of the 32-bit address space (from 0xFEFF8000 to
0xFEFFF000) which, according to the E820 map, are simply declared as
"reserved". The other is that the first 512 pages at the beginning of
the address space are mapped linearly, which usually leads to them
being mapped as a single 2 MB page. But there is this one page at
0x00001000 that sticks out completely. To make things more concrete:
in my PVH guest the page at 0x00000000 maps to 0x13C200000, 0x00002000
maps to 0x13C202000, 0x00003000 maps to 0x13C203000, and so on. But
0x00001000 maps to 0xB8DBD000, which seems very odd to me (at least
from simply looking at the addresses). My initial guess was that this
is some bootloader-related stuff, but Google didn't turn up any info
related to that memory area, and most of the x86/PC boot stuff seems
to happen below the 0x1000 mark.
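To illustrate, here's roughly the check I ran on the dump, sketched in
Python (the dict-based dump format and helper are just for illustration;
gfn/mfn values are the addresses above shifted right by 12):

```python
# Illustrative sketch, not a Xen tool: given a parsed P2M dump as a
# dict mapping guest frame number (gfn) -> machine frame number (mfn),
# report pages that break an otherwise linear gfn -> mfn run.
def find_linearity_breaks(p2m, start_gfn=0, count=512):
    """Return gfns whose mfn deviates from the linear run anchored at start_gfn."""
    base = p2m[start_gfn] - start_gfn  # expected mfn offset for a linear run
    return [gfn for gfn in range(start_gfn, start_gfn + count)
            if gfn in p2m and p2m[gfn] != base + gfn]

# The addresses from above, as 4 KiB frame numbers:
p2m = {0x0: 0x13C200, 0x1: 0xB8DBD, 0x2: 0x13C202, 0x3: 0x13C203}
print([hex(g) for g in find_linearity_breaks(p2m, 0, 4)])  # ['0x1']
```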

Would someone be so kind to tell me what those two things are? Many
thanks in advance.

(Btw. I'm running Xen 4.13.1 on Intel x64, booting my PVH guest with
PyGRUB, if it's relevant.)

Best regards,
Peter



* Re: Questions about PVH memory layout
  2020-06-28  6:58 Questions about PVH memory layout joshua_peter
@ 2020-06-29  8:57 ` Roger Pau Monné
  2020-06-29  9:57   ` Aw: " joshua_peter
  0 siblings, 1 reply; 3+ messages in thread
From: Roger Pau Monné @ 2020-06-29  8:57 UTC (permalink / raw)
  To: joshua_peter; +Cc: xen-devel

On Sun, Jun 28, 2020 at 08:58:14AM +0200, joshua_peter@web.de wrote:
> Hello everyone,
> 
> I hope this is the right forum for these kinds of questions (the website
> states no "technical support queries"; I'm not sure if this qualifies).
> If not, apologies for the noise; please just direct me elsewhere.
> 
> Anyway, I'm currently trying to get into how Xen works in detail, so
> on the one hand I've been reading a lot of code, but I also dumped the
> P2M table of my PVH guest to get a feel for how things are laid out in
> memory. There is the usual stuff, such as lots of RAM, the APIC mapped
> at 0xFEE00000, and the ACPI tables at 0xFC000000 onwards. But two
> things stuck out to me which, for the life of me, I couldn't figure
> out from just reading the code. The first is that there are a few
> pages at the end of the 32-bit address space (from 0xFEFF8000 to
> 0xFEFFF000) which, according to the E820 map, are simply declared as
> "reserved".

Those are the HVM special pages, which are used for various Xen-specific
things, like the shared memory ring for the PV console.
They are set up in tools/libxc/xc_dom_x86.c (see SPECIALPAGE_*).
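As a quick sanity check on the range you quoted (plain arithmetic,
nothing Xen-specific), that reserved region spans exactly eight 4 KiB
frames, i.e. a small fixed block of special pages:

```python
# Count the 4 KiB frames in the reserved E820 range from the original
# message, taking both endpoints as page-aligned frame addresses
# (0xFEFFF000 is the start of the last page, so the range is inclusive).
PAGE_SIZE = 0x1000
first, last = 0xFEFF8000, 0xFEFFF000
num_pages = (last - first) // PAGE_SIZE + 1
print(num_pages)  # 8
```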

> The other is that the first 512 pages at the beginning of the address
> space are mapped linearly, which usually leads to them being mapped
> as a single 2 MB page. But there is this one page at 0x00001000 that
> sticks out completely. To make things more concrete: in my PVH guest
> the page at 0x00000000 maps to 0x13C200000, 0x00002000 maps to
> 0x13C202000, 0x00003000 maps to 0x13C203000, and so on. But
> 0x00001000 maps to 0xB8DBD000, which seems very odd to me (at least
> from simply looking at the addresses).

Are you booting some OS on the guest before dumping the memory map?
Keep in mind guests have the ability to modify the physmap, either by
mapping Xen shared areas (like the shared info page) or just by using
ballooning in order to poke holes into it (which can be populated
later). It's either that or some kind of bug/misoptimization in
meminit_hvm (also part of tools/libxc/xc_dom_x86.c).

Can you check whether this 'weird' mapping at 0x1000 is also present
if you boot the guest with 'xl create -p foo.cfg'? (That will prevent
the guest from running, so that you can get the memory map before the
guest has modified it in any way.)
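If you want to compare the two states, something like this would show
which entries the guest itself changed (an illustrative sketch; the
gfn -> mfn dicts are assumed to come from however you dump the P2M,
one taken paused and one after the OS has booted):

```python
# Diff two parsed P2M dumps (dicts mapping gfn -> mfn) and report
# every gfn whose mapping differs, including entries that only exist
# in one of the two dumps (reported with None on the missing side).
def p2m_diff(before, after):
    """Return {gfn: (old_mfn_or_None, new_mfn_or_None)} for changed entries."""
    changed = {}
    for gfn in before.keys() | after.keys():
        old, new = before.get(gfn), after.get(gfn)
        if old != new:
            changed[gfn] = (old, new)
    return changed

paused  = {0x0: 0x13C200, 0x1: 0x13C201}
running = {0x0: 0x13C200, 0x1: 0xB8DBD}
print(p2m_diff(paused, running))  # reports that gfn 0x1 changed
```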

> My initial guess was that this is some bootloader-related stuff, but
> Google didn't turn up any info related to that memory area, and most
> of the x86/PC boot stuff seems to happen below the 0x1000 mark.

If you are booting with pygrub there's no bootloader at all, so
whatever is happening is done either by the toolstack or by the OS you
are loading.

Roger.



* Aw: Re: Questions about PVH memory layout
  2020-06-29  8:57 ` Roger Pau Monné
@ 2020-06-29  9:57   ` joshua_peter
  0 siblings, 0 replies; 3+ messages in thread
From: joshua_peter @ 2020-06-29  9:57 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel

Hi Roger,

thank you very much for your help. Further replies are inline.

> Sent: Monday, 29 June 2020 at 10:57
> From: "Roger Pau Monné" <roger.pau@citrix.com>
> To: joshua_peter@web.de
> Cc: xen-devel@lists.xenproject.org
> Subject: Re: Questions about PVH memory layout
>
> > The other is that the first 512 pages at the beginning of the
> > address space are mapped linearly, which usually leads to them
> > being mapped as a single 2 MB page. But there is this one page at
> > 0x00001000 that sticks out completely. To make things more
> > concrete: in my PVH guest the page at 0x00000000 maps to
> > 0x13C200000, 0x00002000 maps to 0x13C202000, 0x00003000 maps to
> > 0x13C203000, and so on. But 0x00001000 maps to 0xB8DBD000, which
> > seems very odd to me (at least from simply looking at the
> > addresses).
> 
> Are you booting some OS on the guest before dumping the memory map?
> Keep in mind guests have the ability to modify the physmap, either by
> mapping Xen shared areas (like the shared info page) or just by using
> ballooning in order to poke holes into it (which can be populated
> later). It's either that or some kind of bug/misoptimization in
> meminit_hvm (also part of tools/libxc/xc_dom_x86.c).

Yes, my bad. I'm booting into Arch Linux. That detail must have been
lost while I was editing my e-mail.

> Can you check whether this 'weird' mapping at 0x1000 is also present
> if you boot the guest with 'xl create -p foo.cfg'? (That will prevent
> the guest from running, so that you can get the memory map before the
> guest has modified it in any way.)

Yeah, starting with the "-p" flag does get rid of this 'weird' mapping.

Thank you again. This cleared up a bunch of things.

Best regards,
Peter


