* [Qemu-devel] QEMU savevm RAM page offsets
@ 2013-08-13 13:30 Juerg Haefliger
  2013-08-13 16:03 ` Andreas Färber
  0 siblings, 1 reply; 9+ messages in thread
From: Juerg Haefliger @ 2013-08-13 13:30 UTC (permalink / raw)
  To: qemu-devel

Hi,

I'm writing/extending a little tool (courtesy of Andrew @pikewerks)
that dumps the RAM pages from a savevm file to a raw memory dump file
so that it can be analysed using tools that require a raw dump as
input.

I can successfully locate and extract the pages and write them out to a
file. This works for smaller VMs, but when the memory size of a VM
approaches 3.5 GB, things start to break, i.e., the analysis tool
(volatility in this case) trips over the file. I believe this is
because of PCI devices that are memory-mapped below the 4 GB memory
mark, which my tool doesn't account for at the moment. In other words,
my tool puts all the pages in consecutive order without leaving
'holes' for the memory-mapped devices. Question: is the information
about where the holes are and what the 'real' page offsets are present
in the savevm file, and if not, how could I gather it?

Any help is greatly appreciated.

Thanks
...Juerg

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 13:30 [Qemu-devel] QEMU savevm RAM page offsets Juerg Haefliger
@ 2013-08-13 16:03 ` Andreas Färber
  2013-08-13 16:51   ` Laszlo Ersek
  0 siblings, 1 reply; 9+ messages in thread
From: Andreas Färber @ 2013-08-13 16:03 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: qemu-devel

Hi,

Am 13.08.2013 15:30, schrieb Juerg Haefliger:
> I'm writing/extending a little tool (courtesy of Andrew @pikewerks)
> that dumps the RAM pages from a savevm file to a raw memory dump file
> so that it can be analysed using tools that require a raw dump as
> input.

Can't you just use QEMU's dump-guest-memory API? Either directly, or
after loadvm'ing it.

Regards,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 16:03 ` Andreas Färber
@ 2013-08-13 16:51   ` Laszlo Ersek
  2013-08-13 16:58     ` Laszlo Ersek
  0 siblings, 1 reply; 9+ messages in thread
From: Laszlo Ersek @ 2013-08-13 16:51 UTC (permalink / raw)
  To: Andreas Färber, Juerg Haefliger; +Cc: qemu-devel

On 08/13/13 18:03, Andreas Färber wrote:
> Hi,
> 
> Am 13.08.2013 15:30, schrieb Juerg Haefliger:
>> I'm writing/extending a little tool (courtesy of Andrew @pikewerks)
>> that dumps the RAM pages from a savevm file to a raw memory dump file
>> so that it can be analysed using tools that require a raw dump as
>> input.
> 
> Can't you just use QEMU's dump-guest-memory API? Either directly, or
> after loadvm'ing it.

That used to suffer from the exact same problem Juerg described, but I
fixed it for 1.6.

See the series at

  http://thread.gmane.org/gmane.comp.emulators.qemu/226715

(See patch 4/4 for a diagram that has been called "nice" in a private
email.)

Commit hashes:

1  2cac260 dump: clamp guest-provided mapping lengths to ramblock sizes
2  5ee163e dump: introduce GuestPhysBlockList
3  c5d7f60 dump: populate guest_phys_blocks
4  56c4bfb dump: rebase from host-private RAMBlock offsets to
           guest-physical addresses

(Red Hat BZ: <https://bugzilla.redhat.com/show_bug.cgi?id=981582>.)

In short, you have to use guest-physical addresses (hwaddr) instead of
qemu-internal RAMBlock offsets (ram_addr_t), because the vmcore analysis
tool (e.g. "crash") works with guest-phys addresses as well.

See also the HACKING file, section "2.1. Scalars".

So yes, use the dump-guest-memory QMP/HMP command.
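[Editor's sketch: one way to drive dump-guest-memory from a script over
QMP. The socket path and output path below are examples, not anything
from this thread; the command's "paging"/"protocol" arguments are as
documented for the QMP command.]

```python
import json
import socket

def build_dump_cmd(path):
    # QMP 'dump-guest-memory' request. paging=False dumps the whole
    # guest-physical address space without walking guest page tables.
    return {"execute": "dump-guest-memory",
            "arguments": {"paging": False, "protocol": "file:" + path}}

def qmp_dump(sock_path, dump_path):
    # Assumes QEMU was started with e.g. -qmp unix:/tmp/qmp.sock,server
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    f = s.makefile("rw")
    f.readline()                                             # greeting
    f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    f.flush()
    f.readline()                                             # {"return": {}}
    f.write(json.dumps(build_dump_cmd(dump_path)) + "\n")
    f.flush()
    return json.loads(f.readline())
```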

Laszlo


* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 16:51   ` Laszlo Ersek
@ 2013-08-13 16:58     ` Laszlo Ersek
  2013-08-13 17:52       ` Juerg Haefliger
  0 siblings, 1 reply; 9+ messages in thread
From: Laszlo Ersek @ 2013-08-13 16:58 UTC (permalink / raw)
  To: Andreas Färber, Juerg Haefliger; +Cc: qemu-devel

(apologies for responding to myself)

On 08/13/13 18:51, Laszlo Ersek wrote:
> On 08/13/13 18:03, Andreas Färber wrote:
>> Hi,
>>
>> Am 13.08.2013 15:30, schrieb Juerg Haefliger:
>>> I'm writing/extending a little tool (courtesy of Andrew @pikewerks)
>>> that dumps the RAM pages from a savevm file to a raw memory dump file
>>> so that it can be analysed using tools that require a raw dump as
>>> input.
>>
>> Can't you just use QEMU's dump-guest-memory API? Either directly, or
>> after loadvm'ing it.
> 
> That used to suffer from the exact same problem Juerg described, but I
> fixed it for 1.6.

... note that the savevm file format doesn't need fixing; it is meant
for internal consumption only (i.e. guest RAM saved in ram_addr_t space,
then loaded back into ram_addr_t space). It is not meant for external
tools that expect guest-phys addresses (= hwaddr).

Laszlo


* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 16:58     ` Laszlo Ersek
@ 2013-08-13 17:52       ` Juerg Haefliger
  2013-08-13 18:07         ` Paolo Bonzini
  0 siblings, 1 reply; 9+ messages in thread
From: Juerg Haefliger @ 2013-08-13 17:52 UTC (permalink / raw)
  To: Laszlo Ersek; +Cc: Andreas Färber, qemu-devel

On Tue, Aug 13, 2013 at 6:58 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> (apologies for responding to myself)
>
> On 08/13/13 18:51, Laszlo Ersek wrote:
>> On 08/13/13 18:03, Andreas Färber wrote:
>>> Hi,
>>>
>>> Am 13.08.2013 15:30, schrieb Juerg Haefliger:
>>>> I'm writing/extending a little tool (courtesy of Andrew @pikewerks)
>>>> that dumps the RAM pages from a savevm file to a raw memory dump file
>>>> so that it can be analysed using tools that require a raw dump as
>>>> input.
>>>
>>> Can't you just use QEMU's dump-guest-memory API? Either directly, or
>>> after loadvm'ing it.
>>
>> That used to suffer from the exact same problem Juerg described, but I
>> fixed it for 1.6.
>
> ... note that the savevm file format doesn't need fixing; it is meant
> for internal consumption only (ie. guest RAM saved in ram_addr_t space,
> then loaded back into ram_addr_t space). It is not meant for external
> tools that expect guest-phys addresses (= hwaddr).

I didn't mean to imply that the savevm format is broken and needed
fixing. I was just wondering if the data is there and I simply hadn't
found it. Upgrading QEMU is not an option at the moment since these
are tightly controlled production machines. Is it possible to loadvm
a savevm file from 1.0 with 1.6 and then use dump-guest-memory?

...Juerg


> Laszlo
>


* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 17:52       ` Juerg Haefliger
@ 2013-08-13 18:07         ` Paolo Bonzini
  2013-08-13 19:06           ` Juerg Haefliger
  0 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2013-08-13 18:07 UTC (permalink / raw)
  To: Juerg Haefliger
  Cc: Michael Tokarev, Laszlo Ersek, Andreas Färber, qemu-devel

Il 13/08/2013 19:52, Juerg Haefliger ha scritto:
> I didn't mean to imply that the savevm format is broken and needed
> fixing. I was just wondering if the data is there and I simply hadn't
> found it. Upgrading QEMU is not an option at the moment since these
> are tightly controlled production machines. Is it possible to loadvm
> a savevm file from 1.0 with 1.6 and then use dump-guest-memory?

Yes, it should, but one important change since 1.0 has been the merger
of qemu-kvm and QEMU.  What distribution are you using?  I know Fedora
supports qemu-kvm-1.0 to QEMU-1.6 migration compatibility, but I don't
know about others.

Michael Tokarev is the maintainer of the Debian package, so he may be
able to answer.

Alternatively, you can modify your utility to simply add 512 MB to the
addresses above 3.5 GB.
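
[Editor's sketch: Paolo's suggestion as a tiny translation helper. The
constants assume the default pc machine's 3.5 GB low-memory split
discussed in this thread; the function name is made up.]

```python
BELOW_4G = 0xE0000000   # 3.5 GB: top of low guest RAM on the pc machine
HOLE     = 0x20000000   # 512 MB PCI hole below the 4 GB mark

def ram_addr_to_guest_phys(ram_addr):
    # savevm stores pages at qemu-internal ram_addr_t offsets, where
    # guest RAM is contiguous; guest-physical addresses skip the PCI
    # hole, so everything at or above 3.5 GB shifts up by 512 MB.
    if ram_addr < BELOW_4G:
        return ram_addr
    return ram_addr + HOLE
```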

Paolo


* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 18:07         ` Paolo Bonzini
@ 2013-08-13 19:06           ` Juerg Haefliger
  2013-08-13 19:25             ` Laszlo Ersek
  0 siblings, 1 reply; 9+ messages in thread
From: Juerg Haefliger @ 2013-08-13 19:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Michael Tokarev, Laszlo Ersek, Andreas Färber, qemu-devel

On Tue, Aug 13, 2013 at 8:07 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Il 13/08/2013 19:52, Juerg Haefliger ha scritto:
>> I didn't mean to imply that the savevm format is broken and needed
>> fixing. I was just wondering if the data is there and I simply hadn't
>> found it. Upgrading QEMU is not an option at the moment since these
>> are tightly controlled production machines. Is it possible to loadvm
>> a savevm file from 1.0 with 1.6 and then use dump-guest-memory?
>
> Yes, it should, but one important thing since 1.0 has been the merger of
> qemu-kvm and QEMU.  What distribution are you using?  I know Fedora
> allows qemu-kvm-1.0 to QEMU-1.6 compatibility, but I don't know about
> others.

Ubuntu 12.04


> Michael Tokarev is the maintainer of the Debian package, so he may be
> able to answer.
>
> Alternatively, you can modify your utility to simply add 512 MB to the
> addresses above 3.5 GB.

Is it really as simple as that? Isn't the OS (particularly Windows)
possibly doing some crazy remapping that needs to be taken into
account? MemInfo on a VM with 4 GB running Windows 2008 shows the
following:

C:\Users\Administrator\Desktop\MemInfo\amd64>MemInfo.exe -r
MemInfo v2.10 - Show PFN database information
Copyright (C) 2007-2009 Alex Ionescu
www.alex-ionescu.com

Physical Memory Range: 0000000000001000 to 000000000009B000 (154 pages, 616 KB)
Physical Memory Range: 0000000000100000 to 00000000DFFFD000 (917245 pages, 3668980 KB)
Physical Memory Range: 0000000100000000 to 0000000120000000 (131072 pages, 524288 KB)
MmHighestPhysicalPage: 1179648


...Juerg

> Paolo


* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 19:06           ` Juerg Haefliger
@ 2013-08-13 19:25             ` Laszlo Ersek
  2013-08-16  6:12               ` Juerg Haefliger
  0 siblings, 1 reply; 9+ messages in thread
From: Laszlo Ersek @ 2013-08-13 19:25 UTC (permalink / raw)
  To: Juerg Haefliger
  Cc: Paolo Bonzini, Michael Tokarev, Andreas Färber, qemu-devel

On 08/13/13 21:06, Juerg Haefliger wrote:
> On Tue, Aug 13, 2013 at 8:07 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> Il 13/08/2013 19:52, Juerg Haefliger ha scritto:
>>> I didn't mean to imply that the savevm format is broken and needed
>>> fixing. I was just wondering if the data is there and I simply hadn't
>>> found it. Upgrading QEMU is not an option at the moment since these
>>> are tightly controlled production machines. Is it possible to loadvm
>>> a savevm file from 1.0 with 1.6 and then use dump-guest-memory?
>>
>> Yes, it should, but one important thing since 1.0 has been the merger of
>> qemu-kvm and QEMU.  What distribution are you using?  I know Fedora
>> allows qemu-kvm-1.0 to QEMU-1.6 compatibility, but I don't know about
>> others.
> 
> Ubuntu 12.04
> 
> 
>> Michael Tokarev is the maintainer of the Debian package, so he may be
>> able to answer.
>>
>> Alternatively, you can modify your utility to simply add 512 MB to the
>> addresses above 3.5 GB.
> 
> Is it really as simple as that? Isn't the OS (particularly Windows)
> possibly doing some crazy remapping that needs to be taken into
> account? meminfo on a VM with 4GB running Windows 2008 shows the
> following:
> 
> C:\Users\Administrator\Desktop\MemInfo\amd64>MemInfo.exe -r
> MemInfo v2.10 - Show PFN database information
> Copyright (C) 2007-2009 Alex Ionescu
> www.alex-ionescu.com
> 
> Physical Memory Range: 0000000000001000 to 000000000009B000 (154 pages, 616 KB)
> Physical Memory Range: 0000000000100000 to 00000000DFFFD000 (917245
> pages, 3668980 KB)
> Physical Memory Range: 0000000100000000 to 0000000120000000 (131072
> pages, 524288 KB)
> MmHighestPhysicalPage: 1179648

That should be fine, I think. The 384 KB hole between 640 KB and 1 MB is
actually contiguously backed by RAMBlock; it is just not (necessarily)
presented as conventional memory to the guest. You can treat the
[0, 0xe0000000) left-closed, right-open interval as contiguous.

Again, check out the diagram in 4/4 that I linked before. Compare it to
pc_init1() in "hw/pc_piix.c", at tag "v1.0" in
<git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git>. Look for the
variable "below_4g_mem_size".
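
[Editor's sketch: the split computed there, paraphrased rather than
quoted from the v1.0 source.]

```python
def split_ram(ram_size):
    # Mirrors the below_4g_mem_size / above_4g_mem_size split in
    # pc_init1(): at most 3.5 GB of guest RAM is mapped below 4 GB,
    # and any remainder is mapped above the 4 GB boundary.
    below_4g = min(ram_size, 0xE0000000)
    return below_4g, ram_size - below_4g
```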

Laszlo


* Re: [Qemu-devel] QEMU savevm RAM page offsets
  2013-08-13 19:25             ` Laszlo Ersek
@ 2013-08-16  6:12               ` Juerg Haefliger
  0 siblings, 0 replies; 9+ messages in thread
From: Juerg Haefliger @ 2013-08-16  6:12 UTC (permalink / raw)
  To: Laszlo Ersek
  Cc: Paolo Bonzini, Michael Tokarev, Andreas Färber, qemu-devel

>>>> I didn't mean to imply that the savevm format is broken and needed
>>>> fixing. I was just wondering if the data is there and I simply hadn't
>>>> found it. Upgrading QEMU is not an option at the moment since these
>>>> are tightly controlled production machines. Is it possible to loadvm
>>>> a savevm file from 1.0 with 1.6 and then use dump-guest-memory?
>>>
>>> Yes, it should, but one important thing since 1.0 has been the merger of
>>> qemu-kvm and QEMU.  What distribution are you using?  I know Fedora
>>> allows qemu-kvm-1.0 to QEMU-1.6 compatibility, but I don't know about
>>> others.
>>
>> Ubuntu 12.04
>>
>>
>>> Michael Tokarev is the maintainer of the Debian package, so he may be
>>> able to answer.
>>>
>>> Alternatively, you can modify your utility to simply add 512 MB to the
>>> addresses above 3.5 GB.
>>
>> Is it really as simple as that? Isn't the OS (particularly Windows)
>> possibly doing some crazy remapping that needs to be taken into
>> account? meminfo on a VM with 4GB running Windows 2008 shows the
>> following:
>>
>> C:\Users\Administrator\Desktop\MemInfo\amd64>MemInfo.exe -r
>> MemInfo v2.10 - Show PFN database information
>> Copyright (C) 2007-2009 Alex Ionescu
>> www.alex-ionescu.com
>>
>> Physical Memory Range: 0000000000001000 to 000000000009B000 (154 pages, 616 KB)
>> Physical Memory Range: 0000000000100000 to 00000000DFFFD000 (917245
>> pages, 3668980 KB)
>> Physical Memory Range: 0000000100000000 to 0000000120000000 (131072
>> pages, 524288 KB)
>> MmHighestPhysicalPage: 1179648
>
> That should be fine, I think. The 384 KB hole between 640KB and 1MB is
> actually contiguously backed by RAMBlock, it is just not (necessarily)
> presented as conventional memory to the guest. You can treat the [0,
> 0x0e0000000) left-closed, right-open interval as contiguous.

Indeed simply adding a 512 MB hole between 3.5 GB and 4 GB did the
trick. Thanks a lot.
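
[Editor's sketch of the writer-side fix, assuming the default pc
layout discussed above; the savevm parsing that produces the
(ram_addr, data) pairs is not shown.]

```python
import os

PAGE     = 4096
BELOW_4G = 0xE0000000   # 3.5 GB
HOLE     = 0x20000000   # 512 MB PCI hole

def write_page(fd, ram_addr, data):
    # Place each page at its guest-physical offset in the raw dump.
    # Seeking past the PCI hole leaves a sparse 512 MB gap that reads
    # back as zeroes, matching the guest's physical address space.
    phys = ram_addr if ram_addr < BELOW_4G else ram_addr + HOLE
    os.pwrite(fd, data, phys)
```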


> Again, check out the diagram in 4/4 that I linked before. Compare it to
> pc_init1() in "hw/pc_piix.c", at tag "v1.0" in
> <git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git>. Look for the
> variable "below_4g_mem_size".

Nice diagram and very helpful.

Thanks
...Juerg


> Laszlo


Thread overview: 9+ messages
2013-08-13 13:30 [Qemu-devel] QEMU savevm RAM page offsets Juerg Haefliger
2013-08-13 16:03 ` Andreas Färber
2013-08-13 16:51   ` Laszlo Ersek
2013-08-13 16:58     ` Laszlo Ersek
2013-08-13 17:52       ` Juerg Haefliger
2013-08-13 18:07         ` Paolo Bonzini
2013-08-13 19:06           ` Juerg Haefliger
2013-08-13 19:25             ` Laszlo Ersek
2013-08-16  6:12               ` Juerg Haefliger
