* .img on nfs, relative on ram, consuming mass ram
@ 2010-09-15 9:39 TOURNIER Frédéric
2010-09-16 9:09 ` Andre Przywara
2011-09-19 12:12 ` Rickard Lundin
0 siblings, 2 replies; 12+ messages in thread
From: TOURNIER Frédéric @ 2010-09-15 9:39 UTC (permalink / raw)
To: kvm
Hi!
Here's my config:
Version: qemu-kvm-0.12.5, qemu-kvm-0.12.4
Hosts: AMD 64X2, Phenom and Core 2 Duo
OS: Slackware 64 13.0
Kernel: 2.6.35.4 and many previous versions
I use a PXE server to boot semi-diskless stations (swap partitions and some local stuff).
This server also serves a read-only NFS folder with plenty of .img files on it.
When a client connects, a relative (overlay) image is created in /tmp, which is a tmpfs, so it lives in RAM.
Here is what I run on my 2G stations:
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 /tmp/relimg.img
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 /dev/shm/relimg.img
I tried both, always with the same result: RAM is consumed quickly and massive swapping occurs.
On a 4G system, I see kvm using more than 1024 MB, maybe 1200.
And every time I launch a program inside the VM, the host's free RAM (not counting cache) shrinks, which is weird, because that memory should have been reserved.
So on a 2G system, swapping kicks in very fast and the machine slows down a lot.
And on a totally diskless system, this quickly leads to a freeze.
I have no problem if I use a relative image on disk:
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -drive file=/mnt/hd/sda/sda1/tmp/relimg.img,cache=none
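For reference, a relative image of the kind described above is typically created with qemu-img, pointing its backing file at the read-only master on NFS. A minimal sketch (all paths are illustrative, not from the original setup):

```shell
# Sketch: create a copy-on-write overlay whose backing file is the
# read-only master image served over NFS (paths are examples).
MASTER=/mnt/nfs/images/original.img   # read-only NFS export
OVERLAY=/tmp/relimg.img               # tmpfs-backed overlay

qemu-img create -f qcow2 -b "$MASTER" "$OVERLAY"

# Only sectors the guest rewrites land in the overlay; reads of
# untouched sectors are served from the NFS master.
qemu-img info "$OVERLAY"
```

This is why the overlay starts tiny but grows with every rewritten sector, which is the crux of the problem discussed in this thread.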
* Re: .img on nfs, relative on ram, consuming mass ram
From: Andre Przywara @ 2010-09-16 9:09 UTC (permalink / raw)
To: TOURNIER Frédéric; +Cc: kvm
TOURNIER Frédéric wrote:
> Hi!
> Here's my config:
>
> Version: qemu-kvm-0.12.5, qemu-kvm-0.12.4
> Hosts: AMD 64X2, Phenom and Core 2 Duo
> OS: Slackware 64 13.0
> Kernel: 2.6.35.4 and many previous versions
>
> I use a PXE server to boot semi-diskless stations (swap partitions and some local stuff).
> This server also serves a read-only NFS folder with plenty of .img files on it.
> When a client connects, a relative (overlay) image is created in /tmp, which is a tmpfs, so it lives in RAM.
>
> Here is what I run on my 2G stations:
> qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 /tmp/relimg.img
> qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 /dev/shm/relimg.img
>
> I tried both, always with the same result: RAM is consumed quickly and massive swapping occurs.
Which is only natural, as tmpfs is promising to never swap. So it will
take precedence over other RAM (that's why it is limited to half of the
memory by default). As soon as the guest has (re)written more disk
sectors than your free RAM can hold, the system will start to swap out
your guest RAM (and other host applications).
So in general you should avoid putting relative disk images on tmpfs if
your host memory is limited. As a workaround you could try to further
limit the tmpfs maximum size (mount -t tmpfs -o size=512M none /dev/shm),
but this could lead to data loss in your guest if the filesystem can no
longer back the written sectors.
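As a concrete illustration of that workaround (the size value is only an example, and remounting requires root):

```shell
# Sketch: cap /dev/shm so tmpfs cannot crowd out guest RAM (needs root).
mount -o remount,size=512M /dev/shm

# Verify the new ceiling:
df -h /dev/shm

# Or make it permanent via /etc/fstab:
# tmpfs  /dev/shm  tmpfs  defaults,size=512M  0 0
```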
> On a 4G system, I see kvm using more than 1024 MB, maybe 1200.
> And every time I launch a program inside the VM, the host's free RAM (not counting cache) shrinks, which is weird, because that memory should have been reserved.
KVM uses on-demand paging like other applications, so it will not
reserve memory for your guest (unless you use hugetlbfs via -mem-path):
$ kvm -cdrom ttylinux_ser.iso -nographic -m 3072M
$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6015 user 20 0 3205m 128m 3020 S 2 2.2 0:04.94 kvm
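The same effect can be checked without qemu by comparing a process's virtual size (everything mapped, including a guest's full -m allocation) with its resident size (only pages actually touched). A rough sketch using the Linux /proc interface, with the current shell standing in for the qemu process:

```shell
# Sketch: virtual vs. resident size of a process via /proc (Linux).
# VmSize counts all mappings; VmRSS only pages that have been touched,
# which is why a freshly started guest uses far less host RAM than -m.
pid=$$   # the current shell stands in for the qemu process
vsz_kb=$(awk '/^VmSize:/ {print $2}' "/proc/$pid/status")
rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
echo "virtual: ${vsz_kb} kB, resident: ${rss_kb} kB"
# Resident never exceeds virtual; pages are only backed on first touch.
[ "$rss_kb" -le "$vsz_kb" ] && echo "RSS <= VSZ, as expected"
```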
Regards,
Andre.
>
> So on a 2G system, swapping kicks in very fast and the machine slows down a lot.
> And on a totally diskless system, this quickly leads to a freeze.
>
> I have no problem if I use a relative image on disk:
> qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -drive file=/mnt/hd/sda/sda1/tmp/relimg.img,cache=none
--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12
* Re: .img on nfs, relative on ram, consuming mass ram
From: TOURNIER Frédéric @ 2010-09-16 12:01 UTC (permalink / raw)
To: Andre Przywara; +Cc: kvm
Ok, thanks for taking the time.
I'll dig into your answers.
So since I run relative.img on diskless systems with original.img on NFS, what best practices/tips can I use?
Is ramfs more suitable than tmpfs?
Fred.
* Re: .img on nfs, relative on ram, consuming mass ram
From: Andre Przywara @ 2010-09-16 12:03 UTC (permalink / raw)
To: TOURNIER Frédéric; +Cc: kvm
TOURNIER Frédéric wrote:
> Ok, thanks for taking time.
> I'll dig into your answers.
>
> So as i run relative.img on diskless systems with original.img on nfs, what are the best practice/tips i can use ?
I think it is "-snapshot" you are looking for.
This will put the backing store into "normal" RAM, and you can later
commit it to the original image if needed. See the qemu manpage for more
details. In a nutshell, you just specify the original image and add
-snapshot to the command line.
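Under that suggestion, the invocation would look roughly like this (the image path is illustrative); note that committing back only works if the original image is writable, which a read-only NFS export is not:

```shell
# Sketch: boot from the master image with a throwaway overlay (-snapshot).
qemu-system-x86_64 -m 1024 -snapshot /mnt/nfs/images/original.img

# While the guest runs, changes can be flushed back to the original
# image from the qemu monitor (only if the original is writable):
#   (qemu) commit all
```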
Regards,
Andre.
>
> Is ramfs more suitable than tmpfs?
>
> Fred.
--
Andre Przywara
AMD-OSRC (Dresden)
Tel: x29712
* Re: .img on nfs, relative on ram, consuming mass ram
From: Stefan Hajnoczi @ 2010-09-16 13:19 UTC (permalink / raw)
To: Andre Przywara; +Cc: TOURNIER Frédéric, kvm
2010/9/16 Andre Przywara <andre.przywara@amd.com>:
> TOURNIER Frédéric wrote:
>>
>> Ok, thanks for taking time.
>> I'll dig into your answers.
>>
>> So as i run relative.img on diskless systems with original.img on nfs,
>> what are the best practice/tips i can use ?
>
> I think it is "-snapshot" you are looking for.
> This will put the backing store into "normal" RAM, and you can later commit
> it to the original image if needed. See the qemu manpage for more details.
> In a nutshell you just specify the original image and add -snapshot to the
> command line.
-snapshot creates a temporary qcow2 image in /tmp whose backing file
is your original image. I'm not sure what you mean by "This will put
the backing store into "normal" RAM"?
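The temporary file is unlinked right after qemu opens it, which is why it never shows up in a directory listing even though it can still consume space in /tmp. The underlying create-open-unlink pattern can be demonstrated with plain shell tools (the fd number and path are arbitrary):

```shell
# Sketch: the create-open-unlink pattern behind -snapshot's temp image.
tmp=$(mktemp /tmp/relimg.XXXXXX)
exec 3<>"$tmp"             # keep a read/write fd on the file
rm -f "$tmp"               # the name vanishes from /tmp...
echo "overlay data" >&3    # ...but the open fd still accepts writes
fdinfo=$(ls -l "/proc/$$/fd/3")
echo "$fdinfo"             # the link target is marked "(deleted)"
```

The space is only released when the last fd is closed, so on a tmpfs-backed /tmp such a file still consumes RAM while being invisible in the directory.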
Stefan
* Re: .img on nfs, relative on ram, consuming mass ram
From: Andre Przywara @ 2010-09-16 13:48 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: TOURNIER Frédéric, kvm
Stefan Hajnoczi wrote:
> 2010/9/16 Andre Przywara <andre.przywara@amd.com>:
>> TOURNIER Frédéric wrote:
>>> Ok, thanks for taking time.
>>> I'll dig into your answers.
>>>
>>> So as i run relative.img on diskless systems with original.img on nfs,
>>> what are the best practice/tips i can use ?
>> I thinks it is "-snapshot" you are looking for.
>> This will put the backing store into "normal" RAM, and you can later commit
>> it to the original image if needed. See the qemu manpage for more details.
>> In a nutshell you just specify the original image and add -snapshot to the
>> command line.
>
> -snapshot creates a temporary qcow2 image in /tmp whose backing file
> is your original image. I'm not sure what you mean by "This will put
> the backing store into "normal" RAM"?
Stefan, you are right. I never looked into the code, and because the file
in /tmp is deleted just after creation, there was no sign of it.
For some reason I thought the buffer would just be allocated in
memory. Sorry, my mistake, and thanks for pointing this out.
So Fred, unfortunately this does not solve your problem. I guess you are
running into a general problem: if the guest actually changes more of the
disk than the host's RAM can back, you lose.
One solution could be to make (at least parts of) the disk
read-only (a write-protected /usr partition works quite well).
If you are sure that writes are not frequent, you could consider
putting the overlay file on the remote storage (NFS) as well. Although
this is rather slow, it shouldn't matter if there aren't many writes,
and the local page cache should catch most of the accesses (while still
being nice to other RAM users).
Regards,
Andre.
>
> Stefan
--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12
* Re: .img on nfs, relative on ram, consuming mass ram
From: David S. Ahern @ 2010-09-16 14:49 UTC (permalink / raw)
To: Andre Przywara; +Cc: TOURNIER Frédéric, kvm
On 09/16/10 03:09, Andre Przywara wrote:
> Which is only natural, as tmpfs is promising to never swap. So it will
Pages in tmpfs can swap. That's the difference between ramfs and tmpfs.
From Documentation/filesystems/tmpfs.txt:
"tmpfs puts everything into the kernel internal caches and grows and
shrinks to accommodate the files it contains and is able to swap
unneeded pages out to swap space."
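The practical difference shows up in how the two are mounted (a sketch; both commands require root, and the size value is an example):

```shell
# Sketch: ramfs vs. tmpfs (both need root to mount).
# tmpfs honours a size= limit and its pages may be swapped out:
mount -t tmpfs -o size=512M tmpfs /mnt/tmp

# ramfs ignores size limits and its pages are never swapped; a
# runaway writer can consume all RAM until the OOM killer steps in:
mount -t ramfs ramfs /mnt/ram
```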
David
* Re: .img on nfs, relative on ram, consuming mass ram
From: TOURNIER Frédéric @ 2010-09-16 15:59 UTC (permalink / raw)
To: Andre Przywara; +Cc: Stefan Hajnoczi, kvm
I can't do this because I need performance.
I'm currently doing some tests; I will post soon.
My config's map:

 NFS & PXE server          qemu-kvm host
  ------------              --------------
 | img.img   |_____________| relimg.img  |
 | read-only |     net     |             |
  ------------              --------------

I'll keep in touch, and thanks for your time.
Fred.
* Re: .img on nfs, relative on ram, consuming mass ram
From: TOURNIER Frédéric @ 2010-09-20 13:30 UTC (permalink / raw)
To: Andre Przywara; +Cc: Stefan Hajnoczi, kvm
Here are my benchmarks, done over two days, so the dates are odd and the results are quite approximate.
What surprises me is Part 2, where swapping occurred.
In Parts 3 and 4, the RAM is eaten up even though the VM has only just booted.
------------------------------------
Part 0
End of boot :
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 500836 1556004 0 2244 359504
-/+ buffers/cache: 139088 1917752
Swap: 3903784 0 3903784
------------------------------------
Part 1
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/mnt/hd/sda/sda1/tmp/relqlio.img,cache=none
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 1656280 400560 0 34884 378332
-/+ buffers/cache: 1243064 813776
Swap: 3903784 0 3903784
bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
58946 -rw-r--r-- 1 ftournier info 60424192 2010-09-16 17:49 /mnt/hd/sda/sda1/tmp/relqlio.img
650M download inside the vm
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 1677648 379192 0 33860 397716
-/+ buffers/cache: 1246072 810768
Swap: 3903784 0 3903784
bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
914564 -rw-r--r-- 1 ftournier info 935723008 2010-09-20 14:07 /mnt/hd/sda/sda1/tmp/relqlio.img
------------------------------------
Part 2
reboot
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/mnt/hd/sda/sda1/tmp/relqlio.img
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 2040172 16668 0 32952 758948
-/+ buffers/cache: 1248272 808568
Swap: 3903784 0 3903784
bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
60739 -rw-r--r-- 1 ftournier info 62259200 2010-09-16 17:57 /mnt/hd/sda/sda1/tmp/relqlio.img
650M download inside the vm
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 2040540 16300 0 34412 765208
-/+ buffers/cache: 1240920 815920
Swap: 3903784 8160 3895624
bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
842430 -rw-r--r-- 1 ftournier info 861929472 2010-09-20 14:20 /mnt/hd/sda/sda1/tmp/relqlio.img
------------------------------------
Part 3
reboot
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/tmp/relqlio.img
note : /tmp is a tmpfs filesystem
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 2009688 47152 0 248 766328
-/+ buffers/cache: 1243112 813728
Swap: 3903784 0 3903784
bash-3.1$ ls -lsa /tmp/relqlio.img
59848 -rw-r--r-- 1 ftournier info 61407232 2010-09-16 18:04 /tmp/relqlio.img
650M download inside the vm
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 2041404 15436 0 128 921276
-/+ buffers/cache: 1120000 936840
Swap: 3903784 248804 3654980
bash-3.1$ ls -lsa /tmp/relqlio.img
885448 -rw-r--r-- 1 ftournier info 906821632 2010-09-20 14:40 /tmp/relqlio.img
------------------------------------
Part 4
reboot
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/dev/shm/relqlio.img
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 2009980 46860 0 172 767328
-/+ buffers/cache: 1242480 814360
Swap: 3903784 0 3903784
bash-3.1$ ls -lsa /dev/shm/relqlio.img
58496 -rw-r--r-- 1 ftournier info 59899904 2010-09-16 18:11 /dev/shm/relqlio.img
650M download inside the vm
bash-3.1$ free
total used free shared buffers cached
Mem: 2056840 2041576 15264 0 92 938976
-/+ buffers/cache: 1102508 954332
Swap: 3903784 266232 3637552
bash-3.1$ ls -lsa /dev/shm/relqlio.img
1016912 -rw-r--r-- 1 ftournier info 1039400960 2010-09-20 15:15 /dev/shm/relqlio.img
* Re: .img on nfs, relative on ram, consuming mass ram
From: Andre Przywara @ 2010-09-20 14:00 UTC (permalink / raw)
To: TOURNIER Frédéric; +Cc: Stefan Hajnoczi, kvm
TOURNIER Frédéric wrote:
> Here are my benchmarks, done over two days, so the dates are odd and the results are quite approximate.
> What surprises me is Part 2, where swapping occurred.
I don't know exactly why, but I have occasionally seen small amounts of
swap used without real memory pressure, so I'd consider this normal.
> In Parts 3 and 4, the RAM is eaten up even though the VM has only just booted.
Where is the RAM eaten up? I always see about 800 MB free, even a bit
more after the download.
You have to look at the second line of the free output, not the first
one. As you can see, the OS still has enough RAM to afford a large cache,
so it uses it. Unused RAM is just a waste of resources (it is always
there, and there is no reason not to use it). If 'cached' contains a lot
of clean pages, the OS can simply reclaim them should an application
request more memory. If you want proof of this, try:
# echo 3 > /proc/sys/vm/drop_caches
This should free the cache and give you a high "real" free value back.
Have you tried cache=none with the tmpfs scenario? That should save you
some of the host's cached memory (note the difference between Part 1 and
Part 2 in that respect), maybe at the expense of heavier memory use by
the guest. Your choice here; it depends on the actual memory utilization
in the guest.
As I said before, it is not a very good idea to use such a setup (with
the relative image on tmpfs) if you are doing real disk I/O, especially
large writes. AFAIK QCOW[2] does not really shrink, it only grows, so
you will run out of memory at some point.
But if you can restrict the amount of written data, this may work.
Regards,
Andre.
P.S. Sorry for the confusion about tmpfs vs. ramfs in my last week's mail.
--
Andre Przywara
AMD-OSRC (Dresden)
Tel: x29712
* Re: .img on nfs, relative on ram, consuming mass ram
From: TOURNIER Frédéric @ 2010-09-20 15:34 UTC (permalink / raw)
To: Andre Przywara; +Cc: Stefan Hajnoczi, kvm
On Mon, 20 Sep 2010 16:00:53 +0200
Andre Przywara <andre.przywara@amd.com> wrote:
> TOURNIER Frédéric wrote:
> > Heres my benches, done in two days so dates are weird and results are very approximative.
> > What surprises me are in the Part 2, swap occured.
> I don't know exactly why, but I have seen a small usage of swap
> occasionally without real memory pressure. So I'd consider this normal.
Hmm, I don't like strange "normal" things... Anyway, my current setup is number 1,
and my target is 3 or 4 ^^.
> > In 3 and 4, the ram is eaten up, even if the vm just booted.
> Where is the RAM eaten up? I see always always 800 MB free, some more
> even after the d/l:
> You have to look at the second line of the free column, no the first
> one. As you can see the OS has still enough RAM to afford a large cache,
> so it uses this. Unused RAM is just a waste of resources (because it is
> always there and there is no reason to not use it). If the 'cached'
> contains a lot of clean pages, the OS can simply claim them should an
> application request more memory. If you want a proof of this, try:
> # echo 3 > /proc/sys/vm/drop_caches
> This should free the cache and give you a high "real" free value back.
Ok, I'll take a closer look. But I see no reason why so much cache is used.
I think there's some kind of "duplicated pages" situation between NFS and qemu-kvm.
Maybe there's an idea there for a future enhancement, some kind of "-nfs-image" switch?
> Have you tried cache=none with the tmpfs scenario?
Oh yes, I tried and tried. Unfortunately it is impossible: "Invalid argument".
Same for shm and ramfs.
> That should save you
> some of the host's cached memory (note the difference between part 1 and
> part2 in that respect), maybe at the expense of the guest's memory used
> more heavily. Your choice here, that depends on the actual memory
> utilization in the guest.
>
> As I said before, it is not a very good idea to use such a setup (with
> the relative image on tmpfs) if you are doing actual disk I/O,
> especially large writes. AFAIK QCOW[2] does not really shrink, it only
> grows, so you will end up with out-of-memory at some point.
> But if you can restrict the amount of written data, this may work.
Well, I'm aware this is a "dangerous" setting, but I really tried to make it work because it's so comfortable.
If any of you readers have some spare time and two machines (2G of RAM each is a good start), try this setup:
read from NFS, write locally, to RAM if possible. The performance of the guest is awesome, especially if the original .img is pre-cached in the server's RAM.
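For readers wanting to reproduce it, the setup described in this thread amounts to something like the following sketch (the NFS server name and all paths are examples, not from the original configuration):

```shell
# Sketch of the full setup: master image read over NFS, overlay in RAM.
mount -o ro nfsserver:/images /mnt/nfs            # read-only master store
qemu-img create -f qcow2 -b /mnt/nfs/original.img /dev/shm/relimg.img
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet \
    -localtime -soundhw es1370 /dev/shm/relimg.img

# Caveat from this thread: every sector the guest writes stays in
# /dev/shm (tmpfs), so heavy writes eventually push the host into swap.
```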
>
> Regards,
> Andre.
>
> P.S. Sorry for the confusion about tmpfs vs. ramfs in my last week's mail.
No problem.
Thank you for taking the time.
And being answered by someone@amd.com is a plus ^^
Fred.
> > Swap: 3903784 0 3903784
> >
> > bash-3.1$ ls -lsa /tmp/relqlio.img
> > 59848 -rw-r--r-- 1 ftournier info 61407232 2010-09-16 18:04 /tmp/relqlio.img
> >
> > 650M download inside the vm
> >
> > bash-3.1$ free
> > total used free shared buffers cached
> > Mem: 2056840 2041404 15436 0 128 921276
> > -/+ buffers/cache: 1120000 936840
> > Swap: 3903784 248804 3654980
> >
> > bash-3.1$ ls -lsa /tmp/relqlio.img
> > 885448 -rw-r--r-- 1 ftournier info 906821632 2010-09-20 14:40 /tmp/relqlio.img
> >
> > ------------------------------------
> >
> > Part 4
> >
> > reboot
> >
> > qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/dev/shm/relqlio.img
> >
> > bash-3.1$ free
> > total used free shared buffers cached
> > Mem: 2056840 2009980 46860 0 172 767328
> > -/+ buffers/cache: 1242480 814360
> > Swap: 3903784 0 3903784
> >
> > bash-3.1$ ls -lsa /dev/shm/relqlio.img
> > 58496 -rw-r--r-- 1 ftournier info 59899904 2010-09-16 18:11 /dev/shm/relqlio.img
> >
> > 650M download inside the vm
> >
> > bash-3.1$ free
> > total used free shared buffers cached
> > Mem: 2056840 2041576 15264 0 92 938976
> > -/+ buffers/cache: 1102508 954332
> > Swap: 3903784 266232 3637552
> >
> > bash-3.1$ ls -lsa /dev/shm/relqlio.img
> > 1016912 -rw-r--r-- 1 ftournier info 1039400960 2010-09-20 15:15 /dev/shm/relqlio.img
> >
>
>
> --
> Andre Przywara
> AMD-OSRC (Dresden)
> Tel: x29712
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: .img on nfs, relative on ram, consuming mass ram
2010-09-15 9:39 .img on nfs, relative on ram, consuming mass ram TOURNIER Frédéric
2010-09-16 9:09 ` Andre Przywara
@ 2011-09-19 12:12 ` Rickard Lundin
1 sibling, 0 replies; 12+ messages in thread
From: Rickard Lundin @ 2011-09-19 12:12 UTC (permalink / raw)
To: kvm
>
> Hi!
> Here's my config:
>
> Version: qemu-kvm-0.12.5, qemu-kvm-0.12.4
> Hosts: AMD 64X2, Phenom and Core 2 Duo
> OS: Slackware 64 13.0
> Kernel: 2.6.35.4 and many previous versions
>
> I use a PXE server to boot semi-diskless (swap partitions and some local stuff) stations.
> This server also serves a read-only NFS folder with plenty of .img files on it.
> When a client connects, a relative image is created in /tmp, which is a tmpfs, so hosted in RAM.
>
> And here I go on my 2G stations:
> qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 /tmp/relimg.img
> qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 /dev/shm/relimg.img
>
> I tried both. Always the same result: the RAM is consumed quickly, and massive swapping occurs.
> On a 4G system, I see kvm uses more than 1024, maybe 1200.
> And every time I launch a program inside the VM, the amount of the host's free RAM (not cached) diminishes, which is weird, because it should have been reserved.
>
> So on a 2G system, swapping occurs very fast and the machine slows down a lot.
> And on a totally diskless system, this quickly leads to a freeze.
>
> I have no problem if I use a relative image on disk:
> qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -drive file=/mnt/hd/sda/sda1/tmp/relimg.img,cache=none
>
I am also looking for a "speed solution", so I put the whole image in /dev/shm.
The host machine is Ubuntu 11.10 beta, and the guest is Ubuntu 11.04.
It works, and I get 500 MB/s using virtio... I was hoping for a lot more, since it's an i920 host machine with 24 GB.
I will try disabling the cache; I guess that will improve the speed.
My question is: what would the optimal filesystem for the guest be? I'm using ext4, but it's a bit silly, since the host's KVM image is in RAM.
I've got 12 GB of /dev/shm, since the host machine has 24 GB. My KVM image is 6 GB.
/Rickard
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2011-09-19 12:15 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-09-15 9:39 .img on nfs, relative on ram, consuming mass ram TOURNIER Frédéric
2010-09-16 9:09 ` Andre Przywara
2010-09-16 12:01 ` TOURNIER Frédéric
2010-09-16 12:03 ` Andre Przywara
2010-09-16 13:19 ` Stefan Hajnoczi
2010-09-16 13:48 ` Andre Przywara
2010-09-16 15:59 ` TOURNIER Frédéric
2010-09-20 13:30 ` TOURNIER Frédéric
2010-09-20 14:00 ` Andre Przywara
2010-09-20 15:34 ` TOURNIER Frédéric
2010-09-16 14:49 ` David S. Ahern
2011-09-19 12:12 ` Rickard Lundin