* KVM performance
@ 2009-04-03 11:32 BRAUN, Stefanie
2009-04-06 11:45 ` Avi Kivity
2009-04-06 12:13 ` Hauke Hoffmann
0 siblings, 2 replies; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-03 11:32 UTC (permalink / raw)
To: kvm
Hello,
since I want to switch from Xen to KVM, I have run some performance tests
to see whether KVM performs as well as Xen. But tests with a VMU that receives
a streamed video, adds a small logo to it, and streams it to a client
have shown that Xen performs much better than KVM.
Under Xen, the vlc process (VideoLAN client, used to receive, process,
and send the video) inside the VMU has a CPU load of 33.8%, whereas under KVM
the vlc process has a CPU load of 99.9%.
I'm not sure why; does anybody know of settings that would improve
the KVM performance?
Thank you.
Regards, Stefanie.
Used hardware and settings:
In the tests I used the same host hardware for Xen and KVM:
- Dual-core AMD, 2.2 GHz, 8 GB RAM
- Tested OS for the KVM host: Fedora 10, 2.6.27.5-117.fc10.x86_64 with
  kvm version 74 (release 10.fc10);
  also tested in January: a self-compiled kernel with kvm-83
- KVM guest settings: OS: Fedora 9, 2.6.25-14.fc9.x86_64 (i386 also tested)
  RAM: 256 MB (same for the Xen VMU)
  CPU: 1 core at 2.2 GHz (same for the Xen VMU)
  Tested NIC models: rtl8139, e1000, virtio
Tested scenario: the VMU receives a streamed video, adds a logo (watermark)
to the video stream, and then streams it to a client
Results:
Xen:
Host CPU load (virt-manager): 23%
VMU CPU load (virt-manager): 18%
VLC process within the VMU (top): 33.8%
KVM:
(no virt-manager CPU load figure, as I started the VMU with the kvm command directly)
Host CPU load: 52%
qemu-kvm process (top): 77-100%
VLC process within the VMU (top): 80-99.9%
KVM command to start the VMU:
/usr/bin/qemu-kvm -boot c -hda /images/vmu01.raw -m 256 -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10,model=virtio -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -vnc 127.0.0.1:1 -k de --daemonize
________________________________
Alcatel-Lucent Deutschland AG
Bell Labs Germany
Service Infrastructure, ZFZ-SI
Stefanie Braun
Phone: +49.711.821-34865
Fax: +49.711.821-32453
Postal address:
Alcatel-Lucent Deutschland AG
Lorenzstrasse 10
D-70435 STUTTGART
Mail: stefanie.braun@alcatel-lucent.de
Alcatel-Lucent Deutschland AG
Sitz der Gesellschaft: Stuttgart - Amtsgericht Stuttgart HRB 4026
Vorsitzender des Aufsichtsrats: Michael Oppenhoff Vorstand: Alf Henryk
Wulf (Vors.), Dr. Rainer Fechner
________________________________
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: KVM performance
2009-04-03 11:32 KVM performance BRAUN, Stefanie
@ 2009-04-06 11:45 ` Avi Kivity
[not found] ` <133D9897FB9C5E4E9DF2779DC91E947C51834A@SLFSNX.rcs.alcatel-research.de>
2009-04-06 12:13 ` Hauke Hoffmann
1 sibling, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2009-04-06 11:45 UTC (permalink / raw)
To: BRAUN, Stefanie; +Cc: kvm
BRAUN, Stefanie wrote:
> Hello,
>
> since I want to switch from Xen to KVM, I have run some performance tests
> to see whether KVM performs as well as Xen. But tests with a VMU that receives
> a streamed video, adds a small logo to it, and streams it to a
> client
> have shown that Xen performs much better than KVM.
> Under Xen, the vlc process (VideoLAN client, used to receive, process,
> and send the video)
> inside the VMU has a CPU load of 33.8%, whereas under KVM
> the vlc process has a CPU load of 99.9%.
> I'm not sure why; does anybody know of settings that would improve
> the KVM performance?
>
Is this a TCP test?
Can you test receive and transmit separately?
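One way to separate the receive and transmit paths, as asked above, is a synthetic network benchmark. The sketch below uses iperf, which is not mentioned in the thread itself; the peer address 192.0.2.10 and the bitrate are placeholders:

```shell
# Hypothetical sketch: isolate guest network receive/transmit with iperf.
# Run the server side on the peer machine first:
#   iperf -s -u          # UDP server (use plain "iperf -s" for TCP)

# Guest transmit test: stream UDP from the guest to the peer at ~6.5 Mbit/s,
# roughly the bandwidth of the video stream used in these tests.
iperf -c 192.0.2.10 -u -b 6500K -t 30

# Guest receive test: swap the roles, i.e. run "iperf -s -u" inside the
# guest and send from the peer with the same -b rate.
```

Comparing the two directions separately would show whether the overhead sits on the receive or the transmit path.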
--
error compiling committee.c: too many arguments to function
* Re: KVM performance
2009-04-03 11:32 KVM performance BRAUN, Stefanie
2009-04-06 11:45 ` Avi Kivity
@ 2009-04-06 12:13 ` Hauke Hoffmann
2009-04-06 16:30 ` AW: " BRAUN, Stefanie
2009-04-07 12:58 ` BRAUN, Stefanie
1 sibling, 2 replies; 14+ messages in thread
From: Hauke Hoffmann @ 2009-04-06 12:13 UTC (permalink / raw)
To: kvm; +Cc: BRAUN, Stefanie
On Friday 03 April 2009 13:32:50 you wrote:
> Hello,
>
> since I want to switch from Xen to KVM, I have run some performance tests
> to see whether KVM performs as well as Xen. But tests with a VMU that receives
> a streamed video, adds a small logo to it, and streams it to a
> client
> have shown that Xen performs much better than KVM.
> Under Xen, the vlc process (VideoLAN client, used to receive, process,
> and send the video)
> inside the VMU has a CPU load of 33.8%, whereas under KVM
> the vlc process has a CPU load of 99.9%.
> I'm not sure why; does anybody know of settings that would improve
> the KVM performance?
>
> Thank you.
> Regards, Stefanie.
>
>
> Used hardware and settings:
> In the tests I used the same host hardware for Xen and KVM:
> - Dual-core AMD, 2.2 GHz, 8 GB RAM
> - Tested OS for the KVM host: Fedora 10, 2.6.27.5-117.fc10.x86_64 with
>   kvm version 74 (release 10.fc10);
>   also tested in January: a self-compiled kernel with kvm-83
>
> - KVM guest settings: OS: Fedora 9, 2.6.25-14.fc9.x86_64 (i386 also tested)
>   RAM: 256 MB (same for the Xen VMU)
>   CPU: 1 core at 2.2 GHz (same for the Xen VMU)
>   Tested NIC models: rtl8139, e1000, virtio
>
> Tested scenario: the VMU receives a streamed video, adds a logo (watermark)
> to the video stream, and then streams it to a client
>
> Results:
>
> Xen:
> Host CPU load (virt-manager): 23%
> VMU CPU load (virt-manager): 18%
> VLC process within the VMU (top): 33.8%
>
> KVM:
> (no virt-manager CPU load figure, as I started the VMU with the kvm command directly)
> Host CPU load: 52%
> qemu-kvm process (top): 77-100%
> VLC process within the VMU (top): 80-99.9%
>
> KVM command to start the VMU:
> /usr/bin/qemu-kvm -boot c -hda /images/vmu01.raw -m 256 -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10,model=virtio -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -vnc 127.0.0.1:1 -k de --daemonize
Hi Stefanie,
does vlc perform any operations on disk (e.g. caching, logging, ...)?
If it does, you can use virtio for the disk as well.
Just change
-hda /images/vmu01.raw
to
-drive file=/images/vmu01.raw,if=virtio,boot=on
Regards
Hauke
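Applied to the startup command from the original mail, the suggested change would look like this (a sketch only; every option except the disk is kept exactly as given):

```shell
/usr/bin/qemu-kvm -boot c \
  -drive file=/images/vmu01.raw,if=virtio,boot=on \
  -m 256 \
  -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10,model=virtio \
  -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown \
  -vnc 127.0.0.1:1 -k de --daemonize
```

Note that the guest kernel and initrd also need virtio block support for the disk to be visible at boot.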
--
hauke hoffmann service and electronic systems
Moristeig 60, D-23556 Lübeck
Phone: +49 (0) 451 8896462
Fax: +49 (0) 451 8896461
Mobile: +49 (0) 170 7580491
E-Mail: office@hauke-hoffmann.net
PGP public key: www.hauke-hoffmann.net/static/pgp/kontakt.asc
* AW: KVM performance
2009-04-06 12:13 ` Hauke Hoffmann
@ 2009-04-06 16:30 ` BRAUN, Stefanie
2009-04-07 12:58 ` BRAUN, Stefanie
1 sibling, 0 replies; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-06 16:30 UTC (permalink / raw)
To: kvm; +Cc: Hauke Hoffmann
-----Original Message-----
From: Hauke Hoffmann [mailto:kontakt@hauke-hoffmann.net]
Sent: Monday, April 6, 2009 14:13
To: kvm@vger.kernel.org
Cc: BRAUN, Stefanie
Subject: Re: KVM performance
On Friday 03 April 2009 13:32:50 you wrote:
> Hello,
>
> since I want to switch from Xen to KVM, I have run some performance tests
> to see whether KVM performs as well as Xen. But tests with a VMU that
> receives a streamed video, adds a small logo to it, and streams
> it to a client have shown that Xen performs much better than KVM.
> Under Xen, the vlc process (VideoLAN client, used to receive, process,
> and send the video)
> inside the VMU has a CPU load of 33.8%, whereas under KVM the vlc process
> has a CPU load of 99.9%.
> I'm not sure why; does anybody know of settings that would improve the KVM
> performance?
>
> Thank you.
> Regards, Stefanie.
>
>
> Used hardware and settings:
> In the tests I used the same host hardware for Xen and KVM:
> - Dual-core AMD, 2.2 GHz, 8 GB RAM
> - Tested OS for the KVM host: Fedora 10, 2.6.27.5-117.fc10.x86_64 with
>   kvm version 74 (release 10.fc10);
>   also tested in January: a self-compiled kernel with kvm-83
>
> - KVM guest settings: OS: Fedora 9, 2.6.25-14.fc9.x86_64 (i386 also tested)
>   RAM: 256 MB (same for the Xen VMU)
>   CPU: 1 core at 2.2 GHz (same for the Xen VMU)
>   Tested NIC models: rtl8139, e1000, virtio
>
> Tested scenario: the VMU receives a streamed video, adds a logo (watermark)
> to the video stream, and then streams it to a client
>
> Results:
>
> Xen:
> Host CPU load (virt-manager): 23%
> VMU CPU load (virt-manager): 18%
> VLC process within the VMU (top): 33.8%
>
> KVM:
> (no virt-manager CPU load figure, as I started the VMU with the kvm command directly)
> Host CPU load: 52%
> qemu-kvm process (top): 77-100%
> VLC process within the VMU (top): 80-99.9%
>
> KVM command to start the VMU:
> /usr/bin/qemu-kvm -boot c -hda /images/vmu01.raw -m 256 -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10,model=virtio -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -vnc 127.0.0.1:1 -k de --daemonize
Hi Stefanie,
does vlc perform any operations on disk (e.g. caching, logging, ...)?
If it does, you can use virtio for the disk as well.
Just change
-hda /images/vmu01.raw
to
-drive file=/images/vmu01.raw,if=virtio,boot=on
Regards
Hauke
Hi Hauke,
thanks for your reply.
vlc does not perform excessive operations on disk.
Even so, I've added the virtio disk option to the VMU setup.
But the qemu-kvm process on the host and the
vlc process within the VMU still consume up to 100%.
Regards,
Stefanie
* AW: KVM performance
2009-04-06 12:13 ` Hauke Hoffmann
2009-04-06 16:30 ` AW: " BRAUN, Stefanie
@ 2009-04-07 12:58 ` BRAUN, Stefanie
1 sibling, 0 replies; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-07 12:58 UTC (permalink / raw)
To: kvm; +Cc: Hauke Hoffmann
Hi,
I'm not sure anymore whether I really tested your suggested virtio disk setup yesterday,
because today the VMU does not start when using the virtio setup.
During boot, the volume group volgroup00 cannot be found.
At the moment I'm still investigating why this error occurs.
working: -drive file=/images/vmu01.raw,index=0,media=disk,boot=on
not working: -drive file=/images/vmu01.raw,index=0,media=disk,boot=on,if=virtio
not working: -drive file=/images/vmu01.raw,boot=on,if=virtio
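A common cause of a "volume group not found" failure when switching the boot disk to virtio (an assumption on my part; the thread does not confirm the root cause) is an initrd built without the virtio drivers, so the guest kernel cannot see the disk at boot. On a Fedora guest of this era it could be rebuilt inside the guest roughly as follows; the initrd path and flags are assumptions:

```shell
# Sketch: rebuild the guest initrd so it includes the virtio block drivers.
# Run inside the guest while it still boots with the working IDE setup.
KVER=$(uname -r)
CMD="mkinitrd --with=virtio_pci --with=virtio_blk -f /boot/initrd-${KVER}.img ${KVER}"
echo "$CMD"    # inspect the command before running it
# Uncomment to actually rebuild the initrd:
# $CMD
```

After rebuilding, the guest's bootloader entry may also need its root device adjusted, depending on how the volume group is referenced.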
-----Original Message-----
From: Hauke Hoffmann [mailto:kontakt@hauke-hoffmann.net]
Sent: Monday, April 6, 2009 14:13
To: kvm@vger.kernel.org
Cc: BRAUN, Stefanie
Subject: Re: KVM performance
On Friday 03 April 2009 13:32:50 you wrote:
> Hello,
>
> since I want to switch from Xen to KVM, I have run some performance tests
> to see whether KVM performs as well as Xen. But tests with a VMU that
> receives a streamed video, adds a small logo to it, and streams
> it to a client have shown that Xen performs much better than KVM.
> Under Xen, the vlc process (VideoLAN client, used to receive, process,
> and send the video)
> inside the VMU has a CPU load of 33.8%, whereas under KVM the vlc process
> has a CPU load of 99.9%.
> I'm not sure why; does anybody know of settings that would improve the KVM
> performance?
>
> Thank you.
> Regards, Stefanie.
>
>
> Used hardware and settings:
> In the tests I used the same host hardware for Xen and KVM:
> - Dual-core AMD, 2.2 GHz, 8 GB RAM
> - Tested OS for the KVM host: Fedora 10, 2.6.27.5-117.fc10.x86_64 with
>   kvm version 74 (release 10.fc10);
>   also tested in January: a self-compiled kernel with kvm-83
>
> - KVM guest settings: OS: Fedora 9, 2.6.25-14.fc9.x86_64 (i386 also tested)
>   RAM: 256 MB (same for the Xen VMU)
>   CPU: 1 core at 2.2 GHz (same for the Xen VMU)
>   Tested NIC models: rtl8139, e1000, virtio
>
> Tested scenario: the VMU receives a streamed video, adds a logo (watermark)
> to the video stream, and then streams it to a client
>
> Results:
>
> Xen:
> Host CPU load (virt-manager): 23%
> VMU CPU load (virt-manager): 18%
> VLC process within the VMU (top): 33.8%
>
> KVM:
> (no virt-manager CPU load figure, as I started the VMU with the kvm command directly)
> Host CPU load: 52%
> qemu-kvm process (top): 77-100%
> VLC process within the VMU (top): 80-99.9%
>
> KVM command to start the VMU:
> /usr/bin/qemu-kvm -boot c -hda /images/vmu01.raw -m 256 -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10,model=virtio -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -vnc 127.0.0.1:1 -k de --daemonize
Hi Stefanie,
does vlc perform any operations on disk (e.g. caching, logging, ...)?
If it does, you can use virtio for the disk as well.
Just change
-hda /images/vmu01.raw
to
-drive file=/images/vmu01.raw,if=virtio,boot=on
Regards
Hauke
* AW: AW: KVM performance
[not found] ` <49DA2F54.8090109@redhat.com>
@ 2009-04-07 17:00 ` BRAUN, Stefanie
2009-04-07 17:34 ` Avi Kivity
0 siblings, 1 reply; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-07 17:00 UTC (permalink / raw)
To: kvm; +Cc: Avi Kivity
Hello,
I think I mixed up the values in my first email on this topic and actually provided the values without network virtio enabled.
So the values for a KVM VMU with virtio enabled are indeed a little better, but still not as good as Xen's.
At the moment I'm still working on getting the virtio disk VMU setup running, as I think it would be interesting to see how the performance values improve.
All of the following tests were executed using a VMU (RAM: 512 MB, 1 core at 2.2 GHz) and vlc (a video player that can, for example, stream, receive, and transcode videos).
VMU setup for the first set of performance values (without network virtio):
/usr/bin/qemu-kvm -boot c -hda /images/vmu01.raw -m 512 -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10 -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -net nic,vlan=1,macaddr=aa:bb:cc:dd:ee:11 -net tap,ifname=tap1,vlan=1,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -vnc 127.0.0.1:2 -k de --daemonize
VMU setup for the second set of performance values (with network virtio):
/usr/bin/qemu-kvm -boot c -hda /images/vmu01.raw -m 512 -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10,model=virtio -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -net nic,vlan=1,macaddr=aa:bb:cc:dd:ee:11,model=virtio -net tap,ifname=tap1,vlan=1,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown -vnc 127.0.0.1:2 -k de --daemonize
The first column of performance values shows the VMU without virtio networking, the second column with virtio networking.
1. Subtest: VLC reads video from local disk and streams it via udp to another pc
Host performance: 11% 11%
kvm process in host (top): 22% 22%
vlc process in vmu (top): 15% 7%
2. Subtest: Just receiving a video via udp (no display, as no X11 is installed on the VMU)
Host performance: 16% 10%
kvm process in host (top) : 30% 17%
vlc process in vmu (top) : 3% 3%
3. Subtest: Receiving a video via udp and saving it locally in a file
Host performance: 17% 11%
kvm process in host (top) : 38% 24%
vlc process in vmu (top) : 12% 11%
4. Subtest: Reading video locally, adding a logo to the video stream and then saving the video locally
Host performance: 50% 50%
kvm process in host (top) : 99% 99%
vlc process in vmu (top) : 99% 99%
5. Subtest: Receiving the video from pc 1 and at the same time streaming the received video to pc 2
Host performance: 23% 18%
kvm process in host (top) : 22% 35%
vlc process in vmu (top) : 48% 10%
6. The original test: receiving a streamed video, adding a logo, and then sending it to another PC
Host performance: 52% 50%
kvm process in host (top): 77-99% 60-99% (mostly 99% for both)
vlc process in vmu (top): 80-99% 50-99% (mostly 99% for both)
I have repeated almost all of the tests with Xen:
1. Subtest: VLC reads video from local disk and streams it via udp to another pc
Host performance (Domain-0 + vmu)(virt-manager): 4%
VMU (virt-manager) : 2%
vlc process in vmu (top) : 1%
3. Subtest: Receiving a video via udp and saving it locally in a file
Host performance (Domain-0 + vmu)(virt-manager): 7%
VMU (virt-manager) : 4%
vlc process in vmu (top) : 3%
4. Subtest: Reading video locally, adding a logo to the video stream and then saving the video locally
Host performance (Domain-0 + vmu)(virt-manager): 3-55%
VMU (virt-manager) : 0-50%
vlc process in vmu (top): 14-99% (varies a lot)
5. Subtest: Receiving the video from pc 1 and at the same time streaming the received video to pc 2
Host performance (Domain-0 + vmu)(virt-manager): 6%
VMU (virt-manager) : 3%
vlc process in vmu (top) : 1%
6. The original test: receiving a streamed video, adding a logo, and then sending it to another PC
Host performance (Domain-0 + vmu) (virt-manager): 23%
VMU (virt-manager): 18%
vlc process in vmu (top): 33.8%
-----Original Message-----
From: Avi Kivity [mailto:avi@redhat.com]
Sent: Monday, April 6, 2009 18:36
To: BRAUN, Stefanie
Subject: Re: AW: KVM performance
BRAUN, Stefanie wrote:
> Is this a TCP test?
>
> Can you test receive and transmit separately?
>
> Hello,
>
> It's a "transcoder" test, but without transcoding between video
> formats; the vmu just adds a logo (a watermark) to the video.
>
> At the same time the vmu performed several actions:
> - receiving a streamed video via udp
> - adding a logo to the video
> - sending the streamed video via udp
>
> But I think I can split the test up into the following subtests and
> provide further performance values:
> Subtest 1, receive: receiving the video from the network (udp) and saving it locally
> Subtest 2, transmit: reading the video from a local resource and sending it via the network
> Subtest 3, process: reading the video from a local resource, adding the logo to the video stream, and saving it locally again.
>
>
We have a known issue with udp transmits, you might be hitting that.
Please do separate your tests so we can see what the root cause is.
--
I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain.
* Re: AW: AW: KVM performance
2009-04-07 17:00 ` AW: AW: " BRAUN, Stefanie
@ 2009-04-07 17:34 ` Avi Kivity
2009-04-08 11:38 ` AW: " BRAUN, Stefanie
0 siblings, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2009-04-07 17:34 UTC (permalink / raw)
To: BRAUN, Stefanie; +Cc: kvm
BRAUN, Stefanie wrote:
> 1. Subtest: VLC reads video from local disk and streams it via udp to another pc
> Host performance: 11% 11%
> kvm process in host (top): 22% 22%
> vlc process in vmu (top): 15% 7%
>
>
While this isn't wonderful, it's not your major bottleneck now. What's
the bandwidth generated by the workload?
>
> 4. Subtest: Reading video locally, adding a logo to the video stream and then saving the video locally
> Host performance: 50% 50%
> kvm process in host (top) : 99% 99%
> vlc process in vmu (top) : 99% 99%
>
Now this is bad. Please provide the output of 'kvm_stat -1' while this
is running. Also, describe the guest: is it Linux? If so, i386 or
x86_64? And is CONFIG_HIGHMEM enabled?
UDP performance is a known issue now, and we are working on it. TCP is
much better due to segmentation offload.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* AW: AW: AW: KVM performance
2009-04-07 17:34 ` Avi Kivity
@ 2009-04-08 11:38 ` BRAUN, Stefanie
2009-04-09 15:34 ` BRAUN, Stefanie
0 siblings, 1 reply; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-08 11:38 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
[-- Attachment #1: Type: text/plain, Size: 1189 bytes --]
BRAUN, Stefanie wrote:
> 1. Subtest: VLC reads video from local disk and streams it via udp to
another pc
> Host performance: 11% 11%
> kvm process in host (top): 22% 22%
> vlc process in vmu (top): 15% 7%
>
>
While this isn't wonderful, it's not your major bottleneck now. What's
the bandwidth generated by the workload?
Generated bandwidth: 6500 kbit/s
>
> 4. Subtest: Reading video locally, adding a logo to the video stream
and then saving the video locally
> Host performance: 50% 50%
> kvm process in host (top) : 99% 99%
> vlc process in vmu (top) : 99% 99%
>
Now this is bad. Please provide the output of 'kvm_stat -1' while this
is running. Also, describe the guest: is it Linux? If so, i386 or
x86_64? And is CONFIG_HIGHMEM enabled?
Linux, Fedora 10, x86_64, (2.6.27.21-170.2.56.fc10.x86_64)
The config file does not contain a CONFIG_HIGHMEM parameter.
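For completeness, the presence of CONFIG_HIGHMEM can be checked from inside the guest roughly like this (a sketch; the /boot/config path is the Fedora convention, and on x86_64 the option legitimately does not exist):

```shell
# Look for CONFIG_HIGHMEM in the running kernel's build config.
CFG="/boot/config-$(uname -r)"
RES=$(grep -i 'CONFIG_HIGHMEM' "$CFG" 2>/dev/null || echo "CONFIG_HIGHMEM not set (expected on x86_64)")
echo "$RES"
```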
UDP performance is a known issue now, and we are working on it. TCP is
much better due to segmentation offload.
--
I have a truly marvellous patch that fixes the bug which this signature
is too narrow to contain.
[-- Attachment #2: vmu01_stat --]
[-- Type: application/octet-stream, Size: 28557 bytes --]
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 4031 36 0 0 36 0 2003 0 0 0 2027 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4014 37 0 0 37 0 1997 0 0 0 2016 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4039 41 0 0 41 0 2008 0 0 0 2031 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4031 34 0 0 34 0 2005 0 0 0 2026 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4122 36 0 0 118 0 2011 0 0 83 2023 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4724 35 0 0 626 0 2112 0 0 592 2014 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4019 36 0 0 36 0 1999 0 0 0 2020 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4042 40 0 0 40 1 2005 0 0 0 2034 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 2
0 4040 36 0 0 36 0 2010 0 0 0 2029 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4035 35 0 0 35 0 2008 0 0 0 2027 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4119 35 0 0 117 0 2009 0 0 83 2022 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4018 36 0 0 36 0 1999 0 0 0 2019 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4037 42 0 0 42 0 2005 0 0 0 2032 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4035 34 0 0 34 0 2005 0 0 0 2030 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4034 34 0 0 34 0 2008 0 0 0 2026 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4122 37 0 0 119 0 2009 0 0 83 2025 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4018 36 0 0 36 0 1999 0 0 0 2019 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4023 31 0 0 31 0 2003 0 0 0 2020 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4007 2 0 0 2 0 2005 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4053 15 0 0 37 2 2012 0 0 31 2004 0 0 0 0 0 0 2 2 0 0 0 1 2 0 0 0 5
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 4096 20 0 0 80 0 2015 0 0 78 1999 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4694 6 0 0 597 0 2108 0 0 592 1988 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4010 10 0 0 10 0 2003 0 0 0 2007 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4019 2 0 0 2 0 2012 0 0 0 2003 0 0 0 0 0 0 2 2 0 0 0 4 2 0 0 0 4
0 4010 2 0 0 2 0 2008 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4181 58 0 0 140 0 2029 0 0 135 2003 0 0 0 0 0 0 6 6 0 0 0 7 6 0 0 0 12
0 3993 4 0 0 4 0 1999 0 0 0 1994 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4051 11 0 0 11 0 2001 0 0 0 2050 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4010 4 0 0 4 0 2005 0 0 0 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 5262 39 0 0 1147 3 2055 0 0 1144 2028 0 0 0 0 0 0 3 3 0 0 0 0 3 0 0 0 6
0 4133 49 0 0 114 0 2016 0 0 109 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 3991 5 0 0 5 0 1999 0 0 0 1992 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4017 7 0 0 7 4 2003 0 0 0 2006 0 0 0 0 0 0 4 4 0 0 0 0 4 0 0 0 5
0 4011 4 0 0 4 1 2005 0 0 0 2004 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 2
0 4010 2 0 0 2 0 2008 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4099 23 0 0 87 0 2012 0 0 83 2003 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3
0 3993 4 0 0 4 0 1999 0 0 0 1994 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4706 10 0 0 601 0 2112 0 0 592 1996 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4015 4 0 0 4 0 2010 0 0 0 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4010 3 0 0 3 0 2008 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 4003 2 0 0 2 0 2003 0 0 0 2000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4094 25 0 0 89 0 2011 0 0 83 1997 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4007 7 0 0 7 0 2003 0 0 0 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4010 4 0 0 4 0 2005 0 0 0 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4010 3 0 0 3 0 2008 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4003 2 0 0 2 0 2003 0 0 0 2000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4091 23 0 0 88 0 2011 0 0 83 1993 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4008 7 0 0 7 0 2003 0 0 0 2005 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4009 5 0 0 5 0 2005 0 0 0 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4011 4 0 0 4 0 2008 0 0 0 2003 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4003 2 0 0 2 0 2003 0 0 0 2000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4094 33 0 0 89 0 2011 0 0 83 1998 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4005 8 0 0 8 0 2003 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4714 5 0 0 596 0 2119 0 0 592 1997 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5
0 4049 12 0 0 34 0 2012 0 0 31 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3
0 4003 2 0 0 2 0 2003 0 0 0 2000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 3993 4 0 0 4 0 1999 0 0 0 1994 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4108 27 0 0 91 0 2015 0 0 83 2007 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4009 4 0 0 4 0 2005 0 0 0 2003 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4013 3 0 0 3 0 2011 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 4003 2 0 0 2 0 2003 0 0 0 2000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 3999 4 0 0 4 0 2004 0 0 0 1995 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4114 30 0 0 94 0 2015 0 0 83 2013 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4009 4 0 0 4 0 2005 0 0 0 2004 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4070 37 0 0 54 0 2013 0 0 52 2003 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3
0 4007 2 0 0 2 0 2005 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 3993 5 0 0 5 0 1999 0 0 0 1994 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4109 28 0 0 92 0 2015 0 0 83 2008 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4020 5 0 0 5 0 2010 0 0 0 2004 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 4
0 4722 4 0 0 595 0 2117 0 0 592 1993 0 0 0 0 0 0 0 0 0 0 0 14 0 0 0 0 5
0 4020 3 0 0 3 0 2005 0 0 0 2001 0 0 0 0 0 0 0 0 0 0 0 13 0 0 0 0 1
0 4008 3 0 0 3 0 1999 0 0 0 1994 0 0 0 0 0 0 0 0 0 0 0 15 0 0 0 0 1
0 4149 58 0 0 117 0 2016 0 0 109 2006 0 0 0 0 0 0 0 0 0 0 0 13 0 0 0 0 4
0 4003 6 0 0 6 0 2001 0 0 0 1998 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 1
0 4011 2 0 0 2 0 2008 0 0 0 2002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 4618 109 0 0 113 0 2428 0 0 104 2029 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 28
0 4413 82 0 0 85 0 2289 0 0 72 2014 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 28
0 4112 28 0 0 92 0 2017 0 0 83 2010 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0 4001 5 0 0 5 0 2001 0 0 0 2000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 4069 12 0 0 29 0 2012 0 0 26 2003 0 0 0 1 0 0 0 0 0 0 0 28 0 0 0 0 3
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 4280 81 0 0 132 0 2019 0 0 130 2005 0 0 0 0 0 0 0 0 0 0 0 124 0 0 0 0 4
0 4287 42 0 0 110 0 2017 0 0 104 1999 0 0 0 0 0 0 0 0 0 0 0 166 0 0 0 0 5
0 4391 56 0 0 197 0 2030 0 0 187 2012 0 0 0 0 0 0 0 0 0 0 0 159 0 0 0 0 7
0 4226 31 0 0 82 0 2022 0 0 78 2006 0 0 0 0 0 0 0 0 0 0 0 119 0 0 0 0 5
Traceback (most recent call last):
File "./kvm_stat", line 124, in <module>
log(stats)
File "./kvm_stat", line 94, in log
time.sleep(1)
KeyboardInterrupt
* AW: AW: AW: KVM performance
2009-04-08 11:38 ` AW: " BRAUN, Stefanie
@ 2009-04-09 15:34 ` BRAUN, Stefanie
2009-04-11 16:19 ` Avi Kivity
0 siblings, 1 reply; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-09 15:34 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
Hello,
now I was able to start the guest VMU with the virtio disk, and some of the
tests with disk involvement even improved a bit.
But the test in which a logo is added to the video stream does not
improve; I don't know why the performance is so bad.
Subtest: Reading video locally, adding a logo to the video stream and
then saving the video locally
Host performance: 50%
kvm process in host (top) : 99%
vlc process in vmu (top) : 99%
The output of kvm_stat -1 during the subtest is the following:
efer_reload 0 0
exits 9913473 3994
fpu_reload 393453 4
halt_exits 768222 0
halt_wakeup 497108 0
host_state_reload 3266556 4
hypercalls 508554 0
insn_emulation 5405007 1999
insn_emulation_fail 0 0
invlpg 0 0
io_exits 1879454 0
irq_exits 568541 1995
irq_window 0 0
largepages 0 0
mmio_exits 145028 0
mmu_cache_miss 51455 0
mmu_flooded 40895 0
mmu_pde_zapped 34101 0
mmu_pte_updated 448719 0
mmu_pte_write 858494 0
mmu_recycled 0 0
mmu_shadow_zapped 50590 0
nmi_window 0 0
pf_fixed 494176 0
pf_guest 378754 0
remote_tlb_flush 0 0
request_irq 0 0
signal_exits 1 0
tlb_flush 1076949 1
The guest I started
has 512 MB RAM and 1 core (2.2 GHz) of the host, which is a dual-core machine.
Guest settings: RAM: 512 MB
CPU: 1 core at 2.2 GHz
I tested an i386 OS as well as x86_64, with the same performance results.
OS: Fedora 10 i386;
2.6.27.21-170.2.56.fc10.i686
CONFIG_HIGHMEM=y
CONFIG_HIGHMEM4G=y
OS: Fedora 10 x86_64;
2.6.27.21-170.2.56.fc10.x86_64
The config file does not contain the HIGHMEM
parameter.
Host settings: OS: Fedora 10 x86_64; 2.6.27.5-117.fc10.x86_64
KVM: Version 74 Release 10.fc10
Best regards,
Steffi
* Re: AW: AW: AW: KVM performance
2009-04-09 15:34 ` BRAUN, Stefanie
@ 2009-04-11 16:19 ` Avi Kivity
2009-04-14 8:26 ` AW: " BRAUN, Stefanie
0 siblings, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2009-04-11 16:19 UTC (permalink / raw)
To: BRAUN, Stefanie; +Cc: kvm
BRAUN, Stefanie wrote:
> Hello,
>
> now I was able to start the guest vmu with disk virtio, and some of the
> tests with disk involvement even improved a bit.
> But the test in which a logo is added to the video stream does not
> improve. I don't know why the performance is so bad?
>
> Subtest: Reading video locally, adding a logo to the video stream and
> then saving the video locally
> Host performance: 50%
> kvm process in host (top) : 99%
> vlc process in vmu (top) : 99%
>
>
> The output of kvm_stat -1 during the subtest is the following:
>
> efer_reload 0 0
> exits 9913473 3994
>
This indicates that KVM is running in guest mode all of the time and is
therefore quite efficient. Perhaps the test uses SSE instructions which
KVM doesn't expose? Try adding -cpu core2duo to the command line.
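Applied to the qemu-kvm invocation from the first message in this thread, the suggestion would look like this (a sketch: the disk image, MAC address and ifup/ifdown scripts are the ones from the original post, and -cpu core2duo only works if the installed qemu-kvm version offers that CPU model):

```shell
# Original launch command with an explicit CPU model added, so the guest
# sees the core2duo feature set (SSE/SSE2/SSSE3) instead of plain qemu64.
/usr/bin/qemu-kvm -boot c -hda /images/vmu01.raw -m 256 \
    -cpu core2duo \
    -net nic,vlan=0,macaddr=aa:bb:cc:dd:ee:10,model=virtio \
    -net tap,ifname=tap0,vlan=0,script=/etc/kvm/qemu-ifup,downscript=/etc/kvm/qemu-ifdown \
    -vnc 127.0.0.1:1 -k de --daemonize
```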
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* AW: AW: AW: AW: KVM performance
2009-04-11 16:19 ` Avi Kivity
@ 2009-04-14 8:26 ` BRAUN, Stefanie
2009-04-14 8:47 ` Avi Kivity
0 siblings, 1 reply; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-14 8:26 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
Hello,
the host runs on a Dual-Core AMD Opteron processor.
Is there a similar parameter for AMD?
Regards, Stefanie
* Re: AW: AW: AW: AW: KVM performance
2009-04-14 8:26 ` AW: " BRAUN, Stefanie
@ 2009-04-14 8:47 ` Avi Kivity
2009-04-16 13:27 ` AW: " BRAUN, Stefanie
0 siblings, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2009-04-14 8:47 UTC (permalink / raw)
To: BRAUN, Stefanie; +Cc: kvm
BRAUN, Stefanie wrote:
> Hello,
> the host runs on a Dual-Core AMD Opteron Processor.
> Does there exist a similar AMD parameter?
>
You can add individual host cpu features by using '-cpu
qemu64,+feature', where feature is taken from the host /proc/cpuinfo.
Do you know which cpu features the program can take advantage of?
Also please try replacing the constant 0x0007040600070406ULL in
kernel/x86/svm.c with 0x0606060606060606ULL and see what happens (don't
forget to reload the modules).
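To illustrate the '-cpu qemu64,+feature' syntax, the argument can be assembled from the host's flag list. This is only a sketch: the hard-coded flag string stands in for the real /proc/cpuinfo line, and the requested features (sse3, mmxext, ssse3) are examples, not known requirements of vlc.

```shell
# Build a "-cpu qemu64,+f1,+f2,..." value from a wish list of features,
# keeping only those the host actually advertises.
host_flags="fpu vme de pse tsc msr mmx mmxext sse sse2 sse3 3dnow"
# On a real host, instead:
#   host_flags=$(awk '/^flags/ {sub(/^flags[ \t]*:[ \t]*/, ""); print; exit}' /proc/cpuinfo)

cpu_arg="qemu64"
for f in sse3 mmxext ssse3; do
    case " $host_flags " in
        *" $f "*) cpu_arg="$cpu_arg,+$f" ;;   # host has it: pass it through
    esac
done
echo "$cpu_arg"    # ssse3 is dropped because the host does not list it
```

The resulting string is then passed as -cpu "$cpu_arg" on the qemu-kvm command line.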
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* AW: AW: AW: AW: AW: KVM performance
2009-04-14 8:47 ` Avi Kivity
@ 2009-04-16 13:27 ` BRAUN, Stefanie
2009-04-16 14:40 ` Avi Kivity
0 siblings, 1 reply; 14+ messages in thread
From: BRAUN, Stefanie @ 2009-04-16 13:27 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
Hello,
I've compiled a new kernel, v2.6.27-rc5, with the modified svm.c,
but the behaviour of the vlc process in the guest is still the same.
I've also exported additional CPU features to the guest with kvm-84, e.g. mmxext,
but there were no performance changes.
I was not able to export the CPU flags 3dnow and 3dnowext to the guest: there was no error, but they are not visible in the guest's /proc/cpuinfo. I'm not sure why.
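A quick way to see which requested flags actually arrived in the guest is to diff the two flag lists. This is only a sketch; the hard-coded lists stand in for the real 'flags' lines from the host's and guest's /proc/cpuinfo.

```shell
# Report the flags the host offers that the guest does not see.
host_flags="mmx mmxext sse sse2 3dnow 3dnowext"
guest_flags="mmx mmxext sse sse2"

missing=""
for f in $host_flags; do
    case " $guest_flags " in
        *" $f "*) ;;                       # visible in the guest
        *) missing="$missing $f" ;;        # offered by host, absent in guest
    esac
done
echo "missing:$missing"                    # prints: missing: 3dnow 3dnowext
```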
Regards, Stefanie
BRAUN, Stefanie wrote:
> qemu-kvm -cpu ? only shows
> qemu64, qemu32, 486, pentium, pentium2, pentium3, athlon
>
It can also take additional +feature or -feature parameters.
Oh, maybe kvm-84 doesn't have this support? Try http://userweb.kernel.org/~avi/kvm-85rc6/.
* Re: AW: AW: AW: AW: AW: KVM performance
2009-04-16 13:27 ` AW: " BRAUN, Stefanie
@ 2009-04-16 14:40 ` Avi Kivity
0 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2009-04-16 14:40 UTC (permalink / raw)
To: BRAUN, Stefanie; +Cc: kvm
BRAUN, Stefanie wrote:
> Hello,
> I've compiled a new kernel v2.6.27-rc5 with the modified svm.c.
> But the behaviour of the vlc process in the guest is still the same.
>
> I've exported additional cpu features to the guest, e.g. mmxext with kvm-84.
> But no performance changes.
>
> I was not able to export the cpu flags 3dnow and 3dnowext to the guest, no error but they are not visible in /proc/cpuinfo. Not sure why.
>
>
Can you test on an Intel host (relative performance, host vs. guest)?
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
end of thread, other threads:[~2009-04-16 14:41 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
2009-04-03 11:32 KVM performance BRAUN, Stefanie
2009-04-06 11:45 ` Avi Kivity
[not found] ` <133D9897FB9C5E4E9DF2779DC91E947C51834A@SLFSNX.rcs.alcatel-research.de>
[not found] ` <49DA2F54.8090109@redhat.com>
2009-04-07 17:00 ` AW: AW: " BRAUN, Stefanie
2009-04-07 17:34 ` Avi Kivity
2009-04-08 11:38 ` AW: " BRAUN, Stefanie
2009-04-09 15:34 ` BRAUN, Stefanie
2009-04-11 16:19 ` Avi Kivity
2009-04-14 8:26 ` AW: " BRAUN, Stefanie
2009-04-14 8:47 ` Avi Kivity
2009-04-16 13:27 ` AW: " BRAUN, Stefanie
2009-04-16 14:40 ` Avi Kivity
2009-04-06 12:13 ` Hauke Hoffmann
2009-04-06 16:30 ` AW: " BRAUN, Stefanie
2009-04-07 12:58 ` BRAUN, Stefanie