* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Serge Hallyn @ 2013-04-16 12:15 UTC
To: qemu-devel
** Also affects: qemu
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Triaged
Bug description:
I have 2 physical hosts running Ubuntu Precise, with qemu-kvm 1.0+noroms-0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from Quantal, built for Precise with pbuilder). I attempted to build qemu 1.3.0 debs from source to test, but libvirt seems to have an issue with them that I haven't been able to track down yet.
I'm seeing a performance degradation after live migration with Precise guests, but not with Lucid guests. The hosts are managed by libvirt (tested both 0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I don't see this problem with Lucid guests running a number of standard kernels (3.2.5 mainline and the backported linux-image-3.2.0-35-generic as well).
I first noticed this problem with Phoronix compilation tests, and then tried lmbench, where even simple system calls show performance degradation.
I've posted to the kvm mailing list, but so far the only suggestion was that it may be related to transparent hugepages not being used after migration; this didn't pan out. Someone else reports a similar problem here:
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example:
/usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm \
  -m 73728 -smp 16,sockets=2,cores=8,threads=1 \
  -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw \
  -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
  -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 \
  -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
The disk backend is LVM on a SAN over an FC connection (the path /var/lib/one/datastores/0/2/disk.0 above is a symlink to it).
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1% of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Paolo Bonzini @ 2013-04-16 17:07 UTC
To: qemu-devel
The results of comment 23 suggest that the issue is not 100% reproducible. Can you please run the benchmark 3-4 times (pre-save/post-restore) and show all 4 results? One benchmark only, e.g. "Simple read", will do.
Also, please try putting a big file on disk (something like "dd if=/dev/zero of=bigfile count=64K bs=64K") and then doing "cat bigfile > /dev/null" after restoring. Please check whether that makes performance more consistent.
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: C Cormier @ 2013-04-16 19:35 UTC
To: qemu-devel
Can you clarify what's not 100% reproducible? The only time it is not reproducible on my system is between different qemu machine types, as I listed. If tests are performed on the same machine type, they are reproducible 100% of the time on the same host and guest VM, as shown in comment #23.
I have re-run what you're requesting for machine type pc-1.0.
---machine type pc-1.0---
-Presave-
Simple read: 0.1273 microseconds
Simple read: 0.1259 microseconds
Simple read: 0.1270 microseconds
Simple read: 0.1268 microseconds
-postrestore-
performing: dd if=/dev/zero of=bigfile count=32K bs=64K
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 15.2912 s, 140 MB/s
performing: cat bigfile > /dev/null
Simple read: 0.2700 microseconds
Simple read: 0.2736 microseconds
Simple read: 0.2713 microseconds
Simple read: 0.2747 microseconds
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Jonathan Jefferson @ 2013-05-01 0:38 UTC
To: qemu-devel
I have a few VMs (Precise) that process high-volume transaction jobs each night. After I performed a live-migrate operation to replace a faulty power supply on a bare-metal server, we encountered sluggish performance on the migrated VMs; in particular, significantly higher CPU usage was recorded, and the same nightly job consumed far more CPU and took more time to finish on identical hardware.
Upon investigation, we noticed that the only change introduced was the "live migrate" operation. After rebooting the guest OS of the VMs, performance returned to normal. I suspect we're hitting the same problem as the one filed here. I will attempt to run lmbench next to see whether I notice behaviour in system call costs similar to that recorded in comments #19-21 and #23.
------
The latest KVM from Ubuntu 12.04 LTS is used: qemu-kvm (1.0+noroms-0ubuntu14.8)
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Jonathan Jefferson @ 2013-05-01 22:46 UTC
To: qemu-devel
I used this handy tool to run preliminary system call benchmarks: http://code.google.com/p/byte-unixbench/
In a nutshell, what I found confirms that live migration does indeed degrade performance on Precise KVM.
I hope the results below help narrow down this critical problem so it can eventually be resolved in the 12.04 LTS version.
Detailed results:
I compiled the benchmarking tool and then ran:
root@sample-vm:~/UnixBench# ./Run syscall
Output:
** before live-migration **
------------------------------------------------------------------------
Benchmark Run: Wed May 01 2013 20:29:54 - 20:32:04
1 CPU in system; running 1 parallel copy of tests
System Call Overhead 4177612.4 lps (10.0 s, 7 samples)
System Benchmarks Partial Index BASELINE RESULT INDEX
System Call Overhead 15000.0 4177612.4 2785.1
========
System Benchmarks Index Score (Partial Only) 2785.1
------------------------------------------------------------------------
** after live-migration **
------------------------------------------------------------------------
Benchmark Run: Wed May 01 2013 20:35:16 - 20:37:26
1 CPU in system; running 1 parallel copy of tests
System Call Overhead 3065118.3 lps (10.0 s, 7 samples)
System Benchmarks Partial Index BASELINE RESULT INDEX
System Call Overhead 15000.0 3065118.3 2043.4
========
System Benchmarks Index Score (Partial Only) 2043.4
------------------------------------------------------------------------
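For anyone wanting to repeat this run, a rough outline might look like the following sketch (an illustration only; the host names and the use of virsh for the migration are assumptions, not taken from this report):

  # inside the guest: build UnixBench and run only the syscall test
  cd ~/UnixBench && make
  ./Run syscall              # note the "System Call Overhead" index

  # on the source host: live-migrate the guest (domain and destination names are illustrative)
  virsh migrate --live sample-vm qemu+ssh://destination-host/system

  # inside the guest again, after migration: re-run and compare the index
  ./Run syscall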
XML domain dump:
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-1.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='HIDEME'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='HIDEME'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
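(A dump like the one above can be produced with libvirt's own tooling; the domain name here is illustrative:)

  virsh dumpxml sample-vm > domain.xml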
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: C Cormier @ 2013-05-08 18:11 UTC
To: qemu-devel
Update:
From our testing, this bug affects KVM hypervisors on Intel processors that have the EPT feature enabled, running kernels 3.0 and greater. A list of Intel CPUs supporting EPT is here: http://ark.intel.com/Products/VirtualizationTechnology
When using a KVM hypervisor host with Linux kernel 3.0 or newer and Intel EPT, this bug shows itself. If the kvm_intel module is instead loaded with the option "ept=N", guest performance is significantly lower than with EPT enabled, but it does stay consistent before and after restoration/migration.
Exceptions:
- On a KVM host with a 2.6.32 or 2.6.39 kernel and EPT enabled, the bug is not triggered.
- On a KVM host whose Intel CPU does not have the EPT feature, the bug is not triggered.
- On a KVM host with a 3.0+ kernel and the EPT kvm_intel module option disabled, the bug is not triggered.
A KVM hypervisor with EPT enabled on a Linux kernel > 3.0 appears to be the key here.
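For reference, a quick way to check and toggle this on a test host might be the following sketch (an illustration, not part of the original comment; reloading kvm_intel requires that no guests are running):

  # does the CPU advertise EPT, and is kvm_intel currently using it?
  grep -wo ept /proc/cpuinfo | sort -u
  cat /sys/module/kvm_intel/parameters/ept

  # reload kvm_intel with EPT disabled (shut down all guests first)
  modprobe -r kvm_intel
  modprobe kvm_intel ept=0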
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Serge Hallyn @ 2013-05-08 18:38 UTC
To: qemu-devel
** Also affects: linux (Ubuntu)
Importance: Undecided
Status: New
* [Qemu-devel] [Bug 1100843] Missing required logs.
From: Brad Figg @ 2013-05-08 19:00 UTC
To: qemu-devel
This bug is missing log files that will aid in diagnosing the problem.
From a terminal window please run:
apport-collect 1100843
and then change the status of the bug to 'Confirmed'.
If, due to the nature of the issue you have encountered, you are unable
to run this command, please add a comment stating that fact and change
the bug status to 'Confirmed'.
This change has been made by an automated script, maintained by the
Ubuntu Kernel Team.
** Changed in: linux (Ubuntu)
Status: New => Incomplete
** Tags added: precise
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Jonathan Jefferson @ 2013-05-09 15:51 UTC
To: qemu-devel
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Paolo Bonzini @ 2013-05-24 17:11 UTC
To: qemu-devel
Can you please check if you have EPT enabled? This could be
https://bugzilla.kernel.org/show_bug.cgi?id=58771
** Bug watch added: Linux Kernel Bug Tracker #58771
http://bugzilla.kernel.org/show_bug.cgi?id=58771
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: Paolo Bonzini @ 2013-05-24 17:18 UTC
To: qemu-devel
Oops, I missed Chris's comment #28. Thanks.
From comment #23, the 1.4 machine type seems to be "fast", while 1.3 is slow. This doesn't make much sense, given the differences between the two machine types:
    enable_compat_apic_id_mode();

    .driver   = "usb-tablet",\
    .property = "usb_version",\
    .value    = stringify(1),\

    .driver   = "virtio-net-pci",\
    .property = "ctrl_mac_addr",\
    .value    = "off", \

    .driver   = "virtio-net-pci", \
    .property = "mq", \
    .value    = "off", \

    .driver   = "e1000",\
    .property = "autonegotiation",\
    .value    = "off",\
This is why I suspected the issue was not 100% reproducible.
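For anyone wanting to compare the two machine types directly on the same guest image, a minimal invocation might look like this (a sketch only; the disk path and memory size are illustrative and not taken from this thread, and both machine types require a QEMU new enough to provide them):

  # boot the same image once per machine type and run the same benchmark in each
  /usr/bin/kvm -enable-kvm -M pc-1.3 -cpu Westmere -m 2048 -drive file=/path/to/test.img,if=virtio
  /usr/bin/kvm -enable-kvm -M pc-1.4 -cpu Westmere -m 2048 -drive file=/path/to/test.img,if=virtio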
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
From: C Cormier @ 2013-05-24 17:40 UTC
To: qemu-devel
@Paolo: yes, when I was doing that testing I was able to consistently reproduce the results in #23, but it was a red herring; as of now I cannot reproduce the results in #23 consistently (I suspect it may have had something to do with the order in which I was executing the tests, but I didn't chase it any further).
Yes, EPT is enabled; I submitted that kernel bug in #30.
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (11 preceding siblings ...)
2013-05-24 17:40 ` C Cormier
@ 2013-07-08 18:28 ` Fletcher Kubota
2013-09-02 9:21 ` Stephen Gran
` (14 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Fletcher Kubota @ 2013-07-08 18:28 UTC (permalink / raw)
To: qemu-devel
My HyperDex cluster nodes' performance dropped significantly after migrating them (virsh migrate --live ...). They are hosted on precise KVM (12.04.2 Precise Pangolin). The first Google search result landed me on this page, so it seems I'm not the only one encountering this problem. I hope this gets resolved soon, as live migration is a major feature for any hypervisor solution in my opinion.
Cheers
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (12 preceding siblings ...)
2013-07-08 18:28 ` Fletcher Kubota
@ 2013-09-02 9:21 ` Stephen Gran
2013-09-08 9:40 ` Stephen Gran
` (13 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Stephen Gran @ 2013-09-02 9:21 UTC (permalink / raw)
To: qemu-devel
We are reliably seeing this post live-migration on an OpenStack
platform.
Setup:
hypervisor ==> Ubuntu 12.04.3 LTS
libvirt ==> 1.0.2-0ubuntu11.13.04.2~cloud0
qemu-kvm ==> 1.0+noroms-0ubuntu14.10
storage ==> NFS exports
Guest VM OS ==> Ubuntu 12.04.1 LTS and CentOS 6.4
We have EPT enabled.
Sample instance:
<domain type="kvm">
<uuid>f3c16d27-2586-44c8-b9d9-84b74b42b5d3</uuid>
<name>instance-00000508</name>
<memory>4194304</memory>
<vcpu>2</vcpu>
<os>
<type>hvm</type>
<boot dev="hd"/>
</os>
<features>
<acpi/>
</features>
<clock offset="utc">
<timer name="pit" tickpolicy="delay"/>
<timer name="rtc" tickpolicy="catchup"/>
</clock>
<cpu mode="host-model" match="exact"/>
<devices>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2" cache="none"/>
<source file="/var/lib/nova/instances/instance-00000508/disk"/>
<target bus="virtio" dev="vda"/>
</disk>
<interface type="bridge">
<mac address="fa:16:3e:5d:0e:6a"/>
<model type="virtio"/>
<source bridge="qbrf43e9d83-56"/>
<filterref filter="nova-instance-instance-00000508-fa163e5d0e6a">
<parameter name="IP" value="10.253.138.156"/>
<parameter name="DHCPSERVER" value="10.253.138.51"/>
</filterref>
</interface>
<serial type="file">
<source path="/var/lib/nova/instances/instance-00000508/console.log"/>
</serial>
<serial type="pty"/>
<input type="tablet" bus="usb"/>
<graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
</devices>
</domain>
We have a test environment and are willing to assist in debugging.
Please let us know what we can do to help.
Cheers,
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (13 preceding siblings ...)
2013-09-02 9:21 ` Stephen Gran
@ 2013-09-08 9:40 ` Stephen Gran
2013-09-25 3:56 ` Chris J Arges
` (12 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Stephen Gran @ 2013-09-08 9:40 UTC (permalink / raw)
To: qemu-devel
This is being looked at in an upstream thread at
http://lists.gnu.org/archive/html/qemu-devel/2013-07/msg01850.html
Cheers,
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (14 preceding siblings ...)
2013-09-08 9:40 ` Stephen Gran
@ 2013-09-25 3:56 ` Chris J Arges
2013-09-26 14:04 ` Chris J Arges
` (11 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-09-25 3:56 UTC (permalink / raw)
To: qemu-devel
** Changed in: qemu-kvm (Ubuntu)
Assignee: (unassigned) => Chris J Arges (arges)
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (15 preceding siblings ...)
2013-09-25 3:56 ` Chris J Arges
@ 2013-09-26 14:04 ` Chris J Arges
2013-09-26 20:33 ` Chris J Arges
` (10 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-09-26 14:04 UTC (permalink / raw)
To: qemu-devel
** Changed in: qemu-kvm (Ubuntu)
Status: Triaged => In Progress
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (16 preceding siblings ...)
2013-09-26 14:04 ` Chris J Arges
@ 2013-09-26 20:33 ` Chris J Arges
2013-10-06 13:57 ` Zhang Haoyu
2013-10-07 13:47 ` Chris J Arges
` (9 subsequent siblings)
27 siblings, 1 reply; 39+ messages in thread
From: Chris J Arges @ 2013-09-26 20:33 UTC (permalink / raw)
To: qemu-devel
From my testing this has been fixed in the saucy version (1.5.0) of qemu. It is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972
However, later in the history this commit was reverted, which broke this again. The other commit that fixes this is:
211ea74022f51164a7729030b28eec90b6c99a08
So 211ea740 needs to be backported to P/Q/R to fix this issue. I have v1 packages of a precise backport here; I've confirmed performance differences between savevm/loadvm cycles:
http://people.canonical.com/~arges/lp1100843/precise/
** No longer affects: linux (Ubuntu)
** Also affects: qemu-kvm (Ubuntu Precise)
Importance: Undecided
Status: New
** Also affects: qemu-kvm (Ubuntu Quantal)
Importance: Undecided
Status: New
** Also affects: qemu-kvm (Ubuntu Raring)
Importance: Undecided
Status: New
** Also affects: qemu-kvm (Ubuntu Saucy)
Importance: High
Assignee: Chris J Arges (arges)
Status: In Progress
** Changed in: qemu-kvm (Ubuntu Precise)
Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: qemu-kvm (Ubuntu Quantal)
Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: qemu-kvm (Ubuntu Raring)
Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: qemu-kvm (Ubuntu Precise)
Importance: Undecided => High
** Changed in: qemu-kvm (Ubuntu Quantal)
Importance: Undecided => High
** Changed in: qemu-kvm (Ubuntu Raring)
Importance: Undecided => High
** Changed in: qemu-kvm (Ubuntu Saucy)
Assignee: Chris J Arges (arges) => (unassigned)
** Changed in: qemu-kvm (Ubuntu Saucy)
Status: In Progress => Fix Released
** Changed in: qemu-kvm (Ubuntu Raring)
Status: New => Triaged
** Changed in: qemu-kvm (Ubuntu Quantal)
Status: New => Triaged
** Changed in: qemu-kvm (Ubuntu Precise)
Status: New => In Progress
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
2013-09-26 20:33 ` Chris J Arges
@ 2013-10-06 13:57 ` Zhang Haoyu
2013-10-07 6:38 ` [Qemu-devel] " Peter Lieven
0 siblings, 1 reply; 39+ messages in thread
From: Zhang Haoyu @ 2013-10-06 13:57 UTC (permalink / raw)
To: Bug 1100843, gleb, eblake, quintela, pbonzini, afaerber,
xiaoguangrong, mtosatti, kvm, mst
Cc: qemu-devel
> From my testing this has been fixed in the saucy version (1.5.0) of
> qemu. It is fixed by this patch:
> f1c72795af573b24a7da5eb52375c9aba8a37972
>
> However later in the history this commit was reverted, and again broke
> this. The other commit that fixes this is:
> 211ea74022f51164a7729030b28eec90b6c99a08
>
See the post below, please.
https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
Thanks,
Zhang Haoyu
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Bug 1100843] Re: Live Migration Causes Performance Issues
2013-10-06 13:57 ` Zhang Haoyu
@ 2013-10-07 6:38 ` Peter Lieven
0 siblings, 0 replies; 39+ messages in thread
From: Peter Lieven @ 2013-10-07 6:38 UTC (permalink / raw)
To: Zhang Haoyu, Bug 1100843, gleb, eblake, quintela, pbonzini,
afaerber, xiaoguangrong, mtosatti, kvm, mst
Cc: qemu-devel
On 06.10.2013 15:57, Zhang Haoyu wrote:
>> >From my testing this has been fixed in the saucy version (1.5.0) of
> qemu. It is fixed by this patch:
>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>
>> However later in the history this commit was reverted, and again broke
> this. The other commit that fixes this is:
>> 211ea74022f51164a7729030b28eec90b6c99a08
>>
> See below post,please.
> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
I would still like to fix qemu to not load ROMs etc. if we are setting up a migration target. In that case
we could drop the madvise, skip the checking for zero pages, and also avoid sending
zero pages at all. It would be the cleanest solution.
Peter
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
@ 2013-10-07 6:38 ` Peter Lieven
0 siblings, 0 replies; 39+ messages in thread
From: Peter Lieven @ 2013-10-07 6:38 UTC (permalink / raw)
To: Zhang Haoyu, Bug 1100843, gleb, eblake, quintela, pbonzini,
afaerber, xiaoguangrong, mtosatti, kvm, mst
Cc: qemu-devel
On 06.10.2013 15:57, Zhang Haoyu wrote:
>> >From my testing this has been fixed in the saucy version (1.5.0) of
> qemu. It is fixed by this patch:
>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>
>> However later in the history this commit was reverted, and again broke
> this. The other commit that fixes this is:
>> 211ea74022f51164a7729030b28eec90b6c99a08
>>
> See below post,please.
> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
I would still like to fix qemu to not load roms etc. if we set up a migration target. In this case
we could drop the madvise, skip the checking for zero pages and also could avoid sending
zero pages at all. It would be the cleanest solution.
Peter
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
2013-10-07 6:38 ` [Qemu-devel] " Peter Lieven
@ 2013-10-07 9:37 ` Paolo Bonzini
-1 siblings, 0 replies; 39+ messages in thread
From: Paolo Bonzini @ 2013-10-07 9:37 UTC (permalink / raw)
To: Peter Lieven
Cc: Zhang Haoyu, Bug 1100843, gleb, eblake, quintela, afaerber,
xiaoguangrong, mtosatti, kvm, mst, qemu-devel
Il 07/10/2013 08:38, Peter Lieven ha scritto:
> On 06.10.2013 15:57, Zhang Haoyu wrote:
>>> >From my testing this has been fixed in the saucy version (1.5.0) of
>> qemu. It is fixed by this patch:
>>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>>
>>> However later in the history this commit was reverted, and again broke
>> this. The other commit that fixes this is:
>>> 211ea74022f51164a7729030b28eec90b6c99a08
>>>
>> See below post,please.
>> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
>
> I would still like to fix qemu to not load roms etc. if we set up a
> migration target. In this case
> we could drop the madvise, skip the checking for zero pages and also
> could avoid sending
> zero pages at all. It would be the cleanest solution.
It's in general not easy to do this if you take non-x86 targets into
account.
Paolo
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
2013-10-07 9:37 ` Paolo Bonzini
@ 2013-10-07 9:49 ` Peter Lieven
-1 siblings, 0 replies; 39+ messages in thread
From: Peter Lieven @ 2013-10-07 9:49 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Zhang Haoyu, Bug 1100843, gleb, eblake, quintela, afaerber,
xiaoguangrong, mtosatti, kvm, mst, qemu-devel
On 07.10.2013 11:37, Paolo Bonzini wrote:
> Il 07/10/2013 08:38, Peter Lieven ha scritto:
>> On 06.10.2013 15:57, Zhang Haoyu wrote:
>>>> >From my testing this has been fixed in the saucy version (1.5.0) of
>>> qemu. It is fixed by this patch:
>>>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>>>
>>>> However later in the history this commit was reverted, and again broke
>>> this. The other commit that fixes this is:
>>>> 211ea74022f51164a7729030b28eec90b6c99a08
>>>>
>>> See below post,please.
>>> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
>> I would still like to fix qemu to not load roms etc. if we set up a
>> migration target. In this case
>> we could drop the madvise, skip the checking for zero pages and also
>> could avoid sending
>> zero pages at all. It would be the cleanest solution.
> It's in general not easy to do this if you take non-x86 targets into
> account.
What about the dirty way to zero out all non-zero pages at the beginning of
ram_load?
Peter
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
@ 2013-10-07 9:49 ` Peter Lieven
0 siblings, 0 replies; 39+ messages in thread
From: Peter Lieven @ 2013-10-07 9:49 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm, gleb, quintela, mtosatti, qemu-devel, Zhang Haoyu,
xiaoguangrong, Bug 1100843, mst, afaerber
On 07.10.2013 11:37, Paolo Bonzini wrote:
> Il 07/10/2013 08:38, Peter Lieven ha scritto:
>> On 06.10.2013 15:57, Zhang Haoyu wrote:
>>>> >From my testing this has been fixed in the saucy version (1.5.0) of
>>> qemu. It is fixed by this patch:
>>>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>>>
>>>> However later in the history this commit was reverted, and again broke
>>> this. The other commit that fixes this is:
>>>> 211ea74022f51164a7729030b28eec90b6c99a08
>>>>
>>> See below post,please.
>>> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
>> I would still like to fix qemu to not load roms etc. if we set up a
>> migration target. In this case
>> we could drop the madvise, skip the checking for zero pages and also
>> could avoid sending
>> zero pages at all. It would be the cleanest solution.
> It's in general not easy to do this if you take non-x86 targets into
> account.
What about the dirty way to zero out all non zero pages at the beginning of
ram_load?
Peter
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
2013-10-07 9:49 ` Peter Lieven
@ 2013-10-07 9:55 ` Paolo Bonzini
-1 siblings, 0 replies; 39+ messages in thread
From: Paolo Bonzini @ 2013-10-07 9:55 UTC (permalink / raw)
To: Peter Lieven
Cc: Zhang Haoyu, Bug 1100843, gleb, eblake, quintela, afaerber,
xiaoguangrong, mtosatti, kvm, mst, qemu-devel
Il 07/10/2013 11:49, Peter Lieven ha scritto:
>> It's in general not easy to do this if you take non-x86 targets into
>> account.
> What about the dirty way to zero out all non zero pages at the beginning of
> ram_load?
I'm not sure I follow?
Paolo
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
@ 2013-10-07 9:55 ` Paolo Bonzini
0 siblings, 0 replies; 39+ messages in thread
From: Paolo Bonzini @ 2013-10-07 9:55 UTC (permalink / raw)
To: Peter Lieven
Cc: kvm, gleb, quintela, mtosatti, qemu-devel, Zhang Haoyu,
xiaoguangrong, Bug 1100843, mst, afaerber
Il 07/10/2013 11:49, Peter Lieven ha scritto:
>> It's in general not easy to do this if you take non-x86 targets into
>> account.
> What about the dirty way to zero out all non zero pages at the beginning of
> ram_load?
I'm not sure I follow?
Paolo
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Bug 1100843] Re: Live Migration Causes Performance Issues
2013-10-07 9:55 ` Paolo Bonzini
@ 2013-10-10 8:17 ` Peter Lieven
-1 siblings, 0 replies; 39+ messages in thread
From: Peter Lieven @ 2013-10-10 8:17 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm, gleb, quintela, mtosatti, qemu-devel, Zhang Haoyu,
xiaoguangrong, Bug 1100843, mst, afaerber
On 07.10.2013 11:55, Paolo Bonzini wrote:
> Il 07/10/2013 11:49, Peter Lieven ha scritto:
>>> It's in general not easy to do this if you take non-x86 targets into
>>> account.
>> What about the dirty way to zero out all non zero pages at the beginning of
>> ram_load?
> I'm not sure I follow?
Something like this for each RAM block at the beginning of ram_load:
+ base = memory_region_get_ram_ptr(block->mr);
+ for (offset = 0; offset < block->length;
+ offset += TARGET_PAGE_SIZE) {
+ if (!is_zero_page(base + offset)) {
+ memset(base + offset, 0x00, TARGET_PAGE_SIZE);
+ }
+ }
+
Then add a capability "skip_zero_pages" which does not send them on the source
and enables this zeroing. It would also be possible to skip the zero check
for each incoming compressed page.
Peter
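As a rough, self-contained rendering of the same idea in plain C (not a QEMU patch; the 4 KiB page size, the memcmp-based is_zero_page() and the name zero_nonzero_pages() are assumptions made purely for illustration):
/* Illustrative sketch only: before any RAM pages arrive on the
 * destination, force every page that is not already zero back to zero.
 * A source that can rely on the destination starting out all-zero may
 * then skip sending zero pages entirely. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#define PAGE_SIZE 4096 /* assumed target page size */
static int is_zero_page(const uint8_t *page)
{
    static const uint8_t zero[PAGE_SIZE]; /* all zeros */
    return memcmp(page, zero, PAGE_SIZE) == 0;
}
static void zero_nonzero_pages(uint8_t *base, size_t length)
{
    size_t offset;
    for (offset = 0; offset < length; offset += PAGE_SIZE) {
        if (!is_zero_page(base + offset)) {
            /* Only pages that already contain data (for example ROMs
             * loaded at machine init) are written; untouched pages are
             * left alone so they stay unallocated or huge-page backed. */
            memset(base + offset, 0x00, PAGE_SIZE);
        }
    }
}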
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
@ 2013-10-10 8:17 ` Peter Lieven
0 siblings, 0 replies; 39+ messages in thread
From: Peter Lieven @ 2013-10-10 8:17 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm, gleb, quintela, mtosatti, qemu-devel, Zhang Haoyu,
xiaoguangrong, Bug 1100843, mst, afaerber
On 07.10.2013 11:55, Paolo Bonzini wrote:
> Il 07/10/2013 11:49, Peter Lieven ha scritto:
>>> It's in general not easy to do this if you take non-x86 targets into
>>> account.
>> What about the dirty way to zero out all non zero pages at the beginning of
>> ram_load?
> I'm not sure I follow?
sth like this for each ram block at the beginning of ram_load.
+ base = memory_region_get_ram_ptr(block->mr);
+ for (offset = 0; offset < block->length;
+ offset += TARGET_PAGE_SIZE) {
+ if (!is_zero_page(base + offset)) {
+ memset(base + offset, 0x00, TARGET_PAGE_SIZE);
+ }
+ }
+
Then add a capability "skip_zero_pages" which does not sent them on the source
and enables this zeroing. it would also be possible to skip the zero check
for each incoming compressed pages.
Peter
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (17 preceding siblings ...)
2013-09-26 20:33 ` Chris J Arges
@ 2013-10-07 13:47 ` Chris J Arges
2013-10-07 13:59 ` Chris J Arges
` (8 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-10-07 13:47 UTC (permalink / raw)
To: qemu-devel
I found that two patches need to be backported to solve this issue:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
I've added the necessary bits into precise and tried a few tests:
1) Measure performance before and after savevm/loadvm.
2) Measure performance before and after a migrate to the same host.
In both cases the performance measured by something like lmbench was the same as the previous run.
A test build is available here:
http://people.canonical.com/~arges/lp1100843/precise_v2/
** Patch added: "fix-lp1100843-precise.debdiff"
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+attachment/3864309/+files/fix-lp1100843-precise.debdiff
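To make the effect of the zero-page fix concrete, here is a minimal sketch of the "do not overwrite zero pages" idea in plain C; the page size, the helper page_is_zero() and the function handle_fill_page() are assumed names for illustration only and are not the literal upstream code:
/* Sketch: when the incoming migration stream says "this page is filled
 * with byte ch", write the page only if something would actually change.
 * All-zero destination pages are never touched, so the kernel does not
 * have to allocate them or split transparent huge pages for them. */
#include <stdint.h>
#include <string.h>
#define PAGE_SIZE 4096 /* assumed target page size */
static int page_is_zero(const uint8_t *page)
{
    static const uint8_t zero[PAGE_SIZE];
    return memcmp(page, zero, PAGE_SIZE) == 0;
}
static void handle_fill_page(uint8_t *host, uint8_t ch)
{
    if (ch != 0 || !page_is_zero(host)) {
        memset(host, ch, PAGE_SIZE);
    }
    /* ch == 0 and the page is already zero: leave it alone. */
}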
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (18 preceding siblings ...)
2013-10-07 13:47 ` Chris J Arges
@ 2013-10-07 13:59 ` Chris J Arges
2013-10-07 14:18 ` Chris J Arges
` (7 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-10-07 13:59 UTC (permalink / raw)
To: qemu-devel
** Description changed:
+ SRU Justification
+ [Impact]
+ * Users of QEMU that save their memory states using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around these issues VMs must be completely rebooted. Optimally we should be able to restore a VM's memory state and expect no performance issues.
+
+ [Test Case]
+
+ * savevm/loadvm:
+ - Create a VM and install a test suite such as lmbench.
+ - Get numbers right after boot and record them.
+ - Open up the qemu monitor and type the following:
+ stop
+ savevm 0
+ loadvm 0
+ c
+ - Measure performance and record numbers.
+ - Compare if numbers are within margin of error.
+ * migrate:
+ - Create VM, install lmbench, get numbers.
+ - Open up qemu monitor and type the following:
+ stop
+ migrate "exec:dd of=~/save.vm"
+ quit
+ - Start a new VM using qemu but add the following argument:
+ -incoming "exec:dd if=~/save.vm"
+ - Run performance test and compare.
+
+ If performance measured is similar then we pass the test case.
+
+ [Regression Potential]
+
+ * The fix is a backport of two upstream patches:
+ ad0b5321f1f797274603ebbe20108b0750baee94
+ 211ea74022f51164a7729030b28eec90b6c99a08
+
+ One patch allows QEMU to use THP if it is enabled.
+ The other patch changes the logic to not memset pages to zero when loading memory for the VM (on an incoming migration).
+
+ --
+
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
- I'm seeing a performance degradation after live migration on Precise,
+ I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests, and
then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being used
after migration, but this didn't pan out. Someone else has a similar
problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid
f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink from
/var/lib/one/datastores/0/2/disk.0 above)
-
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
-
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
-
- I don't have phoronix results for 10.04 handy, but they were within 1% of each other...
+ I don't have phoronix results for 10.04 handy, but they were within 1%
+ of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
-
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
In Progress
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
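A minimal sketch of the final comparison step is shown below. It is a hypothetical helper (not part of the bug report or the SRU), and the 10% tolerance is an assumed margin of error, not a value taken from this report.

  /* compare_lmbench.c - hypothetical helper for the "within margin of error"
   * step above; not part of the SRU.  The 10% tolerance is an assumption. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s <before_us> <after_us>\n", argv[0]);
          return 2;
      }
      double before = atof(argv[1]);
      double after  = atof(argv[2]);
      double margin = 0.10;    /* assumed 10% margin of error */

      if (after <= before * (1.0 + margin)) {
          printf("PASS: %.4f -> %.4f microseconds\n", before, after);
          return 0;
      }
      printf("FAIL: %.4f -> %.4f microseconds (+%.1f%%)\n",
             before, after, (after / before - 1.0) * 100.0);
      return 1;
  }

For example, the "Simple read" figures reported further down (0.1143 vs 0.2485 microseconds) would fail this check, while the 10.04 pre/post numbers would pass.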
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
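As background on the first of these patches, the sketch below illustrates the underlying mechanism: asking the kernel to back an anonymous, guest-RAM-sized mapping with transparent hugepages via madvise(MADV_HUGEPAGE). This is a simplified illustration of the system call usage, not the backported QEMU code, and the 1 GiB size is only a stand-in.

  /* thp_hint.c - simplified illustration only; not the QEMU patch itself. */
  #define _GNU_SOURCE            /* for MAP_ANONYMOUS / MADV_HUGEPAGE visibility */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  #define GUEST_RAM_SIZE (1UL << 30)   /* 1 GiB stand-in for the guest's RAM */

  int main(void)
  {
      void *ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (ram == MAP_FAILED) {
          perror("mmap");
          return 1;
      }
  #ifdef MADV_HUGEPAGE
      /* Hint that this region should be backed by transparent hugepages;
       * with CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y this is what enables THP. */
      if (madvise(ram, GUEST_RAM_SIZE, MADV_HUGEPAGE) != 0) {
          perror("madvise(MADV_HUGEPAGE)");   /* only a hint, not fatal */
      }
  #endif
      memset(ram, 0, 4096);   /* touch one page so something is actually mapped */
      puts("guest RAM allocated and advised for THP");
      munmap(ram, GUEST_RAM_SIZE);
      return 0;
  }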
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests,
and then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being
used after migration, but this didn't pan out. Someone else has a
similar problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
-uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink
from /var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (19 preceding siblings ...)
2013-10-07 13:59 ` Chris J Arges
@ 2013-10-07 14:18 ` Chris J Arges
2013-10-07 20:33 ` Chris J Arges
` (6 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-10-07 14:18 UTC (permalink / raw)
To: qemu-devel
** Description changed:
SRU Justification
- [Impact]
- * Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
+ [Impact]
+ * Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
- * savevm/loadvm:
- - Create a VM and install a test suite such as lmbench.
- - Get numbers right after boot and record them.
- - Open up the qemu monitor and type the following:
- stop
- savevm 0
- loadvm 0
- c
- - Measure performance and record numbers.
- - Compare if numbers are within margin of error.
- * migrate:
- - Create VM, install lmbench, get numbers.
- - Open up qemu monitor and type the following:
- stop
- migrate "exec:dd of=~/save.vm"
- quit
- - Start a new VM using qemu but add the following argument:
- -incoming "exec:dd if=~/save.vm"
- - Run performance test and compare.
-
- If performance measured is similar then we pass the test case.
+ * savevm/loadvm:
+ - Create a VM and install a test suite such as lmbench.
+ - Get numbers right after boot and record them.
+ - Open up the qemu monitor and type the following:
+ stop
+ savevm 0
+ loadvm 0
+ c
+ - Measure performance and record numbers.
+ - Compare if numbers are within margin of error.
+ * migrate:
+ - Create VM, install lmbench, get numbers.
+ - Open up qemu monitor and type the following:
+ stop
+ migrate "exec:dd of=~/save.vm"
+ quit
+ - Start a new VM using qemu but add the following argument:
+ -incoming "exec:dd if=~/save.vm"
+ - Run performance test and compare.
+
+ If performance measured is similar then we pass the test case.
[Regression Potential]
- * The fix is a backport of two upstream patches:
+ * The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
+ * I've also run the qa-regression-testing test-qemu.py script and it passes all tests.
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests, and
then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being used
after migration, but this didn't pan out. Someone else has a similar
problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid
f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink from
/var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
In Progress
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
* I've also run the qa-regression-testing test-qemu.py script and it passes all tests.
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests,
and then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being
used after migration, but this didn't pan out. Someone else has a
similar problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
-uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink
from /var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (20 preceding siblings ...)
2013-10-07 14:18 ` Chris J Arges
@ 2013-10-07 20:33 ` Chris J Arges
2013-10-10 22:25 ` Brian Murray
` (5 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-10-07 20:33 UTC (permalink / raw)
To: qemu-devel
** Description changed:
SRU Justification
[Impact]
* Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
- On patch allows QEMU to use THP if it is enabled.
+ One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
- * I've also run the qa-regression-testing test-qemu.py script and it passes all tests.
+ * I've also run the qa-regression-testing test-qemu.py script and it
+ passes all tests.
+
+ [Additional Information]
+
+ Kernels from 3.2 onwards are affected, and all have the config:
+ CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is
+ applicable.
+
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests, and
then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being used
after migration, but this didn't pan out. Someone else has a similar
problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid
f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink from
/var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
In Progress
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU that save their memory states using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To workaround these issues VMs must be completely rebooted. Optimally we should be able to restore a VM's memory state an expect no performance issue.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
* I've also run the qa-regression-testing test-qemu.py script and it
passes all tests.
[Additional Information]
Kernels from 3.2 onwards are affected, and all have the config:
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is
applicable.
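One way to confirm this on a particular host is to read the standard THP control file; the sketch below assumes a Linux host with sysfs mounted and simply prints the current mode (the bracketed entry is the active one, e.g. "always [madvise] never").

  /* thp_mode.c - print the host's transparent hugepage mode. */
  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/enabled", "r");
      if (!f) {
          perror("transparent_hugepage/enabled");
          return 1;
      }
      char line[256];
      if (fgets(line, sizeof(line), f)) {
          printf("THP mode: %s", line);   /* active setting shown in brackets */
      }
      fclose(f);
      return 0;
  }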
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests,
and then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being
used after migration, but this didn't pan out. Someone else has a
similar problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
-uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink
from /var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (21 preceding siblings ...)
2013-10-07 20:33 ` Chris J Arges
@ 2013-10-10 22:25 ` Brian Murray
2013-10-11 13:16 ` Chris J Arges
` (4 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Brian Murray @ 2013-10-10 22:25 UTC (permalink / raw)
To: qemu-devel
Hello Mark, or anyone else affected,
Accepted qemu-kvm into precise-proposed. The package will build now and
be available at http://launchpad.net/ubuntu/+source/qemu-kvm/1.0+noroms-
0ubuntu14.12 in a few hours, and then in the -proposed repository.
Please help us by testing this new package. See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to
enable and use -proposed. Your feedback will aid us in getting this update
out to other Ubuntu users.
If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-needed to verification-done. If it does not fix the
bug for you, please add a comment stating that, and change the tag to
verification-failed. In either case, details of your testing will help
us make a better decision.
Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!
** Changed in: qemu-kvm (Ubuntu Precise)
Status: In Progress => Fix Committed
** Tags added: verification-needed
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
Fix Committed
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
* I've also run the qa-regression-testing test-qemu.py script and it
passes all tests.
[Additional Information]
Kernels from 3.2 onwards are affected, and all have the config:
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is
applicable.
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests,
and then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being
used after migration, but this didn't pan out. Someone else has a
similar problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
-uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink
from /var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (22 preceding siblings ...)
2013-10-10 22:25 ` Brian Murray
@ 2013-10-11 13:16 ` Chris J Arges
2013-10-24 17:54 ` Launchpad Bug Tracker
` (3 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-10-11 13:16 UTC (permalink / raw)
To: qemu-devel
I have verified this on my local machine using virt-manager's save memory
feature, savevm/loadvm via the qemu monitor, and migrate via the qemu
monitor.
** Tags removed: verification-needed
** Tags added: verification-done
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
Fix Committed
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
* I've also run the qa-regression-testing test-qemu.py script and it
passes all tests.
[Additional Information]
Kernels from 3.2 onwards are affected, and all have the config:
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is
applicable.
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests,
and then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being
used after migration, but this didn't pan out. Someone else has a
similar problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
-uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink
from /var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (23 preceding siblings ...)
2013-10-11 13:16 ` Chris J Arges
@ 2013-10-24 17:54 ` Launchpad Bug Tracker
2013-10-24 17:54 ` [Qemu-devel] [Bug 1100843] Update Released Brian Murray
` (2 subsequent siblings)
27 siblings, 0 replies; 39+ messages in thread
From: Launchpad Bug Tracker @ 2013-10-24 17:54 UTC (permalink / raw)
To: qemu-devel
This bug was fixed in the package qemu-kvm - 1.0+noroms-0ubuntu14.12
---------------
qemu-kvm (1.0+noroms-0ubuntu14.12) precise-proposed; urgency=low
* migration-do-not-overwrite-zero-pages.patch,
call-madv-hugepage-for-guest-ram-allocations.patch:
Fix performance degradation after migrations, and savevm/loadvm.
(LP: #1100843)
-- Chris J Arges <chris.j.arges@ubuntu.com> Wed, 02 Oct 2013 16:26:27 -0500
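For context on migration-do-not-overwrite-zero-pages.patch, the sketch below shows the general idea: when a page arriving in the migration stream is flagged as all zeroes and the destination page is already zero (as freshly mapped anonymous memory is), the write is skipped, so pages are not needlessly dirtied and transparent hugepages are not split. It is a simplified illustration with hypothetical function names, not the actual patch.

  /* zero_page_skip.c - simplified illustration of skipping writes for
   * zero pages on the incoming side of a migration; not the actual patch. */
  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static bool page_is_zero(const unsigned char *page, size_t size)
  {
      for (size_t i = 0; i < size; i++) {
          if (page[i] != 0) {
              return false;
          }
      }
      return true;
  }

  /* Called for each page the migration stream marks as "all zeroes". */
  static void load_zero_page(unsigned char *host_page, size_t size)
  {
      /* Writing zeroes into an already-zero page would fault in private
       * pages and break up hugepages for no benefit, so skip it. */
      if (!page_is_zero(host_page, size)) {
          memset(host_page, 0, size);
      }
  }

  int main(void)
  {
      size_t page_size = 4096;
      unsigned char *page = calloc(1, page_size);   /* already zero */
      load_zero_page(page, page_size);              /* no write happens */
      printf("page still zero: %s\n",
             page_is_zero(page, page_size) ? "yes" : "no");
      free(page);
      return 0;
  }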
** Changed in: qemu-kvm (Ubuntu Precise)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
Fix Released
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
* I've also run the qa-regression-testing test-qemu.py script and it
passes all tests.
[Additional Information]
Kernels from 3.2 onwards are affected, and all have the config:
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is
applicable.
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests,
and then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being
used after migration, but this didn't pan out. Someone else has a
similar problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
-uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink
from /var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Update Released
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (24 preceding siblings ...)
2013-10-24 17:54 ` Launchpad Bug Tracker
@ 2013-10-24 17:54 ` Brian Murray
2013-11-08 22:21 ` [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues Chris J Arges
2013-11-27 13:00 ` Paolo Bonzini
27 siblings, 0 replies; 39+ messages in thread
From: Brian Murray @ 2013-10-24 17:54 UTC (permalink / raw)
To: qemu-devel
The verification of this Stable Release Update has completed
successfully and the package has now been released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report. In
the event that you encounter a regression using the package from
-updates, please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
Fix Released
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU that save their memory state using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Ideally we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Compare if numbers are within margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If performance measured is similar then we pass the test case.
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are not memset to zero when loading memory for the VM (on an incoming migration).
* I've also run the qa-regression-testing test-qemu.py script and it
passes all tests.
[Additional Information]
Kernels from 3.2 onwards are affected, and all have the config:
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is
applicable.
--
I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with lucid guests (running a number of
standard kernels, 3.2.5 mainline and backported linux-
image-3.2.0-35-generic as well.)
I first noticed this problem with phoronix doing compilation tests,
and then tried lmbench where even simple calls experience performance
degradation.
I've attempted to post to the kvm mailing list, but so far the only
suggestion was it may be related to transparent hugepages not being
used after migration, but this didn't pan out. Someone else has a
similar problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
-uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
disk0,format=raw,cache=none -device virtio-blk-
pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
disk0,bootindex=1 -drive
file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
-vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Disk backend is LVM running on SAN via FC connection (using symlink
from /var/lib/one/datastores/0/2/disk.0 above)
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1%
of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (25 preceding siblings ...)
2013-10-24 17:54 ` [Qemu-devel] [Bug 1100843] Update Released Brian Murray
@ 2013-11-08 22:21 ` Chris J Arges
2013-11-27 13:00 ` Paolo Bonzini
27 siblings, 0 replies; 39+ messages in thread
From: Chris J Arges @ 2013-11-08 22:21 UTC (permalink / raw)
To: qemu-devel
** Changed in: qemu-kvm (Ubuntu Quantal)
Assignee: Chris J Arges (arges) => (unassigned)
** Changed in: qemu-kvm (Ubuntu Raring)
Assignee: Chris J Arges (arges) => (unassigned)
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
Fix Released
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
SRU Justification
[Impact]
* Users of QEMU who save a VM's memory state using savevm/loadvm, or who migrate a VM, experience worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Optimally, we should be able to restore a VM's memory state and expect no performance issues.
[Test Case]
* savevm/loadvm:
- Create a VM and install a test suite such as lmbench.
- Get numbers right after boot and record them.
- Open up the qemu monitor and type the following:
stop
savevm 0
loadvm 0
c
- Measure performance and record numbers.
- Check that the numbers are within the margin of error.
* migrate:
- Create VM, install lmbench, get numbers.
- Open up qemu monitor and type the following:
stop
migrate "exec:dd of=~/save.vm"
quit
- Start a new VM using qemu but add the following argument:
-incoming "exec:dd if=~/save.vm"
- Run performance test and compare.
If the measured performance is similar, the test case passes (see the scripted sketch below).
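A minimal scripted sketch of the migrate test case; the lmbench install path and the use of lat_syscall are assumptions about the local setup, everything else mirrors the steps above:
    # Inside the guest, before migration: record the baseline syscall latency.
    # lat_syscall ships with lmbench; the install path may differ per distro.
    /usr/lib/lmbench/bin/lat_syscall null
    # On the host, in the QEMU (HMP) monitor of the running guest:
    stop
    migrate "exec:dd of=~/save.vm"
    quit
    # Start a replacement guest with the original command line plus -incoming:
    /usr/bin/kvm ...same arguments as before... -incoming "exec:dd if=~/save.vm"
    # Inside the restored guest, repeat the measurement and compare:
    /usr/lib/lmbench/bin/lat_syscall null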
[Regression Potential]
* The fix is a backport of two upstream patches:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
One patch allows QEMU to use THP if it is enabled.
The other patch changes the logic so that pages are no longer memset to zero when loading memory for the VM on an incoming migration.
* I've also run the qa-regression-testing test-qemu.py script and it
passes all tests.
[Additional Information]
Kernels from 3.2 onwards are affected, and all have
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y set, so enabling THP via madvise is
applicable.
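One hedged way to see whether THP backing is actually lost across a migration is to compare the hugepage mappings of the QEMU process before and after; this sketch assumes the guest is named one-2 as in the command line above:
    # Host THP policy; 'always' or 'madvise' must be enabled for THP to be used.
    cat /sys/kernel/mm/transparent_hugepage/enabled
    # Sum the anonymous huge pages mapped by the QEMU process; a value near
    # zero after migration means the guest RAM is no longer THP-backed.
    pid=$(pgrep -f 'kvm -name one-2')
    grep AnonHugePages /proc/$pid/smaps | awk '{ sum += $2 } END { print sum " kB" }'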
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
^ permalink raw reply [flat|nested] 39+ messages in thread
* [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
[not found] <20130117163740.7157.55600.malonedeb@gac.canonical.com>
` (26 preceding siblings ...)
2013-11-08 22:21 ` [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues Chris J Arges
@ 2013-11-27 13:00 ` Paolo Bonzini
27 siblings, 0 replies; 39+ messages in thread
From: Paolo Bonzini @ 2013-11-27 13:00 UTC (permalink / raw)
To: qemu-devel
Fix will be part of QEMU 1.7.0 (commit fc1c4a5, migration: drop
MADVISE_DONT_NEED for incoming zero pages, 2013-10-24).
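To check whether a particular QEMU source tree already carries this commit, one possibility (assuming a git checkout of qemu.git) is:
    # Prints the first tag whose history contains the fix (v1.7.0 or later),
    # and errors out if the commit is not reachable from any tag.
    git describe --contains fc1c4a5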
** Changed in: qemu
Status: New => Fix Committed
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
Fix Committed
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
Fix Released
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
^ permalink raw reply [flat|nested] 39+ messages in thread