kvm.vger.kernel.org archive mirror
* [ kvm-Bugs-2494730 ] Guests "stalling" on kvm-82
@ 2009-01-12  0:47 SourceForge.net
  0 siblings, 0 replies; 7+ messages in thread
From: SourceForge.net @ 2009-01-12  0:47 UTC (permalink / raw)
  To: noreply

Bugs item #2494730, was opened at 2009-01-09 09:59
Message generated for change (Comment added) made by kmshanah
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Kevin Shanahan (kmshanah)
Assigned to: Nobody/Anonymous (nobody)
Summary: Guests "stalling" on kvm-82

Initial Comment:
I am seeing periodic stalls in Linux and Windows guests with kvm-82 on an IBM X3550 server with 2 x Xeon 5130 CPUs and 32GB RAM.

I am *reasonably* certain that this is a regression somewhere between kvm-72 and kvm-82. We had been running kvm-72 (actually, the Debian kvm-source package) up until now and never noticed the problem. Now the stalls are very obvious. When the guest stalls, at least one kvm process on the host gobbles up 100% CPU. I'll do my debugging with the Linux guest, as that's sure to be easier to deal with.

As a simple demonstration that the guest is unresponsive, here is the result of pinging the guest from another machine on the (very quiet) LAN:

--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599659ms
rtt min/avg/max/mdev = 0.255/181.211/6291.871/558.706 ms, pipe 7

The latency varies pretty badly, with spikes up to several seconds as you can see.

The problem is not reproducible on other VT-capable hardware that I have - e.g. my desktop has an E8400 CPU which runs the VMs just fine. Does knowing that make it any easier to guess where the problem might be?

The Xeon 5130 does not have the "smx", "est", "sse4_1", "xsave", "vnmi" and "flexpriority" CPU flags that the E8400 does.

Because this server is the only hardware I have which exhibits the problem and it's a production machine, I have limited time in which I can do testing. However, I will try to confirm that kvm-72 is okay and then bisect.

Currently the host is running a 2.6.28 kernel with the kvm-82 modules. I guess I'm likely to have problems compiling the older kvm releases against this kernel, so I'll have to drop back to 2.6.27.something to run the tests.

CPU Vendor: Intel
CPU Type: Xeon 5130
Number of CPUs: 2
Host distribution: Debian Lenny/Sid
KVM version: kvm-82
Host kernel: Linux 2.6.28 x86_64
Guest Distribution: Debian Etch
Guest kernel: Linux 2.6.27.10 i686

Host's /proc/cpuinfo:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips        : 3990.23
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips        : 3989.96
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 3
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 6
initial apicid  : 6
fpu             : yes
fpu_exception   : yes
cpuid level	: 10
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips	: 3990.01
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 15
model name	: Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping	: 6
cpu MHz		: 1995.117
cache size	: 4096 KB
physical id	: 3
siblings	: 2
core id		: 1
cpu cores	: 2
apicid		: 7
initial apicid	: 7
fpu		: yes
fpu_exception	: yes
cpuid level	: 10
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips	: 3990.01
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:


----------------------------------------------------------------------

>Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-12 11:17

Message:
Another update - I've done some more testing with different host kernel
versions and kvm versions. It may be that this bug is not dependent on the
kvm version, but on the host kernel. It seems that I get the guest stalls when
2.6.28 is the host kernel (with kvm-82 userspace and modules), but stepping
back to 2.6.27.10 (but still kvm-82 userspace and modules) I am unable to
trigger the problem as yet.

If things are still looking good on 2.6.27.10 by the end of the day, I'll
start a git bisect on the host kernel. I can probably only get one or two
compile/test cycles done per day, but hopefully I'll get to the bottom of
this by the end of the week.


----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 23:04

Message:
Hopefully this is okay - I sat watching the ping output in one terminal and
when I noticed the delays showing up I ran kvm_stat -1 a couple of times in
another terminal. As best I can tell I got it to run during the period with
the stalled packets, but it doesn't feel like a very precise way of doing
it.

Here are three examples - each shows the ping output around the time of the
stall(s), and the 'kvm_stat -1' output below each should coincide with the
time when the stalls were occurring.
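The manual two-terminal approach can be scripted. A sketch (not from the original report): interleave timestamps with one-shot counter samples so the log can be lined up against the icmp_seq timings afterwards. On the host, STAT_CMD would be 'kvm_stat -1' (the one-shot mode whose output appears below); the stub default only keeps the sketch self-contained.

```shell
# Sketch: timestamped one-shot counter samples, one per second, logged to a
# file so counter deltas can be matched against ping latency spikes later.
# STAT_CMD='kvm_stat -1' on the real host; the echo stub stands in for it.
STAT_CMD=${STAT_CMD:-'echo exits 254320392 38515'}
for i in 1 2 3; do           # on the host: hundreds of samples, not three
    date
    sh -c "$STAT_CMD"
    sleep 1
done > kvm_stat.log
```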

64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=455 ttl=64
time=0.532 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=456 ttl=64
time=0.546 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=457 ttl=64
time=0.536 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=458 ttl=64
time=0.564 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=459 ttl=64
time=3870 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=460 ttl=64
time=2862 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=461 ttl=64
time=1854 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=462 ttl=64
time=846 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=463 ttl=64
time=0.519 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=464 ttl=64
time=0.285 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=465 ttl=64
time=0.526 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=466 ttl=64
time=0.514 ms

Fri Jan  9 22:51:46 CST 2009
efer_reload                   36         0
exits                  254320392     38515
fpu_reload              10447942      3010
halt_exits              14380628      3056
halt_wakeup              2974535       794
host_state_reload       59721044     12192
hypercalls                 86694        20
insn_emulation          20950110      5688
insn_emulation_fail          439         0
invlpg                  10240323      2736
io_exits               100781188     12198
irq_exits                1101409       818
irq_injections          18151357      5237
irq_window               3000109      1857
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1559403      1028
mmu_cache_miss            632126        20
mmu_flooded               390870        18
mmu_pde_zapped            583597         0
mmu_pte_updated          2865958       115
mmu_pte_write            3305636       124
mmu_recycled               11499         0
mmu_shadow_zapped         869397        19
mmu_unsync                 14604         2
mmu_unsync_global             52         3
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50061696      8219
pf_guest                19885071        44
remote_tlb_flush          396841       482
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25014198      5954


64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=653 ttl=64
time=0.544 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=654 ttl=64
time=0.528 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=655 ttl=64
time=0.559 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=656 ttl=64
time=0.510 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=657 ttl=64
time=646 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=658 ttl=64
time=280 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=659 ttl=64
time=314 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=660 ttl=64
time=426 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=661 ttl=64
time=279 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=662 ttl=64
time=909 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=663 ttl=64
time=50.1 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=664 ttl=64
time=1314 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=665 ttl=64
time=306 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=666 ttl=64
time=0.520 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=667 ttl=64
time=0.540 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=668 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=669 ttl=64
time=0.463 ms

Fri Jan  9 22:55:05 CST 2009
efer_reload                   36         0
exits                  259877054     21262
fpu_reload              10735085       904
halt_exits              15068364      2937
halt_wakeup              3140715       755
host_state_reload       61698616      9249
hypercalls                 90713        20
insn_emulation          21669992      3342
insn_emulation_fail          439         0
invlpg                  10299732       189
io_exits               102981490     10766
irq_exits                1121997       261
irq_injections          18987120      4384
irq_window               3131184      1378
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1589318        70
mmu_cache_miss            641035         6
mmu_flooded               397311         6
mmu_pde_zapped            589210         0
mmu_pte_updated          2878333        19
mmu_pte_write            3322010        22
mmu_recycled               11541         0
mmu_shadow_zapped         881104        12
mmu_unsync                 14980       -26
mmu_unsync_global             80       -26
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50348837      1654
pf_guest                19974876         9
remote_tlb_flush          406310         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25414312       705

Fri Jan  9 22:55:07 CST 2009
efer_reload                   36         0
exits                  259923167     22544
fpu_reload              10737416       999
halt_exits              15075348      2923
halt_wakeup              3142504       732
host_state_reload       61719360      9164
hypercalls                 90765        22
insn_emulation          21676868      3324
insn_emulation_fail          439         0
invlpg                  10299986       219
io_exits               103005825     10631
irq_exits                1122654       287
irq_injections          18996545      4286
irq_window               3133448      1258
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1589442       116
mmu_cache_miss            641047        12
mmu_flooded               397323        12
mmu_pde_zapped            589210         0
mmu_pte_updated          2878384        35
mmu_pte_write            3322067        41
mmu_recycled               11541         0
mmu_shadow_zapped         881116        12
mmu_unsync                 14993         3
mmu_unsync_global             95         4
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50352007      3057
pf_guest                19974896        18
remote_tlb_flush          406317         4
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25415857       904


64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=721 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=722 ttl=64
time=0.551 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=723 ttl=64
time=0.466 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=724 ttl=64
time=0.509 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=725 ttl=64
time=693 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=726 ttl=64
time=3397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=727 ttl=64
time=2397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=728 ttl=64
time=1397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=729 ttl=64
time=398 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=730 ttl=64
time=941 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=731 ttl=64
time=1482 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=732 ttl=64
time=479 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=733 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=734 ttl=64
time=0.505 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=735 ttl=64
time=0.507 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=736 ttl=64
time=0.517 ms

Fri Jan  9 22:56:10 CST 2009
efer_reload                   36         0
exits                  261534571     17234
fpu_reload              10800236      1321
halt_exits              15286560      2917
halt_wakeup              3187952       733
host_state_reload       62313130      8560
hypercalls                 92031        21
insn_emulation          21876899      2682
insn_emulation_fail          439         0
invlpg                  10315553        37
io_exits               103688141      9916
irq_exits                1128041       310
irq_injections          19255926      3755
irq_window               3177005       788
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1598313        18
mmu_cache_miss            643199         0
mmu_flooded               398982         0
mmu_pde_zapped            590559         0
mmu_pte_updated          2881041         1
mmu_pte_write            3325787         1
mmu_recycled               11541         0
mmu_shadow_zapped         884135         0
mmu_unsync                 15024        -9
mmu_unsync_global             57       -10
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50419230        84
pf_guest                19997609         2
remote_tlb_flush          408036         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25507415       454

Fri Jan  9 22:56:13 CST 2009
efer_reload                   36         0
exits                  261583705     21909
fpu_reload              10804448      1674
halt_exits              15293967      2915
halt_wakeup              3189804       720
host_state_reload       62336274      9638
hypercalls                 92079        18
insn_emulation          21884951      3758
insn_emulation_fail          439         0
invlpg                  10316276       705
io_exits               103714609     10873
irq_exits                1128838       299
irq_injections          19266675      4707
irq_window               3180199      1728
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1598498       180
mmu_cache_miss            643200         0
mmu_flooded               398982         0
mmu_pde_zapped            590559         0
mmu_pte_updated          2881048         7
mmu_pte_write            3325794         7
mmu_recycled               11541         0
mmu_shadow_zapped         884144         9
mmu_unsync                 15006       -36
mmu_unsync_global             40       -34
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50420142       802
pf_guest                19997630         0
remote_tlb_flush          408042         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25509336      1365


----------------------------------------------------------------------

Comment By: Amit Shah (amitshah)
Date: 2009-01-09 22:17

Message:
If you experience a stall, can you see what the kvm_stat output is at that
time and attach that?

(kvm_stat is included in the kvm tarball)

----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 22:12

Message:
I've done some more testing tonight and unfortunately it's not quite as
easy to reproduce as I had hoped. Here's what I've got so far:

Test1 - Linux 2.6.27.10, kvm-82

$ ping -c 600 hermes-old

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599283ms
rtt min/avg/max/mdev = 0.211/0.407/7.145/0.345 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599423ms
rtt min/avg/max/mdev = 0.194/0.497/6.364/0.290 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599506ms
rtt min/avg/max/mdev = 0.254/0.577/22.347/0.932 ms

Test2 - Linux 2.6.28, kvm-82

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599367ms
rtt min/avg/max/mdev = 0.206/0.408/6.768/0.279 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599347ms
rtt min/avg/max/mdev = 0.254/0.582/77.566/3.166 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599681ms
rtt min/avg/max/mdev = 0.256/328.184/10335.007/1024.143 ms, pipe 11

So, the problem did surface again, but only once. It could be a
2.6.28-specific thing, a bad interaction between the running guests,
something that triggers after a certain amount of guest/host uptime... who
knows. I need to find a more reliable way to trigger it before I will be
able to bisect. Any suggestions welcome.
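One way to leave a long ping run unattended while waiting for a stall (an illustrative sketch, not from the thread; the 100 ms threshold is an assumption) is to filter the ping output down to just the slow replies. In real use the awk one-liner would be fed directly, e.g. `ping hermes-old | awk ...`; below it is demonstrated on two canned reply lines so the sketch stands alone:

```shell
# Sketch: keep only ping replies slower than 100 ms (threshold is arbitrary).
# The awk program splits each reply line on "time=" and compares the RTT.
printf '%s\n' \
  '64 bytes from hermes (192.168.0.249): icmp_seq=1 ttl=64 time=0.532 ms' \
  '64 bytes from hermes (192.168.0.249): icmp_seq=2 ttl=64 time=3870 ms' |
awk -F'time=' '/time=/ {
    split($2, a, " ")           # a[1] is the RTT in ms
    if (a[1] + 0 > 100) print   # numeric compare; print the whole line
}'
# prints only the icmp_seq=2 line
```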


----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 11:17

Message:
I forgot to give the command line for the Linux guest:

/usr/local/kvm/bin/qemu-system-x86_64 \
    -smp 2 \
    -localtime -m 2048 \
    -hda kvm-17-1.img \
    -hdb kvm-17-tmp.img \
    -net nic,vlan=0,macaddr=52:54:00:12:34:67,model=rtl8139 \
    -net tap,vlan=0,ifname=tap17,script=no \
    -vnc 127.0.0.1:17 -usbdevice tablet \
    -daemonize


----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [ kvm-Bugs-2494730 ] Guests "stalling" on kvm-82
@ 2009-01-17 11:26 SourceForge.net
  0 siblings, 0 replies; 7+ messages in thread
From: SourceForge.net @ 2009-01-17 11:26 UTC (permalink / raw)
  To: noreply

Bugs item #2494730, was opened at 2009-01-09 09:59
Message generated for change (Comment added) made by kmshanah
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

----------------------------------------------------------------------

>Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-17 21:56

Message:
Okay, git bisect on the host kernel points to the commit below as the
problem. I'll file a bug on bugzilla.kernel.org and see what they say.

14800984706bf6936bbec5187f736e928be5c218 is first bad commit
commit 14800984706bf6936bbec5187f736e928be5c218
Author: Mike Galbraith <efault@gmx.de>
Date:   Fri Nov 7 15:26:50 2008 +0100

    sched: fine-tune SD_MC_INIT
    
    Tune SD_MC_INIT the same way as SD_CPU_INIT:
    unset SD_BALANCE_NEWIDLE, and set SD_WAKE_BALANCE.
    
    This improves vmark by 5%:
    
    vmark         132102 125968 125497 messages/sec    avg 127855.66   .984
    vmark         139404 131719 131272 messages/sec    avg 134131.66  1.033
    
    Signed-off-by: Mike Galbraith <efault@gmx.de>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    
     # *DOCUMENTATION*

:040000 040000 33b285285748415ddbb5b3dc947a70d9e7cafc94 f70fac391f313b567179b4de24fb3808d057ff87 M	include


time=0.463 ms

Fri Jan  9 22:55:05 CST 2009
efer_reload                   36         0
exits                  259877054     21262
fpu_reload              10735085       904
halt_exits              15068364      2937
halt_wakeup              3140715       755
host_state_reload       61698616      9249
hypercalls                 90713        20
insn_emulation          21669992      3342
insn_emulation_fail          439         0
invlpg                  10299732       189
io_exits               102981490     10766
irq_exits                1121997       261
irq_injections          18987120      4384
irq_window               3131184      1378
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1589318        70
mmu_cache_miss            641035         6
mmu_flooded               397311         6
mmu_pde_zapped            589210         0
mmu_pte_updated          2878333        19
mmu_pte_write            3322010        22
mmu_recycled               11541         0
mmu_shadow_zapped         881104        12
mmu_unsync                 14980       -26
mmu_unsync_global             80       -26
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50348837      1654
pf_guest                19974876         9
remote_tlb_flush          406310         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25414312       705

Fri Jan  9 22:55:07 CST 2009
efer_reload                   36         0
exits                  259923167     22544
fpu_reload              10737416       999
halt_exits              15075348      2923
halt_wakeup              3142504       732
host_state_reload       61719360      9164
hypercalls                 90765        22
insn_emulation          21676868      3324
insn_emulation_fail          439         0
invlpg                  10299986       219
io_exits               103005825     10631
irq_exits                1122654       287
irq_injections          18996545      4286
irq_window               3133448      1258
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1589442       116
mmu_cache_miss            641047        12
mmu_flooded               397323        12
mmu_pde_zapped            589210         0
mmu_pte_updated          2878384        35
mmu_pte_write            3322067        41
mmu_recycled               11541         0
mmu_shadow_zapped         881116        12
mmu_unsync                 14993         3
mmu_unsync_global             95         4
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50352007      3057
pf_guest                19974896        18
remote_tlb_flush          406317         4
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25415857       904


64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=721 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=722 ttl=64
time=0.551 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=723 ttl=64
time=0.466 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=724 ttl=64
time=0.509 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=725 ttl=64
time=693 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=726 ttl=64
time=3397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=727 ttl=64
time=2397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=728 ttl=64
time=1397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=729 ttl=64
time=398 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=730 ttl=64
time=941 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=731 ttl=64
time=1482 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=732 ttl=64
time=479 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=733 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=734 ttl=64
time=0.505 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=735 ttl=64
time=0.507 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=736 ttl=64
time=0.517 ms

Fri Jan  9 22:56:10 CST 2009
efer_reload                   36         0
exits                  261534571     17234
fpu_reload              10800236      1321
halt_exits              15286560      2917
halt_wakeup              3187952       733
host_state_reload       62313130      8560
hypercalls                 92031        21
insn_emulation          21876899      2682
insn_emulation_fail          439         0
invlpg                  10315553        37
io_exits               103688141      9916
irq_exits                1128041       310
irq_injections          19255926      3755
irq_window               3177005       788
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1598313        18
mmu_cache_miss            643199         0
mmu_flooded               398982         0
mmu_pde_zapped            590559         0
mmu_pte_updated          2881041         1
mmu_pte_write            3325787         1
mmu_recycled               11541         0
mmu_shadow_zapped         884135         0
mmu_unsync                 15024        -9
mmu_unsync_global             57       -10
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50419230        84
pf_guest                19997609         2
remote_tlb_flush          408036         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25507415       454

Fri Jan  9 22:56:13 CST 2009
efer_reload                   36         0
exits                  261583705     21909
fpu_reload              10804448      1674
halt_exits              15293967      2915
halt_wakeup              3189804       720
host_state_reload       62336274      9638
hypercalls                 92079        18
insn_emulation          21884951      3758
insn_emulation_fail          439         0
invlpg                  10316276       705
io_exits               103714609     10873
irq_exits                1128838       299
irq_injections          19266675      4707
irq_window               3180199      1728
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1598498       180
mmu_cache_miss            643200         0
mmu_flooded               398982         0
mmu_pde_zapped            590559         0
mmu_pte_updated          2881048         7
mmu_pte_write            3325794         7
mmu_recycled               11541         0
mmu_shadow_zapped         884144         9
mmu_unsync                 15006       -36
mmu_unsync_global             40       -34
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50420142       802
pf_guest                19997630         0
remote_tlb_flush          408042         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25509336      1365
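As an aside for anyone correlating these dumps: the two columns kvm_stat prints appear to be the cumulative total and the per-interval delta, so a snapshot can be ranked by delta with a few lines of Python. This is only an illustrative sketch against the format shown above (the helper names are made up, not part of kvm_stat):

```python
def parse_kvm_stat(text):
    """Parse 'name  total  delta' lines (kvm_stat -1 style output)
    into {name: (total, delta)}."""
    stats = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue
        name, total, delta = parts
        try:
            stats[name] = (int(total), int(delta))
        except ValueError:
            pass  # skip non-numeric lines such as timestamps
    return stats

def busiest(stats, n=3):
    """Return the n counters with the largest per-interval delta."""
    return sorted(stats, key=lambda k: stats[k][1], reverse=True)[:n]

sample = """\
exits                  254320392     38515
io_exits               100781188     12198
halt_exits              14380628      3056
signal_exits                  17         0
"""
print(busiest(parse_kvm_stat(sample)))  # -> ['exits', 'io_exits', 'halt_exits']
```

Ranking successive snapshots this way makes it easier to spot which exit type spikes during a stall than eyeballing forty counters per dump.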


----------------------------------------------------------------------

Comment By: Amit Shah (amitshah)
Date: 2009-01-09 22:17

Message:
If you experience a stall, can you see what the kvm_stat output is at that
time and attach that?

(kvm_stat is included in the kvm tarball)

----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 22:12

Message:
I've done some more testing tonight and unfortunately it's not quite as
easy to reproduce as I had hoped. Here's what I've got so far:

Test 1 - Linux 2.6.27.10, kvm-82

$ ping -c 600 hermes-old

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599283ms
rtt min/avg/max/mdev = 0.211/0.407/7.145/0.345 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599423ms
rtt min/avg/max/mdev = 0.194/0.497/6.364/0.290 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599506ms
rtt min/avg/max/mdev = 0.254/0.577/22.347/0.932 ms

Test 2 - Linux 2.6.28, kvm-82

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599367ms
rtt min/avg/max/mdev = 0.206/0.408/6.768/0.279 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599347ms
rtt min/avg/max/mdev = 0.254/0.582/77.566/3.166 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599681ms
rtt min/avg/max/mdev = 0.256/328.184/10335.007/1024.143 ms, pipe 11

So the problem did surface again, but only once. It could be a
2.6.28-specific thing, a bad interaction between the running guests,
something that triggers after a certain amount of guest/host uptime... who
knows. I need to find a more reliable way to trigger it before I can
bisect. Any suggestions welcome.
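Incidentally, the mdev figure in these summaries is (in iputils ping, as far as I know) the population standard deviation of the samples, sqrt(mean(rtt^2) - mean(rtt)^2), which is why a single multi-second stall blows out max and mdev while leaving min untouched. A small illustrative sketch, not from the thread:

```python
import math

def ping_summary(rtts):
    """Reproduce ping's min/avg/max/mdev from RTT samples in ms.
    mdev here is sqrt(E[x^2] - E[x]^2), the formula iputils ping
    appears to use (clamped at 0 against float rounding)."""
    n = len(rtts)
    avg = sum(rtts) / n
    mdev = math.sqrt(max(sum(r * r for r in rtts) / n - avg * avg, 0.0))
    return min(rtts), avg, max(rtts), mdev

# 99 normal replies plus one 6.3 s stall: max and mdev explode,
# avg is dragged up, min stays at the quiet-LAN baseline.
lo, avg, hi, mdev = ping_summary([0.5] * 99 + [6291.9])
```

This matches the shape of the summaries above: a handful of stalled packets in 600 is enough to push avg into the hundreds of ms and mdev into the hundreds as well.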


----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 11:17

Message:
I forgot to give the command line for the Linux guest:

/usr/local/kvm/bin/qemu-system-x86_64 \
    -smp 2 \
    -localtime -m 2048 \
    -hda kvm-17-1.img \
    -hdb kvm-17-tmp.img \
    -net nic,vlan=0,macaddr=52:54:00:12:34:67,model=rtl8139 \
    -net tap,vlan=0,ifname=tap17,script=no \
    -vnc 127.0.0.1:17 -usbdevice tablet \
    -daemonize


----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [ kvm-Bugs-2494730 ] Guests "stalling" on kvm-82
@ 2009-01-09 12:34 SourceForge.net
  0 siblings, 0 replies; 7+ messages in thread
From: SourceForge.net @ 2009-01-09 12:34 UTC (permalink / raw)
  To: noreply

Bugs item #2494730, was opened at 2009-01-09 09:59
Message generated for change (Comment added) made by kmshanah
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Kevin Shanahan (kmshanah)
Assigned to: Nobody/Anonymous (nobody)
Summary: Guests "stalling" on kvm-82

Initial Comment:
I am seeing periodic stalls in Linux and Windows guests with kvm-82 on an IBM X3550 server with 2 x Xeon 5130 CPUs and 32GB RAM.

I am *reasonably* certain that this is a regression somewhere between kvm-72 and kvm-82. We had been running kvm-72 (actually, the Debian kvm-source package) up until now and never noticed the problem. Now the stalls are very obvious. When the guest stalls, at least one kvm process on the host gobbles up 100% CPU. I'll do my debugging with the Linux guest, as that's sure to be easier to deal with.

As a simple demonstration that the guest is unresponsive, here is the result of pinging the guest from another machine on the (very quiet) LAN:

--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599659ms
rtt min/avg/max/mdev = 0.255/181.211/6291.871/558.706 ms, pipe 7

The latency varies pretty badly, with spikes up to several seconds as you can see.

The problem is not reproducible on other VT-capable hardware that I have - e.g. my desktop has an E8400 CPU, which runs the VMs just fine. Does knowing that make it any easier to guess where the problem might be?

The Xeon 5130 does not have the "smx", "est", "sse4_1", "xsave", "vnmi" and "flexpriority" CPU flags that the E8400 does.

Because this server is the only hardware I have which exhibits the problem, and it's a production machine, there are only limited windows in which I can do testing. However, I will try to confirm that kvm-72 is okay and then bisect.

Currently the host is running a 2.6.28 kernel with the kvm-82 modules. I guess I'm likely to have problems compiling the older kvm releases against this kernel, so I'll have to drop back to 2.6.27.something to run the tests.

CPU Vendor: Intel
CPU Type: Xeon 5130
Number of CPUs: 2
Host distribution: Debian Lenny/Sid
KVM version: kvm-82
Host kernel: Linux 2.6.28 x86_64
Guest Distribution: Debian Etch
Guest kernel: Linux 2.6.27.10 i686

Host's /proc/cpuinfo:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips        : 3990.23
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips        : 3989.96
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 3
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 6
initial apicid  : 6
fpu             : yes
fpu_exception   : yes
cpuid level	: 10
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips	: 3990.01
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 15
model name	: Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping	: 6
cpu MHz		: 1995.117
cache size	: 4096 KB
physical id	: 3
siblings	: 2
core id		: 1
cpu cores	: 2
apicid		: 7
initial apicid	: 7
fpu		: yes
fpu_exception	: yes
cpuid level	: 10
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips	: 3990.01
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:


----------------------------------------------------------------------

>Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 23:04

Message:
Hopefully this is okay - I sat watching the ping output in one terminal and
when I noticed the delays showing up I ran 'kvm_stat -1' a couple of times
in another terminal. As best I can tell I got it to run during the period
with the stalled packets, but it doesn't feel like a very precise way of
doing it.

Here's three examples - each shows the ping output around the time of the
stall(s) and the 'vmstat -1' output below each should coincide with the
time when the stalls were occurring.

64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=455 ttl=64
time=0.532 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=456 ttl=64
time=0.546 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=457 ttl=64
time=0.536 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=458 ttl=64
time=0.564 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=459 ttl=64
time=3870 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=460 ttl=64
time=2862 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=461 ttl=64
time=1854 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=462 ttl=64
time=846 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=463 ttl=64
time=0.519 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=464 ttl=64
time=0.285 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=465 ttl=64
time=0.526 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=466 ttl=64
time=0.514 ms

Fri Jan  9 22:51:46 CST 2009
efer_reload                   36         0
exits                  254320392     38515
fpu_reload              10447942      3010
halt_exits              14380628      3056
halt_wakeup              2974535       794
host_state_reload       59721044     12192
hypercalls                 86694        20
insn_emulation          20950110      5688
insn_emulation_fail          439         0
invlpg                  10240323      2736
io_exits               100781188     12198
irq_exits                1101409       818
irq_injections          18151357      5237
irq_window               3000109      1857
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1559403      1028
mmu_cache_miss            632126        20
mmu_flooded               390870        18
mmu_pde_zapped            583597         0
mmu_pte_updated          2865958       115
mmu_pte_write            3305636       124
mmu_recycled               11499         0
mmu_shadow_zapped         869397        19
mmu_unsync                 14604         2
mmu_unsync_global             52         3
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50061696      8219
pf_guest                19885071        44
remote_tlb_flush          396841       482
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25014198      5954


64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=653 ttl=64
time=0.544 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=654 ttl=64
time=0.528 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=655 ttl=64
time=0.559 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=656 ttl=64
time=0.510 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=657 ttl=64
time=646 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=658 ttl=64
time=280 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=659 ttl=64
time=314 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=660 ttl=64
time=426 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=661 ttl=64
time=279 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=662 ttl=64
time=909 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=663 ttl=64
time=50.1 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=664 ttl=64
time=1314 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=665 ttl=64
time=306 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=666 ttl=64
time=0.520 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=667 ttl=64
time=0.540 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=668 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=669 ttl=64
time=0.463 ms

Fri Jan  9 22:55:05 CST 2009
efer_reload                   36         0
exits                  259877054     21262
fpu_reload              10735085       904
halt_exits              15068364      2937
halt_wakeup              3140715       755
host_state_reload       61698616      9249
hypercalls                 90713        20
insn_emulation          21669992      3342
insn_emulation_fail          439         0
invlpg                  10299732       189
io_exits               102981490     10766
irq_exits                1121997       261
irq_injections          18987120      4384
irq_window               3131184      1378
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1589318        70
mmu_cache_miss            641035         6
mmu_flooded               397311         6
mmu_pde_zapped            589210         0
mmu_pte_updated          2878333        19
mmu_pte_write            3322010        22
mmu_recycled               11541         0
mmu_shadow_zapped         881104        12
mmu_unsync                 14980       -26
mmu_unsync_global             80       -26
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50348837      1654
pf_guest                19974876         9
remote_tlb_flush          406310         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25414312       705

Fri Jan  9 22:55:07 CST 2009
efer_reload                   36         0
exits                  259923167     22544
fpu_reload              10737416       999
halt_exits              15075348      2923
halt_wakeup              3142504       732
host_state_reload       61719360      9164
hypercalls                 90765        22
insn_emulation          21676868      3324
insn_emulation_fail          439         0
invlpg                  10299986       219
io_exits               103005825     10631
irq_exits                1122654       287
irq_injections          18996545      4286
irq_window               3133448      1258
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1589442       116
mmu_cache_miss            641047        12
mmu_flooded               397323        12
mmu_pde_zapped            589210         0
mmu_pte_updated          2878384        35
mmu_pte_write            3322067        41
mmu_recycled               11541         0
mmu_shadow_zapped         881116        12
mmu_unsync                 14993         3
mmu_unsync_global             95         4
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50352007      3057
pf_guest                19974896        18
remote_tlb_flush          406317         4
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25415857       904


64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=721 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=722 ttl=64
time=0.551 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=723 ttl=64
time=0.466 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=724 ttl=64
time=0.509 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=725 ttl=64
time=693 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=726 ttl=64
time=3397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=727 ttl=64
time=2397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=728 ttl=64
time=1397 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=729 ttl=64
time=398 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=730 ttl=64
time=941 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=731 ttl=64
time=1482 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=732 ttl=64
time=479 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=733 ttl=64
time=0.527 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=734 ttl=64
time=0.505 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=735 ttl=64
time=0.507 ms
64 bytes from hermes.wumi.org.au (192.168.0.249): icmp_seq=736 ttl=64
time=0.517 ms

Fri Jan  9 22:56:10 CST 2009
efer_reload                   36         0
exits                  261534571     17234
fpu_reload              10800236      1321
halt_exits              15286560      2917
halt_wakeup              3187952       733
host_state_reload       62313130      8560
hypercalls                 92031        21
insn_emulation          21876899      2682
insn_emulation_fail          439         0
invlpg                  10315553        37
io_exits               103688141      9916
irq_exits                1128041       310
irq_injections          19255926      3755
irq_window               3177005       788
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1598313        18
mmu_cache_miss            643199         0
mmu_flooded               398982         0
mmu_pde_zapped            590559         0
mmu_pte_updated          2881041         1
mmu_pte_write            3325787         1
mmu_recycled               11541         0
mmu_shadow_zapped         884135         0
mmu_unsync                 15024        -9
mmu_unsync_global             57       -10
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50419230        84
pf_guest                19997609         2
remote_tlb_flush          408036         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25507415       454

Fri Jan  9 22:56:13 CST 2009
efer_reload                   36         0
exits                  261583705     21909
fpu_reload              10804448      1674
halt_exits              15293967      2915
halt_wakeup              3189804       720
host_state_reload       62336274      9638
hypercalls                 92079        18
insn_emulation          21884951      3758
insn_emulation_fail          439         0
invlpg                  10316276       705
io_exits               103714609     10873
irq_exits                1128838       299
irq_injections          19266675      4707
irq_window               3180199      1728
kvm_request_irq                0         0
largepages                     0         0
mmio_exits               1598498       180
mmu_cache_miss            643200         0
mmu_flooded               398982         0
mmu_pde_zapped            590559         0
mmu_pte_updated          2881048         7
mmu_pte_write            3325794         7
mmu_recycled               11541         0
mmu_shadow_zapped         884144         9
mmu_unsync                 15006       -36
mmu_unsync_global             40       -34
nmi_injections                 0         0
nmi_window                     0         0
pf_fixed                50420142       802
pf_guest                19997630         0
remote_tlb_flush          408042         1
request_nmi                    0         0
signal_exits                  17         0
tlb_flush               25509336      1365


----------------------------------------------------------------------

Comment By: Amit Shah (amitshah)
Date: 2009-01-09 22:17

Message:
If you experience a stall, can you see what the kvm_stat output is at that
time and attach that?

(kvm_stat is included in the kvm tarball)

----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 22:12

Message:
I've done some more testing tonight and unfortunately it's not quite as
easy to reproduce as I had hoped. Here's what I got so far:

Test1 - Linux 2.6.27.10, kvm-82

$ ping -c 600 hermes-old

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599283ms
rtt min/avg/max/mdev = 0.211/0.407/7.145/0.345 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599423ms
rtt min/avg/max/mdev = 0.194/0.497/6.364/0.290 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599506ms
rtt min/avg/max/mdev = 0.254/0.577/22.347/0.932 ms

Test2 - Linux 2.6.28, kvm-82

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599367ms
rtt min/avg/max/mdev = 0.206/0.408/6.768/0.279 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599347ms
rtt min/avg/max/mdev = 0.254/0.582/77.566/3.166 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599681ms
rtt min/avg/max/mdev = 0.256/328.184/10335.007/1024.143 ms, pipe 11

So, the problem did surface again but only the one time. It could be a
2.6.28 specific thing, a bad interaction between the running guests,
something that triggers after a certain amount of guest/host uptime... who
knows. I need to find a more reliable way to trigger it before I will be
able to bisect. Any suggestions welcome.
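
Since the stalls only show up in the 10-minute ping summaries after the fact, a filter that flags individual replies over a latency threshold could catch a stall the moment it happens. This is an illustrative sketch (the threshold of 500 ms and the hostname are taken from the tests above, not prescribed anywhere):

```shell
# Read ping output on stdin and echo any reply whose rtt exceeds $1 (in ms).
flag_stalls() {
    awk -v limit="$1" -F'time=' '
        NF > 1 { split($2, a, " "); if (a[1] + 0 > limit) print $0 }'
}
# e.g.:  ping hermes-old | flag_stalls 500
```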


----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 11:17

Message:
I forgot to give the command line for the Linux guest:

/usr/local/kvm/bin/qemu-system-x86_64 \
    -smp 2 \
    -localtime -m 2048 \
    -hda kvm-17-1.img \
    -hdb kvm-17-tmp.img \
    -net nic,vlan=0,macaddr=52:54:00:12:34:67,model=rtl8139 \
    -net tap,vlan=0,ifname=tap17,script=no \
    -vnc 127.0.0.1:17 -usbdevice tablet \
    -daemonize
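
Given the report that a kvm process pins a host CPU at 100% during a stall, a per-thread view would show whether it is a vcpu thread or the io thread that is spinning. A possible sketch, assuming a Linux host with procps `ps` (the qemu match string just mirrors the command line above):

```shell
# List the threads (LWPs) of process $1 sorted by CPU usage, busiest first.
busy_threads() {
    ps -L -o lwp=,pcpu=,comm= --sort=-pcpu -p "$1"
}
# During a stall, against the guest started above:
#   busy_threads "$(pgrep -o -f qemu-system-x86_64)"
```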


----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [ kvm-Bugs-2494730 ] Guests "stalling" on kvm-82
@ 2009-01-09 11:47 SourceForge.net
  0 siblings, 0 replies; 7+ messages in thread
From: SourceForge.net @ 2009-01-09 11:47 UTC (permalink / raw)
  To: noreply

Bugs item #2494730, was opened at 2009-01-09 04:59
Message generated for change (Comment added) made by amitshah
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Kevin Shanahan (kmshanah)
Assigned to: Nobody/Anonymous (nobody)
Summary: Guests "stalling" on kvm-82

Initial Comment:
I am seeing periodic stalls in Linux and Windows guests with kvm-82 on an IBM X3550 server with 2 x Xeon 5130 CPUs and 32GB RAM.

I am *reasonably* certain that this is a regression somewhere between kvm-72 and kvm-82. We had been running kvm-72 (actually, the Debian kvm-source package) up until now and never noticed the problem. Now the stalls are very obvious. When the guest stalls, at least one kvm process on the host gobbles up 100% CPU. I'll do my debugging with the Linux guest, as that's sure to be easier to deal with.

As a simple demonstration that the guest is unresponsive, here is the result of pinging the guest from another machine on the (very quiet) LAN:

--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599659ms
rtt min/avg/max/mdev = 0.255/181.211/6291.871/558.706 ms, pipe 7

The latency varies pretty badly, with spikes up to several seconds as you can see.

The problem is not reproducible on other VT-capable hardware that I have - e.g. my desktop has an E8400 CPU which runs the VMs just fine. Does knowing that make it any easier to guess where the problem might be?

The Xeon 5130 does not have the "smx", "est", "sse4_1", "xsave", "vnmi" and "flexpriority" CPU flags that the E8400 does.

Because this server is the only hardware I have which exhibits the problem and it's a production machine, I have only limited windows in which I can do testing. However, I will try to confirm that kvm-72 is okay and then bisect.

Currently the host is running a 2.6.28 kernel with the kvm-82 modules. I guess I'm likely to have problems compiling the older kvm releases against this kernel, so I'll have to drop back to 2.6.27.something to run the tests.

CPU Vendor: Intel
CPU Type: Xeon 5130
Number of CPUs: 2
Host distribution: Debian Lenny/Sid
KVM version: kvm-82
Host kernel: Linux 2.6.28 x86_64
Guest Distribution: Debian Etch
Guest kernel: Linux 2.6.27.10 i686

Host's /proc/cpuinfo:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips        : 3990.23
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips        : 3989.96
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping        : 6
cpu MHz         : 1995.117
cache size      : 4096 KB
physical id     : 3
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 6
initial apicid  : 6
fpu             : yes
fpu_exception   : yes
cpuid level	: 10
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips	: 3990.01
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 15
model name	: Intel(R) Xeon(R) CPU            5130  @ 2.00GHz
stepping	: 6
cpu MHz		: 1995.117
cache size	: 4096 KB
physical id	: 3
siblings	: 2
core id		: 1
cpu cores	: 2
apicid		: 7
initial apicid	: 7
fpu		: yes
fpu_exception	: yes
cpuid level	: 10
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow
bogomips	: 3990.01
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:


----------------------------------------------------------------------

>Comment By: Amit Shah (amitshah)
Date: 2009-01-09 17:17

Message:
If you experience a stall, can you see what the kvm_stat output is at that
time and attach that?

(kvm_stat is included in the kvm tarball)

----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 17:12

Message:
I've done some more testing tonight and unfortunately it's not quite as
easy to reproduce as I had hoped. Here's what I got so far:

Test1 - Linux 2.6.27.10, kvm-82

$ ping -c 600 hermes-old

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599283ms
rtt min/avg/max/mdev = 0.211/0.407/7.145/0.345 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599423ms
rtt min/avg/max/mdev = 0.194/0.497/6.364/0.290 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599506ms
rtt min/avg/max/mdev = 0.254/0.577/22.347/0.932 ms

Test2 - Linux 2.6.28, kvm-82

(with no other guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599367ms
rtt min/avg/max/mdev = 0.206/0.408/6.768/0.279 ms

(three other guests)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599347ms
rtt min/avg/max/mdev = 0.254/0.582/77.566/3.166 ms

(all guests running)
--- hermes-old.wumi.org.au ping statistics ---
600 packets transmitted, 600 received, 0% packet loss, time 599681ms
rtt min/avg/max/mdev = 0.256/328.184/10335.007/1024.143 ms, pipe 11

So, the problem did surface again but only the one time. It could be a
2.6.28 specific thing, a bad interaction between the running guests,
something that triggers after a certain amount of guest/host uptime... who
knows. I need to find a more reliable way to trigger it before I will be
able to bisect. Any suggestions welcome.


----------------------------------------------------------------------

Comment By: Kevin Shanahan (kmshanah)
Date: 2009-01-09 06:17

Message:
I forgot to give the command line for the Linux guest:

/usr/local/kvm/bin/qemu-system-x86_64 \
    -smp 2 \
    -localtime -m 2048 \
    -hda kvm-17-1.img \
    -hdb kvm-17-tmp.img \
    -net nic,vlan=0,macaddr=52:54:00:12:34:67,model=rtl8139 \
    -net tap,vlan=0,ifname=tap17,script=no \
    -vnc 127.0.0.1:17 -usbdevice tablet \
    -daemonize


----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2494730&group_id=180599

^ permalink raw reply	[flat|nested] 7+ messages in thread



end of thread, other threads:[~2009-01-17 11:26 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-01-12  0:47 [ kvm-Bugs-2494730 ] Guests "stalling" on kvm-82 SourceForge.net
  -- strict thread matches above, loose matches on Subject: below --
2009-01-17 11:26 SourceForge.net
2009-01-09 12:34 SourceForge.net
2009-01-09 11:47 SourceForge.net
2009-01-09 11:42 SourceForge.net
2009-01-09  0:47 SourceForge.net
2009-01-08 23:29 SourceForge.net

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).