* vm performance degradation after kvm live migration or save-restore with EPT enabled
From: Zhanghaoyu (A) @ 2013-07-11 9:36 UTC (permalink / raw)
To: KVM, qemu-devel, cloudfantom, mpetersen, Shouta.Uehara,
paolo.bonzini, Michael S. Tsirkin
Cc: Luonengjun, Zanghongyong, Hanweidong, Huangweidong (C)
hi all,
I hit a problem similar to the ones below while performing live-migration and save-restore tests on the KVM platform (qemu 1.4.0, host SUSE 11 SP2, guest SUSE 11 SP2), running a tele-communication software suite in the guest:
https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
https://bugzilla.kernel.org/show_bug.cgi?id=58771
After live migration or virsh restore [savefile], one process's CPU utilization went up by about 30%, resulting in throughput degradation for that process.
oprofile report on this process in the guest,
pre live migration:
CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples % app name symbol name
248 12.3016 no-vmlinux (no symbols)
78 3.8690 libc.so.6 memset
68 3.3730 libc.so.6 memcpy
30 1.4881 cscf.scu SipMmBufMemAlloc
29 1.4385 libpthread.so.0 pthread_mutex_lock
26 1.2897 cscf.scu SipApiGetNextIe
25 1.2401 cscf.scu DBFI_DATA_Search
20 0.9921 libpthread.so.0 __pthread_mutex_unlock_usercnt
16 0.7937 cscf.scu DLM_FreeSlice
16 0.7937 cscf.scu receivemessage
15 0.7440 cscf.scu SipSmCopyString
14 0.6944 cscf.scu DLM_AllocSlice
post live migration:
CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples % app name symbol name
1586 42.2370 libc.so.6 memcpy
271 7.2170 no-vmlinux (no symbols)
83 2.2104 libc.so.6 memset
41 1.0919 libpthread.so.0 __pthread_mutex_unlock_usercnt
35 0.9321 cscf.scu SipMmBufMemAlloc
29 0.7723 cscf.scu DLM_AllocSlice
28 0.7457 libpthread.so.0 pthread_mutex_lock
23 0.6125 cscf.scu SipApiGetNextIe
17 0.4527 cscf.scu SipSmCopyString
16 0.4261 cscf.scu receivemessage
15 0.3995 cscf.scu SipcMsgStatHandle
14 0.3728 cscf.scu Urilex
12 0.3196 cscf.scu DBFI_DATA_Search
12 0.3196 cscf.scu SipDsmGetHdrBitValInner
12 0.3196 cscf.scu SipSmGetDataFromRefString
So memcpy costs many more CPU cycles after live migration. After I restarted the process, the problem disappeared. save-restore shows a similar problem.
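For context, a timer-mode report like the ones above is typically produced with the legacy opcontrol interface inside the guest; this is a hedged reconstruction of the workflow, not the reporter's exact commands:

```shell
opcontrol --no-vmlinux --start   # timer-interrupt mode, matching the report header
# ... let cscf.scu handle traffic for a while ...
opcontrol --dump                 # flush collected samples to the sample files
opreport -l                      # per-symbol sample breakdown, as shown above
opcontrol --shutdown
```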
perf counter stats for the vcpu thread on the host:
pre live migration:
Performance counter stats for thread id '21082':
0 page-faults
0 minor-faults
0 major-faults
31616 cs
506 migrations
0 alignment-faults
0 emulation-faults
5075957539 L1-dcache-loads [21.32%]
324685106 L1-dcache-load-misses # 6.40% of all L1-dcache hits [21.85%]
3681777120 L1-dcache-stores [21.65%]
65251823 L1-dcache-store-misses # 1.77% [22.78%]
0 L1-dcache-prefetches [22.84%]
0 L1-dcache-prefetch-misses [22.32%]
9321652613 L1-icache-loads [22.60%]
1353418869 L1-icache-load-misses # 14.52% of all L1-icache hits [21.92%]
169126969 LLC-loads [21.87%]
12583605 LLC-load-misses # 7.44% of all LL-cache hits [ 5.84%]
132853447 LLC-stores [ 6.61%]
10601171 LLC-store-misses #7.9% [ 5.01%]
25309497 LLC-prefetches #30% [ 4.96%]
7723198 LLC-prefetch-misses [ 6.04%]
4954075817 dTLB-loads [11.56%]
26753106 dTLB-load-misses # 0.54% of all dTLB cache hits [16.80%]
3553702874 dTLB-stores [22.37%]
4720313 dTLB-store-misses #0.13% [21.46%]
<not counted> dTLB-prefetches
<not counted> dTLB-prefetch-misses
60.000920666 seconds time elapsed
post live migration:
Performance counter stats for thread id '1579':
0 page-faults [100.00%]
0 minor-faults [100.00%]
0 major-faults [100.00%]
34979 cs [100.00%]
441 migrations [100.00%]
0 alignment-faults [100.00%]
0 emulation-faults
6903585501 L1-dcache-loads [22.06%]
525939560 L1-dcache-load-misses # 7.62% of all L1-dcache hits [21.97%]
5042552685 L1-dcache-stores [22.20%]
94493742 L1-dcache-store-misses #1.8% [22.06%]
0 L1-dcache-prefetches [22.39%]
0 L1-dcache-prefetch-misses [22.47%]
13022953030 L1-icache-loads [22.25%]
1957161101 L1-icache-load-misses # 15.03% of all L1-icache hits [22.47%]
348479792 LLC-loads [22.27%]
80662778 LLC-load-misses # 23.15% of all LL-cache hits [ 5.64%]
198745620 LLC-stores [ 5.63%]
14236497 LLC-store-misses # 7.1% [ 5.41%]
20757435 LLC-prefetches [ 5.42%]
5361819 LLC-prefetch-misses # 25% [ 5.69%]
7235715124 dTLB-loads [11.26%]
49895163 dTLB-load-misses # 0.69% of all dTLB cache hits [16.96%]
5168276218 dTLB-stores [22.44%]
6765983 dTLB-store-misses #0.13% [22.24%]
<not counted> dTLB-prefetches
<not counted> dTLB-prefetch-misses
The "LLC-load-misses" ratio went up by about 16 percentage points. After I restarted the process in the guest, the perf data went back to normal:
Performance counter stats for thread id '1579':
0 page-faults [100.00%]
0 minor-faults [100.00%]
0 major-faults [100.00%]
30594 cs [100.00%]
327 migrations [100.00%]
0 alignment-faults [100.00%]
0 emulation-faults
7707091948 L1-dcache-loads [22.10%]
559829176 L1-dcache-load-misses # 7.26% of all L1-dcache hits [22.28%]
5976654983 L1-dcache-stores [23.22%]
160436114 L1-dcache-store-misses [22.80%]
0 L1-dcache-prefetches [22.51%]
0 L1-dcache-prefetch-misses [22.53%]
13798415672 L1-icache-loads [22.28%]
2017724676 L1-icache-load-misses # 14.62% of all L1-icache hits [22.49%]
254598008 LLC-loads [22.86%]
16035378 LLC-load-misses # 6.30% of all LL-cache hits [ 5.36%]
307019606 LLC-stores [ 5.60%]
13665033 LLC-store-misses [ 5.43%]
17715554 LLC-prefetches [ 5.57%]
4187006 LLC-prefetch-misses [ 5.44%]
7811502895 dTLB-loads [10.72%]
40547330 dTLB-load-misses # 0.52% of all dTLB cache hits [16.31%]
6144202516 dTLB-stores [21.58%]
6313363 dTLB-store-misses [21.91%]
<not counted> dTLB-prefetches
<not counted> dTLB-prefetch-misses
60.000812523 seconds time elapsed
With EPT disabled, this problem is gone.
I suspect the KVM hypervisor is involved in this problem.
Based on that suspicion, I want to find two adjacent versions of kvm-kmod, one that triggers the problem and one that does not (e.g. 2.6.39 and 3.0-rc1), and then either analyze the differences between these two versions or apply the patches between them by bisection to find the key patches.
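The bisection plan can be mechanized with `git bisect run`. Below is a self-contained dry run on a tiny synthetic history; for the real problem, substitute the kvm tree with v2.6.39 marked good and v3.0-rc1 marked bad, and replace the test command with a script that builds kvm-kmod, migrates the guest, and measures the memcpy cost:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect
# Five commits; pretend the regression appeared in "commit 4" (state >= 4).
for i in 1 2 3 4 5; do
    echo "$i" > state
    git add state && git commit -qm "commit $i"
done
git bisect start HEAD HEAD~4          # HEAD is bad, HEAD~4 (commit 1) is good
# Exit 0 = good, non-zero = bad; here "bad" means state >= 4.
git bisect run sh -c 'test "$(cat state)" -lt 4' >/dev/null
git log -1 --format=%s refs/bisect/bad   # prints "commit 4", the first bad commit
```

After `git bisect run` finishes, `refs/bisect/bad` points at the first bad commit, so only log2(N) builds are needed rather than one per patch.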
Any better ideas?
Thanks,
Zhang Haoyu
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
From: Michael S. Tsirkin @ 2013-07-11 10:28 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: KVM, qemu-devel, cloudfantom, mpetersen, Shouta.Uehara,
paolo.bonzini, Luonengjun, Zanghongyong, Hanweidong,
Huangweidong (C)
On Thu, Jul 11, 2013 at 09:36:47AM +0000, Zhanghaoyu (A) wrote:
> hi all,
>
> I met similar problem to these, while performing live migration or save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running tele-communication software suite in guest,
> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>
> After live migration or virsh restore [savefile], one process's CPU utilization went up by about 30%, resulted in throughput degradation of this process.
> oprofile report on this process in guest,
> pre live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 248 12.3016 no-vmlinux (no symbols)
> 78 3.8690 libc.so.6 memset
> 68 3.3730 libc.so.6 memcpy
> 30 1.4881 cscf.scu SipMmBufMemAlloc
> 29 1.4385 libpthread.so.0 pthread_mutex_lock
> 26 1.2897 cscf.scu SipApiGetNextIe
> 25 1.2401 cscf.scu DBFI_DATA_Search
> 20 0.9921 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 16 0.7937 cscf.scu DLM_FreeSlice
> 16 0.7937 cscf.scu receivemessage
> 15 0.7440 cscf.scu SipSmCopyString
> 14 0.6944 cscf.scu DLM_AllocSlice
>
> post live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 1586 42.2370 libc.so.6 memcpy
> 271 7.2170 no-vmlinux (no symbols)
> 83 2.2104 libc.so.6 memset
> 41 1.0919 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 35 0.9321 cscf.scu SipMmBufMemAlloc
> 29 0.7723 cscf.scu DLM_AllocSlice
> 28 0.7457 libpthread.so.0 pthread_mutex_lock
> 23 0.6125 cscf.scu SipApiGetNextIe
> 17 0.4527 cscf.scu SipSmCopyString
> 16 0.4261 cscf.scu receivemessage
> 15 0.3995 cscf.scu SipcMsgStatHandle
> 14 0.3728 cscf.scu Urilex
> 12 0.3196 cscf.scu DBFI_DATA_Search
> 12 0.3196 cscf.scu SipDsmGetHdrBitValInner
> 12 0.3196 cscf.scu SipSmGetDataFromRefString
>
> So, memcpy costs much more cpu cycles after live migration. Then, I restart the process, this problem disappeared. save-restore has the similar problem.
>
> perf report on vcpu thread in host,
> pre live migration:
> Performance counter stats for thread id '21082':
>
> 0 page-faults
> 0 minor-faults
> 0 major-faults
> 31616 cs
> 506 migrations
> 0 alignment-faults
> 0 emulation-faults
> 5075957539 L1-dcache-loads [21.32%]
> 324685106 L1-dcache-load-misses # 6.40% of all L1-dcache hits [21.85%]
> 3681777120 L1-dcache-stores [21.65%]
> 65251823 L1-dcache-store-misses # 1.77% [22.78%]
> 0 L1-dcache-prefetches [22.84%]
> 0 L1-dcache-prefetch-misses [22.32%]
> 9321652613 L1-icache-loads [22.60%]
> 1353418869 L1-icache-load-misses # 14.52% of all L1-icache hits [21.92%]
> 169126969 LLC-loads [21.87%]
> 12583605 LLC-load-misses # 7.44% of all LL-cache hits [ 5.84%]
> 132853447 LLC-stores [ 6.61%]
> 10601171 LLC-store-misses #7.9% [ 5.01%]
> 25309497 LLC-prefetches #30% [ 4.96%]
> 7723198 LLC-prefetch-misses [ 6.04%]
> 4954075817 dTLB-loads [11.56%]
> 26753106 dTLB-load-misses # 0.54% of all dTLB cache hits [16.80%]
> 3553702874 dTLB-stores [22.37%]
> 4720313 dTLB-store-misses #0.13% [21.46%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000920666 seconds time elapsed
>
> post live migration:
> Performance counter stats for thread id '1579':
>
> 0 page-faults [100.00%]
> 0 minor-faults [100.00%]
> 0 major-faults [100.00%]
> 34979 cs [100.00%]
> 441 migrations [100.00%]
> 0 alignment-faults [100.00%]
> 0 emulation-faults
> 6903585501 L1-dcache-loads [22.06%]
> 525939560 L1-dcache-load-misses # 7.62% of all L1-dcache hits [21.97%]
> 5042552685 L1-dcache-stores [22.20%]
> 94493742 L1-dcache-store-misses #1.8% [22.06%]
> 0 L1-dcache-prefetches [22.39%]
> 0 L1-dcache-prefetch-misses [22.47%]
> 13022953030 L1-icache-loads [22.25%]
> 1957161101 L1-icache-load-misses # 15.03% of all L1-icache hits [22.47%]
> 348479792 LLC-loads [22.27%]
> 80662778 LLC-load-misses # 23.15% of all LL-cache hits [ 5.64%]
> 198745620 LLC-stores [ 5.63%]
> 14236497 LLC-store-misses # 7.1% [ 5.41%]
> 20757435 LLC-prefetches [ 5.42%]
> 5361819 LLC-prefetch-misses # 25% [ 5.69%]
> 7235715124 dTLB-loads [11.26%]
> 49895163 dTLB-load-misses # 0.69% of all dTLB cache hits [16.96%]
> 5168276218 dTLB-stores [22.44%]
> 6765983 dTLB-store-misses #0.13% [22.24%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> The "LLC-load-misses" went up by about 16%. Then, I restarted the process in guest, the perf data back to normal,
> Performance counter stats for thread id '1579':
>
> 0 page-faults [100.00%]
> 0 minor-faults [100.00%]
> 0 major-faults [100.00%]
> 30594 cs [100.00%]
> 327 migrations [100.00%]
> 0 alignment-faults [100.00%]
> 0 emulation-faults
> 7707091948 L1-dcache-loads [22.10%]
> 559829176 L1-dcache-load-misses # 7.26% of all L1-dcache hits [22.28%]
> 5976654983 L1-dcache-stores [23.22%]
> 160436114 L1-dcache-store-misses [22.80%]
> 0 L1-dcache-prefetches [22.51%]
> 0 L1-dcache-prefetch-misses [22.53%]
> 13798415672 L1-icache-loads [22.28%]
> 2017724676 L1-icache-load-misses # 14.62% of all L1-icache hits [22.49%]
> 254598008 LLC-loads [22.86%]
> 16035378 LLC-load-misses # 6.30% of all LL-cache hits [ 5.36%]
> 307019606 LLC-stores [ 5.60%]
> 13665033 LLC-store-misses [ 5.43%]
> 17715554 LLC-prefetches [ 5.57%]
> 4187006 LLC-prefetch-misses [ 5.44%]
> 7811502895 dTLB-loads [10.72%]
> 40547330 dTLB-load-misses # 0.52% of all dTLB cache hits [16.31%]
> 6144202516 dTLB-stores [21.58%]
> 6313363 dTLB-store-misses [21.91%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000812523 seconds time elapsed
>
> If EPT disabled, this problem gone.
>
> I suspect that kvm hypervisor has business with this problem.
> Based on above suspect, I want to find the two adjacent versions of kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
> and analyze the differences between this two versions, or apply the patches between this two versions by bisection method, finally find the key patches.
>
> Any better ideas?
>
> Thanks,
> Zhang Haoyu
Does this happen even if you migrate between two qemu instances on the
same CPU?
--
MST
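MST's same-host experiment can be run with a plain localhost TCP migration; the flags and image path below are illustrative placeholders, not the reporter's actual command line:

```shell
# Destination instance: same machine, listening for the incoming migration.
qemu-system-x86_64 -m 4096 -smp 4 -hda guest.img \
    -monitor stdio -incoming tcp:127.0.0.1:4444
# Source instance: started earlier with "-monitor stdio";
# in its monitor, push the state to the local destination:
#   (qemu) migrate -d tcp:127.0.0.1:4444
#   (qemu) info migrate
# To match "on the same CPU" literally, pin both instances,
# e.g. prefix each with: taskset -c 0
```

If the slowdown reproduces even then, NUMA placement and physical-cache effects are ruled out, which narrows the suspicion to guest-memory handling (e.g. EPT page sizing) rather than the new host's topology.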
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
From: Gleb Natapov @ 2013-07-11 10:39 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: KVM, qemu-devel, cloudfantom, mpetersen, Shouta.Uehara,
paolo.bonzini, Michael S. Tsirkin, Luonengjun, Zanghongyong,
Hanweidong, Huangweidong (C)
On Thu, Jul 11, 2013 at 09:36:47AM +0000, Zhanghaoyu (A) wrote:
> hi all,
>
> I met similar problem to these, while performing live migration or save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running tele-communication software suite in guest,
> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>
> After live migration or virsh restore [savefile], one process's CPU utilization went up by about 30%, resulted in throughput degradation of this process.
> oprofile report on this process in guest,
> pre live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 248 12.3016 no-vmlinux (no symbols)
> 78 3.8690 libc.so.6 memset
> 68 3.3730 libc.so.6 memcpy
> 30 1.4881 cscf.scu SipMmBufMemAlloc
> 29 1.4385 libpthread.so.0 pthread_mutex_lock
> 26 1.2897 cscf.scu SipApiGetNextIe
> 25 1.2401 cscf.scu DBFI_DATA_Search
> 20 0.9921 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 16 0.7937 cscf.scu DLM_FreeSlice
> 16 0.7937 cscf.scu receivemessage
> 15 0.7440 cscf.scu SipSmCopyString
> 14 0.6944 cscf.scu DLM_AllocSlice
>
> post live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 1586 42.2370 libc.so.6 memcpy
> 271 7.2170 no-vmlinux (no symbols)
> 83 2.2104 libc.so.6 memset
> 41 1.0919 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 35 0.9321 cscf.scu SipMmBufMemAlloc
> 29 0.7723 cscf.scu DLM_AllocSlice
> 28 0.7457 libpthread.so.0 pthread_mutex_lock
> 23 0.6125 cscf.scu SipApiGetNextIe
> 17 0.4527 cscf.scu SipSmCopyString
> 16 0.4261 cscf.scu receivemessage
> 15 0.3995 cscf.scu SipcMsgStatHandle
> 14 0.3728 cscf.scu Urilex
> 12 0.3196 cscf.scu DBFI_DATA_Search
> 12 0.3196 cscf.scu SipDsmGetHdrBitValInner
> 12 0.3196 cscf.scu SipSmGetDataFromRefString
>
> So, memcpy costs much more cpu cycles after live migration. Then, I restart the process, this problem disappeared. save-restore has the similar problem.
>
Does the slowdown persist several minutes after restore? Can you check
how many hugepages are used by the qemu process before and after save/restore?
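The hugepage check Gleb suggests can be done by summing the AnonHugePages fields in /proc/&lt;qemu_pid&gt;/smaps before and after the save/restore; a sharp drop would point at transparent hugepages being split and not re-collapsed. The helper below is exercised on a synthetic smaps fragment so the sketch is self-contained; on a real host, pass the qemu process's actual smaps file instead:

```shell
# Sum all AnonHugePages entries (in kB) across a process's mappings.
sum_anon_hugepages() {
    awk '/^AnonHugePages:/ { kb += $2 } END { printf "%d kB\n", kb }' "$1"
}

# Synthetic smaps fragment standing in for /proc/<qemu_pid>/smaps:
cat > /tmp/smaps.sample <<'EOF'
AnonHugePages:      2048 kB
Swap:                  0 kB
AnonHugePages:         0 kB
AnonHugePages:      4096 kB
EOF
sum_anon_hugepages /tmp/smaps.sample   # prints "6144 kB"
```

Comparing this total before migration and a few minutes after restore shows directly whether the guest's memory is still backed by 2 MB pages.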
> perf report on vcpu thread in host,
> pre live migration:
> Performance counter stats for thread id '21082':
>
> 0 page-faults
> 0 minor-faults
> 0 major-faults
> 31616 cs
> 506 migrations
> 0 alignment-faults
> 0 emulation-faults
> 5075957539 L1-dcache-loads [21.32%]
> 324685106 L1-dcache-load-misses # 6.40% of all L1-dcache hits [21.85%]
> 3681777120 L1-dcache-stores [21.65%]
> 65251823 L1-dcache-store-misses # 1.77% [22.78%]
> 0 L1-dcache-prefetches [22.84%]
> 0 L1-dcache-prefetch-misses [22.32%]
> 9321652613 L1-icache-loads [22.60%]
> 1353418869 L1-icache-load-misses # 14.52% of all L1-icache hits [21.92%]
> 169126969 LLC-loads [21.87%]
> 12583605 LLC-load-misses # 7.44% of all LL-cache hits [ 5.84%]
> 132853447 LLC-stores [ 6.61%]
> 10601171 LLC-store-misses #7.9% [ 5.01%]
> 25309497 LLC-prefetches #30% [ 4.96%]
> 7723198 LLC-prefetch-misses [ 6.04%]
> 4954075817 dTLB-loads [11.56%]
> 26753106 dTLB-load-misses # 0.54% of all dTLB cache hits [16.80%]
> 3553702874 dTLB-stores [22.37%]
> 4720313 dTLB-store-misses #0.13% [21.46%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000920666 seconds time elapsed
>
> post live migration:
> Performance counter stats for thread id '1579':
>
> 0 page-faults [100.00%]
> 0 minor-faults [100.00%]
> 0 major-faults [100.00%]
> 34979 cs [100.00%]
> 441 migrations [100.00%]
> 0 alignment-faults [100.00%]
> 0 emulation-faults
> 6903585501 L1-dcache-loads [22.06%]
> 525939560 L1-dcache-load-misses # 7.62% of all L1-dcache hits [21.97%]
> 5042552685 L1-dcache-stores [22.20%]
> 94493742 L1-dcache-store-misses #1.8% [22.06%]
> 0 L1-dcache-prefetches [22.39%]
> 0 L1-dcache-prefetch-misses [22.47%]
> 13022953030 L1-icache-loads [22.25%]
> 1957161101 L1-icache-load-misses # 15.03% of all L1-icache hits [22.47%]
> 348479792 LLC-loads [22.27%]
> 80662778 LLC-load-misses # 23.15% of all LL-cache hits [ 5.64%]
> 198745620 LLC-stores [ 5.63%]
> 14236497 LLC-store-misses # 7.1% [ 5.41%]
> 20757435 LLC-prefetches [ 5.42%]
> 5361819 LLC-prefetch-misses # 25% [ 5.69%]
> 7235715124 dTLB-loads [11.26%]
> 49895163 dTLB-load-misses # 0.69% of all dTLB cache hits [16.96%]
> 5168276218 dTLB-stores [22.44%]
> 6765983 dTLB-store-misses #0.13% [22.24%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> The "LLC-load-misses" went up by about 16%. Then, I restarted the process in guest, the perf data back to normal,
Amount of LLC-loads doubles too, so this can explain LLC-load-misses
increase.
> Performance counter stats for thread id '1579':
>
> 0 page-faults [100.00%]
> 0 minor-faults [100.00%]
> 0 major-faults [100.00%]
> 30594 cs [100.00%]
> 327 migrations [100.00%]
> 0 alignment-faults [100.00%]
> 0 emulation-faults
> 7707091948 L1-dcache-loads [22.10%]
> 559829176 L1-dcache-load-misses # 7.26% of all L1-dcache hits [22.28%]
> 5976654983 L1-dcache-stores [23.22%]
> 160436114 L1-dcache-store-misses [22.80%]
> 0 L1-dcache-prefetches [22.51%]
> 0 L1-dcache-prefetch-misses [22.53%]
> 13798415672 L1-icache-loads [22.28%]
> 2017724676 L1-icache-load-misses # 14.62% of all L1-icache hits [22.49%]
> 254598008 LLC-loads [22.86%]
> 16035378 LLC-load-misses # 6.30% of all LL-cache hits [ 5.36%]
> 307019606 LLC-stores [ 5.60%]
> 13665033 LLC-store-misses [ 5.43%]
> 17715554 LLC-prefetches [ 5.57%]
> 4187006 LLC-prefetch-misses [ 5.44%]
> 7811502895 dTLB-loads [10.72%]
> 40547330 dTLB-load-misses # 0.52% of all dTLB cache hits [16.31%]
> 6144202516 dTLB-stores [21.58%]
> 6313363 dTLB-store-misses [21.91%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000812523 seconds time elapsed
>
So the performance is back to normal after process restart?
> If EPT is disabled, this problem is gone.
>
> I suspect that the kvm hypervisor is involved in this problem.
> Based on that suspicion, I want to find two adjacent versions of kvm-kmod, one that triggers this problem and one that does not (e.g. 2.6.39, 3.0-rc1),
> and then analyze the differences between these two versions, or bisect the patches between them, to finally find the key patches.
>
> Any better ideas?
>
Provide "perf record -g" information before and after migration.
--
Gleb.
^ permalink raw reply [flat|nested] 52+ messages in thread
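As a sanity check on the figures above, the LLC miss ratios can be recomputed directly from the raw counters. This is a small sketch with the counter values hard-coded from the perf stat output quoted in this message; only the arithmetic is new:

```shell
# Recompute LLC-load miss ratios from the perf stat counters quoted above.
pre_loads=169126969;  pre_misses=12583605     # pre live migration
post_loads=348479792; post_misses=80662778    # post live migration

awk -v l="$pre_loads"  -v m="$pre_misses"  'BEGIN { printf "pre:  %.2f%%\n", 100*m/l }'
awk -v l="$post_loads" -v m="$post_misses" 'BEGIN { printf "post: %.2f%%\n", 100*m/l }'
# How much the absolute number of LLC-loads grew across migration:
awk -v a="$pre_loads"  -v b="$post_loads"  'BEGIN { printf "LLC-loads grew %.2fx\n", b/a }'
```

This reproduces the 7.44% (pre) and 23.15% (post) figures from the report, and the roughly 2x growth in LLC-loads that the reply points out as an explanation for the miss increase.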
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-07-11 9:36 ` [Qemu-devel] " Zhanghaoyu (A)
@ 2013-07-11 10:39 ` Xiao Guangrong
-1 siblings, 0 replies; 52+ messages in thread
From: Xiao Guangrong @ 2013-07-11 10:39 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: KVM, qemu-devel, cloudfantom, mpetersen, Shouta.Uehara,
paolo.bonzini, Michael S. Tsirkin, Luonengjun, Zanghongyong,
Hanweidong, Huangweidong (C)
Hi,
Could you please test this patch?
From 48df7db2ec2721e35d024a8d9850dbb34b557c1c Mon Sep 17 00:00:00 2001
From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Date: Thu, 6 Sep 2012 16:56:01 +0800
Subject: [PATCH 10/11] using huge page on fast page fault path
---
arch/x86/kvm/mmu.c | 27 ++++++++++++++++++++-------
1 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6945ef4..7d177c7 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2663,6 +2663,13 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, pfn_t pfn)
return -EFAULT;
}
+static bool pfn_can_adjust(pfn_t pfn, int level)
+{
+ return !is_error_pfn(pfn) && !kvm_is_mmio_pfn(pfn) &&
+ level == PT_PAGE_TABLE_LEVEL &&
+ PageTransCompound(pfn_to_page(pfn));
+}
+
static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
gfn_t *gfnp, pfn_t *pfnp, int *levelp)
{
@@ -2676,10 +2683,8 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
* PT_PAGE_TABLE_LEVEL and there would be no adjustment done
* here.
*/
- if (!is_error_pfn(pfn) && !kvm_is_mmio_pfn(pfn) &&
- level == PT_PAGE_TABLE_LEVEL &&
- PageTransCompound(pfn_to_page(pfn)) &&
- !has_wrprotected_page(vcpu->kvm, gfn, PT_DIRECTORY_LEVEL)) {
+ if (pfn_can_adjust(pfn, level) &&
+ !has_wrprotected_page(vcpu->kvm, gfn, PT_DIRECTORY_LEVEL)) {
unsigned long mask;
/*
* mmu_notifier_retry was successful and we hold the
@@ -2768,7 +2773,7 @@ fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 spte)
* - false: let the real page fault path to fix it.
*/
static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
- u32 error_code)
+ u32 error_code, bool force_pt_level)
{
struct kvm_shadow_walk_iterator iterator;
bool ret = false;
@@ -2795,6 +2800,14 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
goto exit;
/*
+ * Let the real page fault path change the mapping if large
+ * mapping is allowed, for example, the memslot dirty log is
+ * disabled.
+ */
+ if (!force_pt_level && pfn_can_adjust(spte_to_pfn(spte), level))
+ goto exit;
+
+ /*
* Check if it is a spurious fault caused by TLB lazily flushed.
*
* Need not check the access of upper level table entries since
@@ -2854,7 +2867,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
} else
level = PT_PAGE_TABLE_LEVEL;
- if (fast_page_fault(vcpu, v, level, error_code))
+ if (fast_page_fault(vcpu, v, level, error_code, force_pt_level))
return 0;
mmu_seq = vcpu->kvm->mmu_notifier_seq;
@@ -3323,7 +3336,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
} else
level = PT_PAGE_TABLE_LEVEL;
- if (fast_page_fault(vcpu, gpa, level, error_code))
+ if (fast_page_fault(vcpu, gpa, level, error_code, force_pt_level))
return 0;
mmu_seq = vcpu->kvm->mmu_notifier_seq;
--
1.7.7.6
On 07/11/2013 05:36 PM, Zhanghaoyu (A) wrote:
> hi all,
>
> I met a similar problem to these while performing live migration or save-restore tests on the kvm platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running a telecommunication software suite in the guest:
> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>
> After live migration or virsh restore [savefile], one process's CPU utilization went up by about 30%, which resulted in throughput degradation of this process.
> oprofile report on this process in guest,
> pre live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 248 12.3016 no-vmlinux (no symbols)
> 78 3.8690 libc.so.6 memset
> 68 3.3730 libc.so.6 memcpy
> 30 1.4881 cscf.scu SipMmBufMemAlloc
> 29 1.4385 libpthread.so.0 pthread_mutex_lock
> 26 1.2897 cscf.scu SipApiGetNextIe
> 25 1.2401 cscf.scu DBFI_DATA_Search
> 20 0.9921 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 16 0.7937 cscf.scu DLM_FreeSlice
> 16 0.7937 cscf.scu receivemessage
> 15 0.7440 cscf.scu SipSmCopyString
> 14 0.6944 cscf.scu DLM_AllocSlice
>
> post live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 1586 42.2370 libc.so.6 memcpy
> 271 7.2170 no-vmlinux (no symbols)
> 83 2.2104 libc.so.6 memset
> 41 1.0919 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 35 0.9321 cscf.scu SipMmBufMemAlloc
> 29 0.7723 cscf.scu DLM_AllocSlice
> 28 0.7457 libpthread.so.0 pthread_mutex_lock
> 23 0.6125 cscf.scu SipApiGetNextIe
> 17 0.4527 cscf.scu SipSmCopyString
> 16 0.4261 cscf.scu receivemessage
> 15 0.3995 cscf.scu SipcMsgStatHandle
> 14 0.3728 cscf.scu Urilex
> 12 0.3196 cscf.scu DBFI_DATA_Search
> 12 0.3196 cscf.scu SipDsmGetHdrBitValInner
> 12 0.3196 cscf.scu SipSmGetDataFromRefString
>
> So, memcpy costs many more cpu cycles after live migration. Then I restarted the process, and this problem disappeared. save-restore has a similar problem.
>
> perf report on vcpu thread in host,
> pre live migration:
> Performance counter stats for thread id '21082':
>
> 0 page-faults
> 0 minor-faults
> 0 major-faults
> 31616 cs
> 506 migrations
> 0 alignment-faults
> 0 emulation-faults
> 5075957539 L1-dcache-loads [21.32%]
> 324685106 L1-dcache-load-misses # 6.40% of all L1-dcache hits [21.85%]
> 3681777120 L1-dcache-stores [21.65%]
> 65251823 L1-dcache-store-misses # 1.77% [22.78%]
> 0 L1-dcache-prefetches [22.84%]
> 0 L1-dcache-prefetch-misses [22.32%]
> 9321652613 L1-icache-loads [22.60%]
> 1353418869 L1-icache-load-misses # 14.52% of all L1-icache hits [21.92%]
> 169126969 LLC-loads [21.87%]
> 12583605 LLC-load-misses # 7.44% of all LL-cache hits [ 5.84%]
> 132853447 LLC-stores [ 6.61%]
> 10601171 LLC-store-misses #7.9% [ 5.01%]
> 25309497 LLC-prefetches #30% [ 4.96%]
> 7723198 LLC-prefetch-misses [ 6.04%]
> 4954075817 dTLB-loads [11.56%]
> 26753106 dTLB-load-misses # 0.54% of all dTLB cache hits [16.80%]
> 3553702874 dTLB-stores [22.37%]
> 4720313 dTLB-store-misses #0.13% [21.46%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000920666 seconds time elapsed
>
> post live migration:
> Performance counter stats for thread id '1579':
>
> 0 page-faults [100.00%]
> 0 minor-faults [100.00%]
> 0 major-faults [100.00%]
> 34979 cs [100.00%]
> 441 migrations [100.00%]
> 0 alignment-faults [100.00%]
> 0 emulation-faults
> 6903585501 L1-dcache-loads [22.06%]
> 525939560 L1-dcache-load-misses # 7.62% of all L1-dcache hits [21.97%]
> 5042552685 L1-dcache-stores [22.20%]
> 94493742 L1-dcache-store-misses #1.8% [22.06%]
> 0 L1-dcache-prefetches [22.39%]
> 0 L1-dcache-prefetch-misses [22.47%]
> 13022953030 L1-icache-loads [22.25%]
> 1957161101 L1-icache-load-misses # 15.03% of all L1-icache hits [22.47%]
> 348479792 LLC-loads [22.27%]
> 80662778 LLC-load-misses # 23.15% of all LL-cache hits [ 5.64%]
> 198745620 LLC-stores [ 5.63%]
> 14236497 LLC-store-misses # 7.1% [ 5.41%]
> 20757435 LLC-prefetches [ 5.42%]
> 5361819 LLC-prefetch-misses # 25% [ 5.69%]
> 7235715124 dTLB-loads [11.26%]
> 49895163 dTLB-load-misses # 0.69% of all dTLB cache hits [16.96%]
> 5168276218 dTLB-stores [22.44%]
> 6765983 dTLB-store-misses #0.13% [22.24%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> The "LLC-load-misses" ratio went up by about 16 percentage points. Then I restarted the process in the guest, and the perf data went back to normal:
> Performance counter stats for thread id '1579':
>
> 0 page-faults [100.00%]
> 0 minor-faults [100.00%]
> 0 major-faults [100.00%]
> 30594 cs [100.00%]
> 327 migrations [100.00%]
> 0 alignment-faults [100.00%]
> 0 emulation-faults
> 7707091948 L1-dcache-loads [22.10%]
> 559829176 L1-dcache-load-misses # 7.26% of all L1-dcache hits [22.28%]
> 5976654983 L1-dcache-stores [23.22%]
> 160436114 L1-dcache-store-misses [22.80%]
> 0 L1-dcache-prefetches [22.51%]
> 0 L1-dcache-prefetch-misses [22.53%]
> 13798415672 L1-icache-loads [22.28%]
> 2017724676 L1-icache-load-misses # 14.62% of all L1-icache hits [22.49%]
> 254598008 LLC-loads [22.86%]
> 16035378 LLC-load-misses # 6.30% of all LL-cache hits [ 5.36%]
> 307019606 LLC-stores [ 5.60%]
> 13665033 LLC-store-misses [ 5.43%]
> 17715554 LLC-prefetches [ 5.57%]
> 4187006 LLC-prefetch-misses [ 5.44%]
> 7811502895 dTLB-loads [10.72%]
> 40547330 dTLB-load-misses # 0.52% of all dTLB cache hits [16.31%]
> 6144202516 dTLB-stores [21.58%]
> 6313363 dTLB-store-misses [21.91%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000812523 seconds time elapsed
>
> If EPT is disabled, this problem is gone.
>
> I suspect that the kvm hypervisor is involved in this problem.
> Based on that suspicion, I want to find two adjacent versions of kvm-kmod, one that triggers this problem and one that does not (e.g. 2.6.39, 3.0-rc1),
> and then analyze the differences between these two versions, or bisect the patches between them, to finally find the key patches.
>
> Any better ideas?
>
> Thanks,
> Zhang Haoyu
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
>
^ permalink raw reply related [flat|nested] 52+ messages in thread
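The patch above targets huge-page mappings on the fast page fault path. A related thing worth measuring is whether the qemu process's guest memory is still backed by transparent huge pages after save/restore. This is a rough sketch using standard Linux procfs; the `pgrep` pattern is an assumption, so adjust it to match your actual qemu binary name:

```shell
# Sum AnonHugePages across the qemu process's mappings (in kB). A large drop
# after save/restore would support the huge-page-splitting theory.
pid=$(pgrep -f qemu | head -n 1)
if [ -n "$pid" ] && [ -r "/proc/$pid/smaps" ]; then
    grep AnonHugePages "/proc/$pid/smaps" |
        awk '{ sum += $2 } END { printf "%d kB in transparent huge pages\n", sum }'
else
    echo "no readable qemu process found"
fi
```

Comparing this total before migration, after migration, and after the in-guest process restart would show whether the extra LLC/TLB misses line up with the guest memory no longer being THP-backed.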
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-07-11 9:36 ` [Qemu-devel] " Zhanghaoyu (A)
@ 2013-07-11 10:51 ` Andreas Färber
-1 siblings, 0 replies; 52+ messages in thread
From: Andreas Färber @ 2013-07-11 10:51 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: KVM, qemu-devel, cloudfantom, mpetersen, Shouta.Uehara,
paolo.bonzini, Michael S. Tsirkin, Huangweidong (C),
Zanghongyong, Luonengjun, Hanweidong
Hi,
Am 11.07.2013 11:36, schrieb Zhanghaoyu (A):
> I met a similar problem to these while performing live migration or save-restore tests on the kvm platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running a telecommunication software suite in the guest:
> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>
> After live migration or virsh restore [savefile], one process's CPU utilization went up by about 30%, which resulted in throughput degradation of this process.
> oprofile report on this process in guest,
> pre live migration:
So far we've been unable to reproduce this with a pure qemu-kvm /
qemu-system-x86_64 command line on several EPT machines, whereas for
virsh it was reported as confirmed. Can you please share the resulting
QEMU command line from libvirt logs or process list?
Are both host and guest kernel at 3.0.80 (latest SLES updates)?
Thanks,
Andreas
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
^ permalink raw reply [flat|nested] 52+ messages in thread
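The requested command line can usually be recovered either from libvirt's per-domain log or from the running process. Below is a hedged sketch: "guest1" is a placeholder domain name, and the log path assumes libvirt's default layout.

```shell
# Recover the QEMU command line libvirt used for a domain.
domain=guest1    # placeholder: substitute the actual domain name

# 1) From the per-domain libvirt log (default path on most distributions):
tail -n 20 "/var/log/libvirt/qemu/${domain}.log" 2>/dev/null | grep -m1 qemu

# 2) From the live process list: /proc/<pid>/cmdline is NUL-separated argv,
#    so join the arguments with spaces for readability.
pid=$(pgrep -f "qemu.*${domain}" | head -n 1)
if [ -n "$pid" ]; then
    tr '\0' ' ' < "/proc/$pid/cmdline"; echo
else
    echo "no matching qemu process running"
fi
```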
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-07-11 10:39 ` [Qemu-devel] " Xiao Guangrong
@ 2013-07-11 14:00 ` Zhang Haoyu
-1 siblings, 0 replies; 52+ messages in thread
From: Zhang Haoyu @ 2013-07-11 14:00 UTC (permalink / raw)
To: Xiao Guangrong
Cc: Zhanghaoyu (A),
KVM, qemu-devel, cloudfantom, mpetersen, Shouta.Uehara,
paolo.bonzini, Michael S. Tsirkin, Luonengjun, Zanghongyong,
Hanweidong, Huangweidong (C)
>Hi,
>
>Could you please test this patch?
>
I tried this patch, but the problem is still there.
Thanks,
Zhang Haoyu
>>From 48df7db2ec2721e35d024a8d9850dbb34b557c1c Mon Sep 17 00:00:00 2001
>From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>Date: Thu, 6 Sep 2012 16:56:01 +0800
>Subject: [PATCH 10/11] using huge page on fast page fault path
>
>---
> arch/x86/kvm/mmu.c | 27 ++++++++++++++++++++-------
> 1 files changed, 20 insertions(+), 7 deletions(-)
>
>diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>index 6945ef4..7d177c7 100644
>--- a/arch/x86/kvm/mmu.c
>+++ b/arch/x86/kvm/mmu.c
>@@ -2663,6 +2663,13 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, pfn_t pfn)
> return -EFAULT;
> }
>
>+static bool pfn_can_adjust(pfn_t pfn, int level)
>+{
>+ return !is_error_pfn(pfn) && !kvm_is_mmio_pfn(pfn) &&
>+ level == PT_PAGE_TABLE_LEVEL &&
>+ PageTransCompound(pfn_to_page(pfn));
>+}
>+
> static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
> gfn_t *gfnp, pfn_t *pfnp, int *levelp)
> {
>@@ -2676,10 +2683,8 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
> * PT_PAGE_TABLE_LEVEL and there would be no adjustment done
> * here.
> */
>- if (!is_error_pfn(pfn) && !kvm_is_mmio_pfn(pfn) &&
>- level == PT_PAGE_TABLE_LEVEL &&
>- PageTransCompound(pfn_to_page(pfn)) &&
>- !has_wrprotected_page(vcpu->kvm, gfn, PT_DIRECTORY_LEVEL)) {
>+ if (pfn_can_adjust(pfn, level) &&
>+ !has_wrprotected_page(vcpu->kvm, gfn, PT_DIRECTORY_LEVEL)) {
> unsigned long mask;
> /*
> * mmu_notifier_retry was successful and we hold the
>@@ -2768,7 +2773,7 @@ fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 spte)
> * - false: let the real page fault path to fix it.
> */
> static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
>- u32 error_code)
>+ u32 error_code, bool force_pt_level)
> {
> struct kvm_shadow_walk_iterator iterator;
> bool ret = false;
>@@ -2795,6 +2800,14 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
> goto exit;
>
> /*
>+ * Let the real page fault path change the mapping if large
>+ * mapping is allowed, for example, the memslot dirty log is
>+ * disabled.
>+ */
>+ if (!force_pt_level && pfn_can_adjust(spte_to_pfn(spte), level))
>+ goto exit;
>+
>+ /*
> * Check if it is a spurious fault caused by TLB lazily flushed.
> *
> * Need not check the access of upper level table entries since
>@@ -2854,7 +2867,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
> } else
> level = PT_PAGE_TABLE_LEVEL;
>
>- if (fast_page_fault(vcpu, v, level, error_code))
>+ if (fast_page_fault(vcpu, v, level, error_code, force_pt_level))
> return 0;
>
> mmu_seq = vcpu->kvm->mmu_notifier_seq;
>@@ -3323,7 +3336,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
> } else
> level = PT_PAGE_TABLE_LEVEL;
>
>- if (fast_page_fault(vcpu, gpa, level, error_code))
>+ if (fast_page_fault(vcpu, gpa, level, error_code, force_pt_level))
> return 0;
>
> mmu_seq = vcpu->kvm->mmu_notifier_seq;
>--
>1.7.7.6

On 07/11/2013 05:36 PM, Zhanghaoyu (A) wrote:
>> hi all,
>>
>> I met similar problem to these, while performing live migration or save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running tele-communication software suite in guest,
>> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>>
>> After live migration or virsh restore [savefile], one process's CPU utilization went up by about 30%, resulted in throughput degradation of this process.
>> oprofile report on this process in guest,
>> pre live migration:
>> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
>> Profiling through timer interrupt
>> samples % app name symbol name
>> 248 12.3016 no-vmlinux (no symbols)
>> 78 3.8690 libc.so.6 memset
>> 68 3.3730 libc.so.6 memcpy
>> 30 1.4881 cscf.scu SipMmBufMemAlloc
>> 29 1.4385 libpthread.so.0 pthread_mutex_lock
>> 26 1.2897 cscf.scu SipApiGetNextIe
>> 25 1.2401 cscf.scu DBFI_DATA_Search
>> 20 0.9921 libpthread.so.0 __pthread_mutex_unlock_usercnt
>> 16 0.7937 cscf.scu DLM_FreeSlice
>> 16 0.7937 cscf.scu receivemessage
>> 15 0.7440 cscf.scu SipSmCopyString
>> 14 0.6944 cscf.scu DLM_AllocSlice
>>
>> post live migration:
>> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
>> Profiling through timer interrupt
>> samples % app name symbol name
>> 1586 42.2370 libc.so.6 memcpy
>> 271 7.2170 no-vmlinux (no symbols)
>> 83 2.2104 libc.so.6 memset
>> 41 1.0919 libpthread.so.0 __pthread_mutex_unlock_usercnt
>> 35 0.9321 cscf.scu SipMmBufMemAlloc
>> 29 0.7723 cscf.scu DLM_AllocSlice
>> 28 0.7457 libpthread.so.0 pthread_mutex_lock
>> 23 0.6125 cscf.scu SipApiGetNextIe
>> 17 0.4527 cscf.scu SipSmCopyString
>> 16 0.4261 cscf.scu receivemessage
>> 15 0.3995 cscf.scu SipcMsgStatHandle
>> 14 0.3728 cscf.scu Urilex
>> 12 0.3196 cscf.scu DBFI_DATA_Search
>> 12 0.3196 cscf.scu SipDsmGetHdrBitValInner
>> 12 0.3196 cscf.scu SipSmGetDataFromRefString
>>
>> So, memcpy costs much more cpu cycles after live migration. Then, I restart the process, this problem disappeared. save-restore has the similar problem.
>>
>> perf report on vcpu thread in host,
>> pre live migration:
>> Performance counter stats for thread id '21082':
>>
>> 0 page-faults
>> 0 minor-faults
>> 0 major-faults
>> 31616 cs
>> 506 migrations
>> 0 alignment-faults
>> 0 emulation-faults
>> 5075957539 L1-dcache-loads [21.32%]
>> 324685106 L1-dcache-load-misses # 6.40% of all L1-dcache hits [21.85%]
>> 3681777120 L1-dcache-stores [21.65%]
>> 65251823 L1-dcache-store-misses # 1.77% [22.78%]
>> 0 L1-dcache-prefetches [22.84%]
>> 0 L1-dcache-prefetch-misses [22.32%]
>> 9321652613 L1-icache-loads [22.60%]
>> 1353418869 L1-icache-load-misses # 14.52% of all L1-icache hits [21.92%]
>> 169126969 LLC-loads [21.87%]
>> 12583605 LLC-load-misses # 7.44% of all LL-cache hits [ 5.84%]
>> 132853447 LLC-stores [ 6.61%]
>> 10601171 LLC-store-misses # 7.9% [ 5.01%]
>> 25309497 LLC-prefetches # 30% [ 4.96%]
>> 7723198 LLC-prefetch-misses [ 6.04%]
>> 4954075817 dTLB-loads [11.56%]
>> 26753106 dTLB-load-misses # 0.54% of all dTLB cache hits [16.80%]
>> 3553702874 dTLB-stores [22.37%]
>> 4720313 dTLB-store-misses # 0.13% [21.46%]
>> <not counted> dTLB-prefetches
>> <not counted> dTLB-prefetch-misses
>>
>> 60.000920666 seconds time elapsed
>>
>> post live migration:
>> Performance counter stats for thread id '1579':
>>
>> 0 page-faults [100.00%]
>> 0 minor-faults [100.00%]
>> 0 major-faults [100.00%]
>> 34979 cs [100.00%]
>> 441 migrations [100.00%]
>> 0 alignment-faults [100.00%]
>> 0 emulation-faults
>> 6903585501 L1-dcache-loads [22.06%]
>> 525939560 L1-dcache-load-misses # 7.62% of all L1-dcache hits [21.97%]
>> 5042552685 L1-dcache-stores [22.20%]
>> 94493742 L1-dcache-store-misses # 1.8% [22.06%]
>> 0 L1-dcache-prefetches [22.39%]
>> 0 L1-dcache-prefetch-misses [22.47%]
>> 13022953030 L1-icache-loads [22.25%]
>> 1957161101 L1-icache-load-misses # 15.03% of all L1-icache hits [22.47%]
>> 348479792 LLC-loads [22.27%]
>> 80662778 LLC-load-misses # 23.15% of all LL-cache hits [ 5.64%]
>> 198745620 LLC-stores [ 5.63%]
>> 14236497 LLC-store-misses # 7.1% [ 5.41%]
>> 20757435 LLC-prefetches [ 5.42%]
>> 5361819 LLC-prefetch-misses # 25% [ 5.69%]
>> 7235715124 dTLB-loads [11.26%]
>> 49895163 dTLB-load-misses # 0.69% of all dTLB cache hits [16.96%]
>> 5168276218 dTLB-stores [22.44%]
>> 6765983 dTLB-store-misses # 0.13% [22.24%]
>> <not counted> dTLB-prefetches
>> <not counted> dTLB-prefetch-misses
>>
>> The "LLC-load-misses" went up by about 16%. Then, I restarted the process in guest, the perf data back to normal,
>> Performance counter stats for thread id '1579':
>>
>> 0 page-faults [100.00%]
>> 0 minor-faults [100.00%]
>> 0 major-faults [100.00%]
>> 30594 cs [100.00%]
>> 327 migrations [100.00%]
>> 0 alignment-faults [100.00%]
>> 0 emulation-faults
>> 7707091948 L1-dcache-loads [22.10%]
>> 559829176 L1-dcache-load-misses # 7.26% of all L1-dcache hits [22.28%]
>> 5976654983 L1-dcache-stores [23.22%]
>> 160436114 L1-dcache-store-misses [22.80%]
>> 0 L1-dcache-prefetches [22.51%]
>> 0 L1-dcache-prefetch-misses [22.53%]
>> 13798415672 L1-icache-loads [22.28%]
>> 2017724676 L1-icache-load-misses # 14.62% of all L1-icache hits [22.49%]
>> 254598008 LLC-loads [22.86%]
>> 16035378 LLC-load-misses # 6.30% of all LL-cache hits [ 5.36%]
>> 307019606 LLC-stores [ 5.60%]
>> 13665033 LLC-store-misses [ 5.43%]
>> 17715554 LLC-prefetches [ 5.57%]
>> 4187006 LLC-prefetch-misses [ 5.44%]
>> 7811502895 dTLB-loads [10.72%]
>> 40547330 dTLB-load-misses # 0.52% of all dTLB cache hits [16.31%]
>> 6144202516 dTLB-stores [21.58%]
>> 6313363 dTLB-store-misses [21.91%]
>> <not counted> dTLB-prefetches
>> <not counted> dTLB-prefetch-misses
>>
>> 60.000812523 seconds time elapsed
>>
>> If EPT disabled, this problem gone.
>>
>> I suspect that kvm hypervisor has business with this problem.
>> Based on above suspect, I want to find the two adjacent versions of kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>> and analyze the differences between this two versions, or apply the patches between this two versions by bisection method, finally find the key patches.
>>
>> Any better ideas?
>>
>> Thanks,
>> Zhang Haoyu
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-07-11 9:36 ` [Qemu-devel] " Zhanghaoyu (A)
@ 2013-07-11 18:20 ` Bruce Rogers
-1 siblings, 0 replies; 52+ messages in thread
From: Bruce Rogers @ 2013-07-11 18:20 UTC (permalink / raw)
To: cloudfantom, paolo.bonzini, Zhanghaoyu (A),
Shouta.Uehara, qemu-devel, mpetersen, Michael S. Tsirkin, KVM
Cc: Huangweidong (C), Zanghongyong, Luonengjun, Hanweidong
>>> On 7/11/2013 at 03:36 AM, "Zhanghaoyu (A)" <haoyu.zhang@huawei.com> wrote:
> hi all,
>
> I met similar problem to these, while performing live migration or
> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> guest:suse11sp2), running tele-communication software suite in guest,
> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>
> After live migration or virsh restore [savefile], one process's CPU
> utilization went up by about 30%, resulted in throughput degradation of this
> process.
> oprofile report on this process in guest,
> pre live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 248 12.3016 no-vmlinux (no symbols)
> 78 3.8690 libc.so.6 memset
> 68 3.3730 libc.so.6 memcpy
> 30 1.4881 cscf.scu SipMmBufMemAlloc
> 29 1.4385 libpthread.so.0 pthread_mutex_lock
> 26 1.2897 cscf.scu SipApiGetNextIe
> 25 1.2401 cscf.scu DBFI_DATA_Search
> 20 0.9921 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 16 0.7937 cscf.scu DLM_FreeSlice
> 16 0.7937 cscf.scu receivemessage
> 15 0.7440 cscf.scu SipSmCopyString
> 14 0.6944 cscf.scu DLM_AllocSlice
>
> post live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 1586 42.2370 libc.so.6 memcpy
> 271 7.2170 no-vmlinux (no symbols)
> 83 2.2104 libc.so.6 memset
> 41 1.0919 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 35 0.9321 cscf.scu SipMmBufMemAlloc
> 29 0.7723 cscf.scu DLM_AllocSlice
> 28 0.7457 libpthread.so.0 pthread_mutex_lock
> 23 0.6125 cscf.scu SipApiGetNextIe
> 17 0.4527 cscf.scu SipSmCopyString
> 16 0.4261 cscf.scu receivemessage
> 15 0.3995 cscf.scu SipcMsgStatHandle
> 14 0.3728 cscf.scu Urilex
> 12 0.3196 cscf.scu DBFI_DATA_Search
> 12 0.3196 cscf.scu SipDsmGetHdrBitValInner
> 12 0.3196 cscf.scu SipSmGetDataFromRefString
>
> So, memcpy costs much more cpu cycles after live migration. Then, I restart
> the process, this problem disappeared. save-restore has the similar problem.
>
> perf report on vcpu thread in host,
> pre live migration:
> Performance counter stats for thread id '21082':
>
> 0 page-faults
> 0 minor-faults
> 0 major-faults
> 31616 cs
> 506 migrations
> 0 alignment-faults
> 0 emulation-faults
> 5075957539 L1-dcache-loads
> [21.32%]
> 324685106 L1-dcache-load-misses # 6.40% of all L1-dcache hits
> [21.85%]
> 3681777120 L1-dcache-stores
> [21.65%]
> 65251823 L1-dcache-store-misses # 1.77%
> [22.78%]
> 0 L1-dcache-prefetches
> [22.84%]
> 0 L1-dcache-prefetch-misses
> [22.32%]
> 9321652613 L1-icache-loads
> [22.60%]
> 1353418869 L1-icache-load-misses # 14.52% of all L1-icache hits
> [21.92%]
> 169126969 LLC-loads
> [21.87%]
> 12583605 LLC-load-misses # 7.44% of all LL-cache hits
> [ 5.84%]
> 132853447 LLC-stores
> [ 6.61%]
> 10601171 LLC-store-misses #7.9%
> [ 5.01%]
> 25309497 LLC-prefetches #30%
> [ 4.96%]
> 7723198 LLC-prefetch-misses
> [ 6.04%]
> 4954075817 dTLB-loads
> [11.56%]
> 26753106 dTLB-load-misses # 0.54% of all dTLB cache hits
> [16.80%]
> 3553702874 dTLB-stores
> [22.37%]
> 4720313 dTLB-store-misses #0.13%
> [21.46%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000920666 seconds time elapsed
>
> post live migration:
> Performance counter stats for thread id '1579':
>
> 0 page-faults
> [100.00%]
> 0 minor-faults
> [100.00%]
> 0 major-faults
> [100.00%]
> 34979 cs
> [100.00%]
> 441 migrations
> [100.00%]
> 0 alignment-faults
> [100.00%]
> 0 emulation-faults
> 6903585501 L1-dcache-loads
> [22.06%]
> 525939560 L1-dcache-load-misses # 7.62% of all L1-dcache hits
> [21.97%]
> 5042552685 L1-dcache-stores
> [22.20%]
> 94493742 L1-dcache-store-misses #1.8%
> [22.06%]
> 0 L1-dcache-prefetches
> [22.39%]
> 0 L1-dcache-prefetch-misses
> [22.47%]
> 13022953030 L1-icache-loads
> [22.25%]
> 1957161101 L1-icache-load-misses # 15.03% of all L1-icache hits
> [22.47%]
> 348479792 LLC-loads
> [22.27%]
> 80662778 LLC-load-misses # 23.15% of all LL-cache hits
> [ 5.64%]
> 198745620 LLC-stores
> [ 5.63%]
> 14236497 LLC-store-misses # 7.1%
> [ 5.41%]
> 20757435 LLC-prefetches
> [ 5.42%]
> 5361819 LLC-prefetch-misses # 25%
> [ 5.69%]
> 7235715124 dTLB-loads
> [11.26%]
> 49895163 dTLB-load-misses # 0.69% of all dTLB cache hits
> [16.96%]
> 5168276218 dTLB-stores
> [22.44%]
> 6765983 dTLB-store-misses #0.13%
> [22.24%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> The "LLC-load-misses" went up by about 16%. Then, I restarted the process in
> guest, the perf data back to normal,
> Performance counter stats for thread id '1579':
>
> 0 page-faults
> [100.00%]
> 0 minor-faults
> [100.00%]
> 0 major-faults
> [100.00%]
> 30594 cs
> [100.00%]
> 327 migrations
> [100.00%]
> 0 alignment-faults
> [100.00%]
> 0 emulation-faults
> 7707091948 L1-dcache-loads
> [22.10%]
> 559829176 L1-dcache-load-misses # 7.26% of all L1-dcache hits
> [22.28%]
> 5976654983 L1-dcache-stores
> [23.22%]
> 160436114 L1-dcache-store-misses
> [22.80%]
> 0 L1-dcache-prefetches
> [22.51%]
> 0 L1-dcache-prefetch-misses
> [22.53%]
> 13798415672 L1-icache-loads
> [22.28%]
> 2017724676 L1-icache-load-misses # 14.62% of all L1-icache hits
> [22.49%]
> 254598008 LLC-loads
> [22.86%]
> 16035378 LLC-load-misses # 6.30% of all LL-cache hits
> [ 5.36%]
> 307019606 LLC-stores
> [ 5.60%]
> 13665033 LLC-store-misses
> [ 5.43%]
> 17715554 LLC-prefetches
> [ 5.57%]
> 4187006 LLC-prefetch-misses
> [ 5.44%]
> 7811502895 dTLB-loads
> [10.72%]
> 40547330 dTLB-load-misses # 0.52% of all dTLB cache hits
> [16.31%]
> 6144202516 dTLB-stores
> [21.58%]
> 6313363 dTLB-store-misses
> [21.91%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000812523 seconds time elapsed
>
> If EPT disabled, this problem gone.
>
> I suspect that kvm hypervisor has business with this problem.
> Based on above suspect, I want to find the two adjacent versions of kvm-kmod
> which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
> and analyze the differences between this two versions, or apply the patches
> between this two versions by bisection method, finally find the key patches.
>
> Any better ideas?
>
> Thanks,
> Zhang Haoyu
I've attempted to duplicate this on a number of machines that are as similar
to yours as I am able to get my hands on, and so far have not been able to see
any performance degradation. And from what I've read in the above links, huge
pages do not seem to be part of the problem.
So, if you are in a position to bisect the kernel changes, that would probably be
the best avenue to pursue in my opinion.
Bruce
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with ETP enabled
@ 2013-07-11 18:20 ` Bruce Rogers
0 siblings, 0 replies; 52+ messages in thread
From: Bruce Rogers @ 2013-07-11 18:20 UTC (permalink / raw)
To: cloudfantom, paolo.bonzini, Zhanghaoyu (A),
Shouta.Uehara, qemu-devel, mpetersen, Michael S. Tsirkin, KVM
Cc: Huangweidong (C), Zanghongyong, Luonengjun, Hanweidong
>>> On 7/11/2013 at 03:36 AM, "Zhanghaoyu (A)" <haoyu.zhang@huawei.com> wrote:
> hi all,
>
> I met similar problem to these, while performing live migration or
> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> guest:suse11sp2), running tele-communication software suite in guest,
> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>
> After live migration or virsh restore [savefile], one process's CPU
> utilization went up by about 30%, resulted in throughput degradation of this
> process.
> oprofile report on this process in guest,
> pre live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 248 12.3016 no-vmlinux (no symbols)
> 78 3.8690 libc.so.6 memset
> 68 3.3730 libc.so.6 memcpy
> 30 1.4881 cscf.scu SipMmBufMemAlloc
> 29 1.4385 libpthread.so.0 pthread_mutex_lock
> 26 1.2897 cscf.scu SipApiGetNextIe
> 25 1.2401 cscf.scu DBFI_DATA_Search
> 20 0.9921 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 16 0.7937 cscf.scu DLM_FreeSlice
> 16 0.7937 cscf.scu receivemessage
> 15 0.7440 cscf.scu SipSmCopyString
> 14 0.6944 cscf.scu DLM_AllocSlice
>
> post live migration:
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples % app name symbol name
> 1586 42.2370 libc.so.6 memcpy
> 271 7.2170 no-vmlinux (no symbols)
> 83 2.2104 libc.so.6 memset
> 41 1.0919 libpthread.so.0 __pthread_mutex_unlock_usercnt
> 35 0.9321 cscf.scu SipMmBufMemAlloc
> 29 0.7723 cscf.scu DLM_AllocSlice
> 28 0.7457 libpthread.so.0 pthread_mutex_lock
> 23 0.6125 cscf.scu SipApiGetNextIe
> 17 0.4527 cscf.scu SipSmCopyString
> 16 0.4261 cscf.scu receivemessage
> 15 0.3995 cscf.scu SipcMsgStatHandle
> 14 0.3728 cscf.scu Urilex
> 12 0.3196 cscf.scu DBFI_DATA_Search
> 12 0.3196 cscf.scu SipDsmGetHdrBitValInner
> 12 0.3196 cscf.scu SipSmGetDataFromRefString
>
> So, memcpy costs much more cpu cycles after live migration. Then, I restart
> the process, this problem disappeared. save-restore has the similar problem.
>
> perf report on vcpu thread in host,
> pre live migration:
> Performance counter stats for thread id '21082':
>
> 0 page-faults
> 0 minor-faults
> 0 major-faults
> 31616 cs
> 506 migrations
> 0 alignment-faults
> 0 emulation-faults
> 5075957539 L1-dcache-loads
> [21.32%]
> 324685106 L1-dcache-load-misses # 6.40% of all L1-dcache hits
> [21.85%]
> 3681777120 L1-dcache-stores
> [21.65%]
> 65251823 L1-dcache-store-misses # 1.77%
> [22.78%]
> 0 L1-dcache-prefetches
> [22.84%]
> 0 L1-dcache-prefetch-misses
> [22.32%]
> 9321652613 L1-icache-loads
> [22.60%]
> 1353418869 L1-icache-load-misses # 14.52% of all L1-icache hits
> [21.92%]
> 169126969 LLC-loads
> [21.87%]
> 12583605 LLC-load-misses # 7.44% of all LL-cache hits
> [ 5.84%]
> 132853447 LLC-stores
> [ 6.61%]
> 10601171 LLC-store-misses #7.9%
> [ 5.01%]
> 25309497 LLC-prefetches #30%
> [ 4.96%]
> 7723198 LLC-prefetch-misses
> [ 6.04%]
> 4954075817 dTLB-loads
> [11.56%]
> 26753106 dTLB-load-misses # 0.54% of all dTLB cache hits
> [16.80%]
> 3553702874 dTLB-stores
> [22.37%]
> 4720313 dTLB-store-misses #0.13%
> [21.46%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000920666 seconds time elapsed
>
> post live migration:
> Performance counter stats for thread id '1579':
>
> 0 page-faults
> [100.00%]
> 0 minor-faults
> [100.00%]
> 0 major-faults
> [100.00%]
> 34979 cs
> [100.00%]
> 441 migrations
> [100.00%]
> 0 alignment-faults
> [100.00%]
> 0 emulation-faults
> 6903585501 L1-dcache-loads
> [22.06%]
> 525939560 L1-dcache-load-misses # 7.62% of all L1-dcache hits
> [21.97%]
> 5042552685 L1-dcache-stores
> [22.20%]
> 94493742 L1-dcache-store-misses #1.8%
> [22.06%]
> 0 L1-dcache-prefetches
> [22.39%]
> 0 L1-dcache-prefetch-misses
> [22.47%]
> 13022953030 L1-icache-loads
> [22.25%]
> 1957161101 L1-icache-load-misses # 15.03% of all L1-icache hits
> [22.47%]
> 348479792 LLC-loads
> [22.27%]
> 80662778 LLC-load-misses # 23.15% of all LL-cache hits
> [ 5.64%]
> 198745620 LLC-stores
> [ 5.63%]
> 14236497 LLC-store-misses # 7.1%
> [ 5.41%]
> 20757435 LLC-prefetches
> [ 5.42%]
> 5361819 LLC-prefetch-misses # 25%
> [ 5.69%]
> 7235715124 dTLB-loads
> [11.26%]
> 49895163 dTLB-load-misses # 0.69% of all dTLB cache hits
> [16.96%]
> 5168276218 dTLB-stores
> [22.44%]
> 6765983 dTLB-store-misses #0.13%
> [22.24%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> The "LLC-load-misses" went up by about 16%. Then, I restarted the process in
> guest, the perf data back to normal,
> Performance counter stats for thread id '1579':
>
> 0 page-faults
> [100.00%]
> 0 minor-faults
> [100.00%]
> 0 major-faults
> [100.00%]
> 30594 cs
> [100.00%]
> 327 migrations
> [100.00%]
> 0 alignment-faults
> [100.00%]
> 0 emulation-faults
> 7707091948 L1-dcache-loads
> [22.10%]
> 559829176 L1-dcache-load-misses # 7.26% of all L1-dcache hits
> [22.28%]
> 5976654983 L1-dcache-stores
> [23.22%]
> 160436114 L1-dcache-store-misses
> [22.80%]
> 0 L1-dcache-prefetches
> [22.51%]
> 0 L1-dcache-prefetch-misses
> [22.53%]
> 13798415672 L1-icache-loads
> [22.28%]
> 2017724676 L1-icache-load-misses # 14.62% of all L1-icache hits
> [22.49%]
> 254598008 LLC-loads
> [22.86%]
> 16035378 LLC-load-misses # 6.30% of all LL-cache hits
> [ 5.36%]
> 307019606 LLC-stores
> [ 5.60%]
> 13665033 LLC-store-misses
> [ 5.43%]
> 17715554 LLC-prefetches
> [ 5.57%]
> 4187006 LLC-prefetch-misses
> [ 5.44%]
> 7811502895 dTLB-loads
> [10.72%]
> 40547330 dTLB-load-misses # 0.52% of all dTLB cache hits
> [16.31%]
> 6144202516 dTLB-stores
> [21.58%]
> 6313363 dTLB-store-misses
> [21.91%]
> <not counted> dTLB-prefetches
> <not counted> dTLB-prefetch-misses
>
> 60.000812523 seconds time elapsed
>
> If EPT is disabled, this problem goes away.
>
> I suspect that the KVM hypervisor is involved in this problem.
> Based on that suspicion, I want to find the two adjacent versions of kvm-kmod
> between which this problem first appears (e.g. 2.6.39 vs. 3.0-rc1),
> and analyze the differences between these two versions, or apply the patches
> between them one at a time by bisection, to finally find the key patches.
>
> Any better ideas?
>
> Thanks,
> Zhang Haoyu
I've attempted to duplicate this on a number of machines that are as similar
to yours as I am able to get my hands on, and so far have not been able to see
any performance degradation. And from what I've read in the above links, huge
pages do not seem to be part of the problem.
So, if you are in a position to bisect the kernel changes, that would probably be
the best avenue to pursue in my opinion.
Bruce
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with ETP enabled
2013-07-11 10:51 ` Andreas Färber
@ 2013-07-12 3:21 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-07-12 3:21 UTC (permalink / raw)
To: Andreas Färber
Cc: KVM, qemu-devel, cloudfantom, mpetersen, Shouta.Uehara,
paolo.bonzini, Michael S. Tsirkin, Huangweidong (C),
Zanghongyong, Luonengjun, Hanweidong, Xiejunyong, Yi Li,
Xin Rong Fu, Xiahai
> Hi,
>
> Am 11.07.2013 11:36, schrieb Zhanghaoyu (A):
> > I met similar problem to these, while performing live migration or
> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> guest:suse11sp2), running tele-communication software suite in guest,
> > https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> > http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> > http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> > https://bugzilla.kernel.org/show_bug.cgi?id=58771
> >
> > After live migration or virsh restore [savefile], one process's CPU
> utilization went up by about 30%, resulted in throughput degradation of
> this process.
> > oprofile report on this process in guest,
> > pre live migration:
>
> So far we've been unable to reproduce this with a pure qemu-kvm /
> qemu-system-x86_64 command line on several EPT machines, whereas for
> virsh it was reported as confirmed. Can you please share the resulting
> QEMU command line from libvirt logs or process list?
The QEMU command line from /var/log/libvirt/qemu/[domain].log:
LC_ALL=C PATH=/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-x86_64 -name CSC2 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 76e03575-a3ad-589a-e039-40160274bb97 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/CSC2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/ne/vm/CSC2.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=22 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:01,bus=pci.0,addr=0x3,bootindex=2 -netdev tap,fd=23,id=hostnet1,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:01,bus=pci.0,addr=0x4 -netdev tap,fd=25,id=hostnet2,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:01,bus=pci.0,addr=0x5 -netdev tap,fd=27,id=hostnet3,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:01,bus=pci.0,addr=0x6 -netdev tap,fd=29,id=hostnet4,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:01,bus=pci.0,addr=0x7 -netdev tap,fd=31,id=hostnet5,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:01,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc *:1 -k en-us -vga cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>
> Are both host and guest kernel at 3.0.80 (latest SLES updates)?
No, both host and guest are just raw sles11-sp2-64-GM, kernel version: 3.0.13-0.27.
Thanks,
Zhang Haoyu
>
> Thanks,
> Andreas
>
> --
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
* Re: vm performance degradation after kvm live migration or save-restore with ETP enabled
2013-07-11 18:20 ` [Qemu-devel] " Bruce Rogers
@ 2013-07-27 7:47 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-07-27 7:47 UTC (permalink / raw)
To: Bruce Rogers, paolo.bonzini, qemu-devel, Michael S. Tsirkin, KVM,
Marcelo Tosatti, Avi Kivity, xiaoguangrong, Gleb Natapov,
Andreas Färber
Cc: Xin Rong Fu, Huangweidong (C),
Hanweidong, Xiejunyong, Luonengjun, Xiahai, Zanghongyong, Yi Li
>> hi all,
>>
>> I met similar problem to these, while performing live migration or
>> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>> guest:suse11sp2), running tele-communication software suite in guest,
>> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>>
>> After live migration or virsh restore [savefile], one process's CPU
>> utilization went up by about 30%, resulted in throughput degradation
>> of this process.
>>
>> If EPT disabled, this problem gone.
>>
>> I suspect that kvm hypervisor has business with this problem.
>> Based on above suspect, I want to find the two adjacent versions of
>> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>> and analyze the differences between this two versions, or apply the
>> patches between this two versions by bisection method, finally find the key patches.
>>
>> Any better ideas?
>>
>> Thanks,
>> Zhang Haoyu
>
>I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>
>So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>
>Bruce
I found the first bad commit ([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) that triggers this problem
by git-bisecting the KVM kernel changes (downloaded from https://git.kernel.org/pub/scm/virt/kvm/kvm.git).
And,
git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
git diff 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
Then I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log against 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
and concluded that all of the differences between 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
are introduced by 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 alone, so this commit is the culprit that directly or indirectly causes the degradation.
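The bisection procedure just described is a binary search over the commit history for the first bad commit, which is what git bisect automates. A minimal sketch of the search itself (the commit names and the cut-off point below are hypothetical, purely for illustration):

```python
# Toy model of `git bisect`: commits are linearly ordered and there is a
# first bad commit; every commit at or after it is bad. Binary search
# finds it in O(log n) build-and-test steps.
def first_bad(commits, is_bad):
    lo, hi = 0, len(commits) - 1   # invariant: first bad commit lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid               # mid is bad: first bad is at or before mid
        else:
            lo = mid + 1           # mid is good: first bad is after mid
    return commits[lo]

history = [f"commit-{i:02d}" for i in range(40)]
bad_from = 23                      # hypothetical: regression introduced here
culprit = first_bad(history, lambda c: int(c.split("-")[1]) >= bad_from)
# culprit == "commit-23"
```

With a real tree the same search is `git bisect start <bad> <good>` plus `git bisect run <test-script>`.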
Does the map_writable flag passed to mmu_set_spte() affect the PTE's PAT flag, or does it increase the number of VM exits induced by the guest writing to read-only memory?
Thanks,
Zhang Haoyu
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with ETP enabled
2013-07-27 7:47 ` [Qemu-devel] " Zhanghaoyu (A)
@ 2013-07-29 22:14 ` Andrea Arcangeli
-1 siblings, 0 replies; 52+ messages in thread
From: Andrea Arcangeli @ 2013-07-29 22:14 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Bruce Rogers, paolo.bonzini, qemu-devel, Michael S. Tsirkin, KVM,
Marcelo Tosatti, Avi Kivity, xiaoguangrong, Gleb Natapov,
Andreas Färber, Xin Rong Fu, Huangweidong (C),
Hanweidong, Xiejunyong, Luonengjun, Xiahai, Zanghongyong, Yi Li
Hi,
On Sat, Jul 27, 2013 at 07:47:49AM +0000, Zhanghaoyu (A) wrote:
> >> hi all,
> >>
> >> I met similar problem to these, while performing live migration or
> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> >> guest:suse11sp2), running tele-communication software suite in guest,
> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
> >>
> >> After live migration or virsh restore [savefile], one process's CPU
> >> utilization went up by about 30%, resulted in throughput degradation
> >> of this process.
> >>
> >> If EPT disabled, this problem gone.
> >>
> >> I suspect that kvm hypervisor has business with this problem.
> >> Based on above suspect, I want to find the two adjacent versions of
> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
> >> and analyze the differences between this two versions, or apply the
> >> patches between this two versions by bisection method, finally find the key patches.
> >>
> >> Any better ideas?
> >>
> >> Thanks,
> >> Zhang Haoyu
> >
> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
> >
> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
> >
> >Bruce
>
> I found the first bad commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem
> by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>
> And,
> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
> git diff 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>
> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
> came to a conclusion that all of the differences between 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
Something must be generating read-only host PTEs for this commit to make a
difference. Considering that live migration or startup actions are involved,
the most likely culprit is a fork() used to start some script or similar.
A fork marks all the PTEs read-only and invalidates the SPTEs through the
MMU notifier.
So then, with all SPTEs dropped and the whole guest address space mapped
read-only, depending on the app we can sometimes get a vmexit to establish
a read-only SPTE on the read-only PTE, and then another vmexit to execute
the COW at the first write fault that follows.
The COW is only actually run if the child is still alive (and normally the
child does fork() + a little work + exec(), so it is unlikely to still be
there).
But it is still 2 vmexits where before there was just 1.
The same overhead should occur with both EPT and no-EPT: there would be
two vmexits in the no-EPT case as well, since there is no way the SPTE can
be marked writable while the host PTE is still read-only.
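The copy-on-write behavior described above is visible from userspace: after fork() both processes share write-protected pages, and the first write faults and copies the page privately. A small illustrative sketch (Linux; the pipe-based handshake is just one way to order the events):

```python
import mmap
import os

# Anonymous private mapping: fork() write-protects it, and the first
# post-fork write triggers a copy-on-write fault.
buf = mmap.mmap(-1, 4096, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
buf[:3] = b"old"

c2p_r, c2p_w = os.pipe()   # child -> parent: child's view of the page
p2c_r, p2c_w = os.pipe()   # parent -> child: "parent has written" signal

pid = os.fork()
if pid == 0:
    os.close(c2p_r)
    os.close(p2c_w)
    os.read(p2c_r, 1)              # wait for the parent's post-fork write
    os.write(c2p_w, buf[:3])       # child still sees the pre-fork contents
    os._exit(0)

os.close(c2p_w)
os.close(p2c_r)
buf[:3] = b"new"                   # first write after fork: fault + COW copy
os.write(p2c_w, b"x")
child_view = os.read(c2p_r, 3)
os.waitpid(pid, 0)
# parent sees b"new", child saw b"old": the write went to a private copy
```

Inside a guest this same write fault is what costs the extra vmexit on an EPT violation.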
If you see a massive overhead and CPU loops in host kernel mode, maybe a
global TLB flush is missing that would get rid of the read-only copy of
the SPTE in the CPU, and all CPUs tend to exit on the same SPTE at the
same time. Or we may lack the TLB flush even for the current CPU, though
we should really flush them all (in the old days the current CPU's TLB
flush was implicit in the vmexit, but CPUs have gained more features since).
I don't know exactly what kind of overhead we're talking about, but merely
doubling the number of vmexits would probably not be measurable. If you
monitor the number of vmexits: with a missing TLB flush you'll see a
flood, otherwise you'll just see double the amount before/after that commit.
If the read-only PTE generator is fork and it's just a doubled number
of vmexits, the only thing you need is the patch I posted a few days ago
that adds the missing madvise(MADV_DONTFORK).
If instead the overhead is massive and it's a vmexit flood, we also
have a missing TLB flush. In that case let's fix the TLB flush first,
and then you can still apply the MADV_DONTFORK patch. This kind of fault
activity also happens after a swapin from read-only swapcache, so if
there's a vmexit flood we need to fix it before applying
MADV_DONTFORK.
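The MADV_DONTFORK fix works because the kernel then skips those VMAs entirely at fork(): the pages are never write-protected for COW, and the child simply has no such mapping. A minimal userspace illustration (Linux-only, Python 3.8+ for mmap.madvise; an illustrative stand-in for guest RAM, not the actual QEMU change):

```python
import mmap
import os

# Anonymous private mapping standing in for guest RAM.
buf = mmap.mmap(-1, 4096, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
buf[:2] = b"hi"
# Ask the kernel not to propagate this VMA to children of fork().
buf.madvise(mmap.MADV_DONTFORK)

pid = os.fork()
if pid == 0:
    # The mapping is absent in the child, so touching it faults.
    _ = buf[0]          # expected to fault (typically SIGSEGV) on Linux
    os._exit(0)         # not reached when the fault kills the child

_, status = os.waitpid(pid, 0)
killed = os.WIFSIGNALED(status)   # child died from the fault
```

Since the parent's mapping is untouched by fork(), no write-protection (and hence no extra write-fault vmexit) is ever introduced for it.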
Thanks,
Andrea
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with ETP enabled
2013-07-27 7:47 ` [Qemu-devel] " Zhanghaoyu (A)
@ 2013-07-29 23:47 ` Marcelo Tosatti
-1 siblings, 0 replies; 52+ messages in thread
From: Marcelo Tosatti @ 2013-07-29 23:47 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Bruce Rogers, paolo.bonzini, qemu-devel, Michael S. Tsirkin, KVM,
Avi Kivity, xiaoguangrong, Gleb Natapov, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
On Sat, Jul 27, 2013 at 07:47:49AM +0000, Zhanghaoyu (A) wrote:
> >> hi all,
> >>
> >> I met similar problem to these, while performing live migration or
> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> >> guest:suse11sp2), running tele-communication software suite in guest,
> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
> >>
> >> After live migration or virsh restore [savefile], one process's CPU
> >> utilization went up by about 30%, resulted in throughput degradation
> >> of this process.
> >>
> >> If EPT disabled, this problem gone.
> >>
> >> I suspect that kvm hypervisor has business with this problem.
> >> Based on above suspect, I want to find the two adjacent versions of
> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
> >> and analyze the differences between this two versions, or apply the
> >> patches between this two versions by bisection method, finally find the key patches.
> >>
> >> Any better ideas?
> >>
> >> Thanks,
> >> Zhang Haoyu
> >
> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
> >
> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
> >
> >Bruce
>
> I found the first bad commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem
> by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>
> And,
> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
> git diff 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>
> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
> came to a conclusion that all of the differences between 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
>
> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
>
> Thanks,
> Zhang Haoyu
>
There should be no read-only memory maps backing guest RAM.
Can you confirm map_writable = false is being passed
to __direct_map? (this should not happen, for guest RAM).
And if it is false, please capture the associated GFN.
It's probably an issue with an older get_user_pages variant
(either in kvm-kmod or the older kernel). Is there any
indication of a similar issue with the upstream kernel?
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-07-29 23:47 ` Marcelo Tosatti
@ 2013-07-30 9:04 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-07-30 9:04 UTC (permalink / raw)
To: Marcelo Tosatti
Cc: Bruce Rogers, paolo.bonzini, qemu-devel, Michael S. Tsirkin, KVM,
Avi Kivity, xiaoguangrong, Gleb Natapov, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
>> >> hi all,
>> >>
>> >> I met similar problem to these, while performing live migration or
>> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>> >> guest:suse11sp2), running tele-communication software suite in
>> >> guest,
>> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>> >>
>> >> After live migration or virsh restore [savefile], one process's CPU
>> >> utilization went up by about 30%, resulted in throughput
>> >> degradation of this process.
>> >>
>> >> If EPT disabled, this problem gone.
>> >>
>> >> I suspect that kvm hypervisor has business with this problem.
>> >> Based on above suspect, I want to find the two adjacent versions of
>> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>> >> and analyze the differences between this two versions, or apply the
>> >> patches between this two versions by bisection method, finally find the key patches.
>> >>
>> >> Any better ideas?
>> >>
>> >> Thanks,
>> >> Zhang Haoyu
>> >
>> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>> >
>> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>> >
>> >Bruce
>>
>> I found the first bad
>> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>>
>> And,
>> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>> git diff
>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
>> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>>
>> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
>> came to a conclusion that all of the differences between
>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
>>
>> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
>>
>> Thanks,
>> Zhang Haoyu
>>
>
>There should be no read-only memory maps backing guest RAM.
>
>Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>And if it is false, please capture the associated GFN.
>
I added the following check and printk at the start of __direct_map(), at the first bad commit:
--- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c	2013-07-26 18:44:05.000000000 +0800
+++ kvm-612819/arch/x86/kvm/mmu.c	2013-07-31 00:05:48.000000000 +0800
@@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
 	int pt_write = 0;
 	gfn_t pseudo_gfn;
 
+	if (!map_writable)
+		printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
+
 	for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
 		if (iterator.level == level) {
 			unsigned pte_access = ACC_ALL;
I virsh-saved the VM and then virsh-restored it; so many GFNs were printed that it can fairly be described as flooding.
>It's probably an issue with an older get_user_pages variant (either in kvm-kmod or the older kernel). Is there any indication of a similar issue with upstream kernel?
I will test the upstream kvm host (https://git.kernel.org/pub/scm/virt/kvm/kvm.git) later; if the problem is still there,
I will revert the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) on upstream and test again.
Also, I collected VMEXIT statistics for the pre-save and post-restore periods at the first bad commit:
pre-save:
COTS-F10S03:~ # perf stat -e "kvm:*" -a sleep 30
Performance counter stats for 'sleep 30':
1222318 kvm:kvm_entry
0 kvm:kvm_hypercall
0 kvm:kvm_hv_hypercall
351755 kvm:kvm_pio
6703 kvm:kvm_cpuid
692502 kvm:kvm_apic
1234173 kvm:kvm_exit
223956 kvm:kvm_inj_virq
0 kvm:kvm_inj_exception
16028 kvm:kvm_page_fault
59872 kvm:kvm_msr
0 kvm:kvm_cr
169596 kvm:kvm_pic_set_irq
81455 kvm:kvm_apic_ipi
245103 kvm:kvm_apic_accept_irq
0 kvm:kvm_nested_vmrun
0 kvm:kvm_nested_intercepts
0 kvm:kvm_nested_vmexit
0 kvm:kvm_nested_vmexit_inject
0 kvm:kvm_nested_intr_vmexit
0 kvm:kvm_invlpga
0 kvm:kvm_skinit
853020 kvm:kvm_emulate_insn
171140 kvm:kvm_set_irq
171534 kvm:kvm_ioapic_set_irq
0 kvm:kvm_msi_set_irq
99276 kvm:kvm_ack_irq
971166 kvm:kvm_mmio
33722 kvm:kvm_fpu
0 kvm:kvm_age_page
0 kvm:kvm_try_async_get_page
0 kvm:kvm_async_pf_not_present
0 kvm:kvm_async_pf_ready
0 kvm:kvm_async_pf_completed
0 kvm:kvm_async_pf_doublefault
30.019069018 seconds time elapsed
post-restore:
COTS-F10S03:~ # perf stat -e "kvm:*" -a sleep 30
Performance counter stats for 'sleep 30':
1327880 kvm:kvm_entry
0 kvm:kvm_hypercall
0 kvm:kvm_hv_hypercall
375189 kvm:kvm_pio
6925 kvm:kvm_cpuid
804414 kvm:kvm_apic
1339352 kvm:kvm_exit
245922 kvm:kvm_inj_virq
0 kvm:kvm_inj_exception
15856 kvm:kvm_page_fault
39500 kvm:kvm_msr
1 kvm:kvm_cr
179150 kvm:kvm_pic_set_irq
98436 kvm:kvm_apic_ipi
247430 kvm:kvm_apic_accept_irq
0 kvm:kvm_nested_vmrun
0 kvm:kvm_nested_intercepts
0 kvm:kvm_nested_vmexit
0 kvm:kvm_nested_vmexit_inject
0 kvm:kvm_nested_intr_vmexit
0 kvm:kvm_invlpga
0 kvm:kvm_skinit
955410 kvm:kvm_emulate_insn
182240 kvm:kvm_set_irq
182562 kvm:kvm_ioapic_set_irq
0 kvm:kvm_msi_set_irq
105267 kvm:kvm_ack_irq
1113999 kvm:kvm_mmio
37789 kvm:kvm_fpu
0 kvm:kvm_age_page
0 kvm:kvm_try_async_get_page
0 kvm:kvm_async_pf_not_present
0 kvm:kvm_async_pf_ready
0 kvm:kvm_async_pf_completed
0 kvm:kvm_async_pf_doublefault
30.000779718 seconds time elapsed
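For a rough comparison, the busiest counters from the two 30-second `perf stat` runs above can be diffed with a quick sketch (values copied by hand from the output; illustrative only):

```python
# Selected KVM tracepoint counts from the two 30 s `perf stat` runs
# above (pre-save vs. post-restore), copied by hand from the output.
pre  = {"kvm_exit": 1234173, "kvm_mmio": 971166,
        "kvm_emulate_insn": 853020, "kvm_page_fault": 16028}
post = {"kvm_exit": 1339352, "kvm_mmio": 1113999,
        "kvm_emulate_insn": 955410, "kvm_page_fault": 15856}

for event in pre:
    delta = 100.0 * (post[event] - pre[event]) / pre[event]
    print(f"{event:18s} {pre[event]:>8d} -> {post[event]:>8d} ({delta:+.1f}%)")
```

kvm_mmio and kvm_emulate_insn rise by roughly 12-15% while kvm_page_fault stays essentially flat, suggesting the extra work after restore is not coming from EPT violations themselves.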
Thanks,
Zhang Haoyu
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-07-30 9:04 ` Zhanghaoyu (A)
@ 2013-08-01 6:16 ` Gleb Natapov
-1 siblings, 0 replies; 52+ messages in thread
From: Gleb Natapov @ 2013-08-01 6:16 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Marcelo Tosatti, Bruce Rogers, paolo.bonzini, qemu-devel,
Michael S. Tsirkin, KVM, xiaoguangrong, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
On Tue, Jul 30, 2013 at 09:04:56AM +0000, Zhanghaoyu (A) wrote:
>
> >> >> hi all,
> >> >>
> >> >> I met similar problem to these, while performing live migration or
> >> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> >> >> guest:suse11sp2), running tele-communication software suite in
> >> >> guest,
> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
> >> >>
> >> >> After live migration or virsh restore [savefile], one process's CPU
> >> >> utilization went up by about 30%, resulted in throughput
> >> >> degradation of this process.
> >> >>
> >> >> If EPT disabled, this problem gone.
> >> >>
> >> >> I suspect that kvm hypervisor has business with this problem.
> >> >> Based on above suspect, I want to find the two adjacent versions of
> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
> >> >> and analyze the differences between this two versions, or apply the
> >> >> patches between this two versions by bisection method, finally find the key patches.
> >> >>
> >> >> Any better ideas?
> >> >>
> >> >> Thanks,
> >> >> Zhang Haoyu
> >> >
> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
> >> >
> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
> >> >
> >> >Bruce
> >>
> >> I found the first bad
> >> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
> >>
> >> And,
> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
> >> git diff
> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
> >>
> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
> >> came to a conclusion that all of the differences between
> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
> >> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
> >>
> >> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
> >>
> >> Thanks,
> >> Zhang Haoyu
> >>
> >
> >There should be no read-only memory maps backing guest RAM.
> >
> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
> >And if it is false, please capture the associated GFN.
> >
> I added below check and printk at the start of __direct_map() at the fist bad commit version,
> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
> int pt_write = 0;
> gfn_t pseudo_gfn;
>
> + if (!map_writable)
> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
> +
> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
> if (iterator.level == level) {
> unsigned pte_access = ACC_ALL;
>
> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
>
The flooding you see happens during the migrate-to-file stage because of dirty
page tracking. If you clear dmesg after virsh save, you should not see any
flooding after virsh restore. I just checked with the latest tree, and I do not.
--
Gleb.
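To make the mechanism above concrete: dirty page tracking write-protects guest memory, so the first write to each page faults, is recorded in a dirty bitmap, and the mapping is made writable again; those transient read-only mappings are what the `!map_writable` printk catches. A toy model of the idea (illustrative only, not actual KVM code):

```python
class DirtyLog:
    """Toy model of migration dirty logging: pages start write-protected,
    and the first write to a page faults, marks it dirty, and unprotects it."""

    def __init__(self, npages):
        self.writable = [False] * npages  # write-protected for logging
        self.dirty = [False] * npages

    def guest_write(self, gfn):
        """Returns True if the write triggered a write-protection fault."""
        faulted = not self.writable[gfn]
        if faulted:
            self.dirty[gfn] = True      # log the page in the dirty bitmap
            self.writable[gfn] = True   # restore the writable mapping
        return faulted

log = DirtyLog(8)
assert log.guest_write(3)        # first write faults and is logged
assert not log.guest_write(3)    # later writes to the same page are free
print([i for i, d in enumerate(log.dirty) if d])  # -> [3]
```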
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-01 6:16 ` Gleb Natapov
@ 2013-08-05 8:35 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-05 8:35 UTC (permalink / raw)
To: Gleb Natapov
Cc: Marcelo Tosatti, Bruce Rogers, paolo.bonzini, qemu-devel,
Michael S. Tsirkin, KVM, xiaoguangrong, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
>> >> >> hi all,
>> >> >>
>> >> >> I met similar problem to these, while performing live migration or
>> >> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>> >> >> guest:suse11sp2), running tele-communication software suite in
>> >> >> guest,
>> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>> >> >>
>> >> >> After live migration or virsh restore [savefile], one process's CPU
>> >> >> utilization went up by about 30%, resulted in throughput
>> >> >> degradation of this process.
>> >> >>
>> >> >> If EPT disabled, this problem gone.
>> >> >>
>> >> >> I suspect that kvm hypervisor has business with this problem.
>> >> >> Based on above suspect, I want to find the two adjacent versions of
>> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>> >> >> and analyze the differences between this two versions, or apply the
>> >> >> patches between this two versions by bisection method, finally find the key patches.
>> >> >>
>> >> >> Any better ideas?
>> >> >>
>> >> >> Thanks,
>> >> >> Zhang Haoyu
>> >> >
>> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>> >> >
>> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>> >> >
>> >> >Bruce
>> >>
>> >> I found the first bad
>> >> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>> >>
>> >> And,
>> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>> >> git diff
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
>> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>> >>
>> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
>> >> came to a conclusion that all of the differences between
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>> >> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
>> >>
>> >> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
>> >>
>> >> Thanks,
>> >> Zhang Haoyu
>> >>
>> >
>> >There should be no read-only memory maps backing guest RAM.
>> >
>> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>> >And if it is false, please capture the associated GFN.
>> >
>> I added below check and printk at the start of __direct_map() at the fist bad commit version,
>> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
>> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
>> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
>> int pt_write = 0;
>> gfn_t pseudo_gfn;
>>
>> + if (!map_writable)
>> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
>> +
>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> if (iterator.level == level) {
>> unsigned pte_access = ACC_ALL;
>>
>> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
>>
>The flooding you see happens during migrate to file stage because of dirty
>page tracking. If you clear dmesg after virsh-save you should not see any
>flooding after virsh-restore. I just checked with latest tree, I do not.
I verified this again.
I virsh-saved the VM; during the saving stage I ran 'dmesg', and no GFNs were printed. Perhaps the switch from the running state to the paused state takes so little time that
no guest writes happen during it.
After the save completed, I ran 'dmesg -c' to clear the buffer anyway, then virsh-restored the VM: 'dmesg' printed a great many GFNs,
and running 'tail -f /var/log/messages' during the restore stage showed GFNs flooding in dynamically as well.
So I am sure the flooding happens during the virsh-restore stage, not during the save (migrate-to-file) stage.
On the VM's normal boot, only a few GFNs are printed, as shown below:
gfn = 16
gfn = 604
gfn = 605
gfn = 606
gfn = 607
gfn = 608
gfn = 609
but during the VM's restore stage, a great many GFNs are printed; some examples are shown below:
2042600
2797777
2797778
2797779
2797780
2797781
2797782
2797783
2797784
2797785
2042602
2846482
2042603
2846483
2042606
2846485
2042607
2846486
2042610
2042611
2846489
2846490
2042614
2042615
2846493
2846494
2042617
2042618
2846497
2042621
2846498
2042622
2042625
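A GFN is just a guest-physical page frame number, so the values above translate to addresses with a single shift. A small helper (a sketch; PAGE_SHIFT = 12 assumes 4 KiB base pages):

```python
PAGE_SHIFT = 12  # assumes 4 KiB base pages

def gfn_to_gpa(gfn):
    """Guest frame number -> guest-physical address of the page's start."""
    return gfn << PAGE_SHIFT

# A few of the GFNs reported above: the boot-time ones sit near the
# bottom of guest physical memory, the restore-time flood several GiB up.
for gfn in (16, 604, 2042600, 2797777):
    gpa = gfn_to_gpa(gfn)
    print(f"gfn {gfn:>7d} -> GPA {gpa:#x} (~{gpa / 2**30:.2f} GiB)")
```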
Thanks,
Zhang Haoyu
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 8:35 ` Zhanghaoyu (A)
@ 2013-08-05 8:43 ` Gleb Natapov
-1 siblings, 0 replies; 52+ messages in thread
From: Gleb Natapov @ 2013-08-05 8:43 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Marcelo Tosatti, Bruce Rogers, paolo.bonzini, qemu-devel,
Michael S. Tsirkin, KVM, xiaoguangrong, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
On Mon, Aug 05, 2013 at 08:35:09AM +0000, Zhanghaoyu (A) wrote:
> >> >> >> hi all,
> >> >> >>
> >> >> >> I met similar problem to these, while performing live migration or
> >> >> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> >> >> >> guest:suse11sp2), running tele-communication software suite in
> >> >> >> guest,
> >> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> >> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> >> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> >> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
> >> >> >>
> >> >> >> After live migration or virsh restore [savefile], one process's CPU
> >> >> >> utilization went up by about 30%, resulted in throughput
> >> >> >> degradation of this process.
> >> >> >>
> >> >> >> If EPT disabled, this problem gone.
> >> >> >>
> >> >> >> I suspect that kvm hypervisor has business with this problem.
> >> >> >> Based on above suspect, I want to find the two adjacent versions of
> >> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
> >> >> >> and analyze the differences between this two versions, or apply the
> >> >> >> patches between this two versions by bisection method, finally find the key patches.
> >> >> >>
> >> >> >> Any better ideas?
> >> >> >>
> >> >> >> Thanks,
> >> >> >> Zhang Haoyu
> >> >> >
> >> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
> >> >> >
> >> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
> >> >> >
> >> >> >Bruce
> >> >>
> >> >> I found the first bad
> >> >> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
> >> >>
> >> >> And,
> >> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
> >> >> git diff
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
> >> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
> >> >>
> >> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
> >> >> came to a conclusion that all of the differences between
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
> >> >> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
> >> >>
> >> >> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
> >> >>
> >> >> Thanks,
> >> >> Zhang Haoyu
> >> >>
> >> >
> >> >There should be no read-only memory maps backing guest RAM.
> >> >
> >> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
> >> >And if it is false, please capture the associated GFN.
> >> >
> >> I added the check and printk below at the start of __direct_map(), at the first bad commit,
> >> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
> >> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
> >> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
> >> int pt_write = 0;
> >> gfn_t pseudo_gfn;
> >>
> >> + if (!map_writable)
> >> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
> >> +
> >> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
> >> if (iterator.level == level) {
> >> unsigned pte_access = ACC_ALL;
> >>
> >> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
> >>
> >The flooding you see happens during migrate to file stage because of dirty
> >page tracking. If you clear dmesg after virsh-save you should not see any
> >flooding after virsh-restore. I just checked with latest tree, I do not.
>
> I made a verification again.
> I virsh-save the VM, during the saving stage, I run 'dmesg', no GFN printed, maybe the switching from running stage to pause stage takes so short time,
> no guest-write happens during this switching period.
> After the completion of the saving operation, I run 'dmesg -c' to clear the buffer all the same, then I virsh-restore the VM, so many GFNs are printed by running 'dmesg',
> and I also run 'tail -f /var/log/messages' during the restoring stage, so many GFNs are flooded dynamically too.
> I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
>
Interesting, is this with an upstream kernel? For me the situation is
exactly the opposite. What is your command line?
--
Gleb.
* RE: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 8:43 ` Gleb Natapov
@ 2013-08-05 9:09 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-05 9:09 UTC (permalink / raw)
To: Gleb Natapov
Cc: Marcelo Tosatti, Bruce Rogers, paolo.bonzini, qemu-devel,
Michael S. Tsirkin, KVM, xiaoguangrong, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
>> >> >> >> hi all,
>> >> >> >>
>> >> >> >> I met similar problem to these, while performing live migration or
>> >> >> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>> >> >> >> guest:suse11sp2), running tele-communication software suite in
>> >> >> >> guest,
>> >> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>> >> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>> >> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>> >> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>> >> >> >>
>> >> >> >> After live migration or virsh restore [savefile], one process's CPU
>> >> >> >> utilization went up by about 30%, resulted in throughput
>> >> >> >> degradation of this process.
>> >> >> >>
>> >> >> >> If EPT disabled, this problem gone.
>> >> >> >>
>> >> >> >> I suspect that kvm hypervisor has business with this problem.
>> >> >> >> Based on above suspect, I want to find the two adjacent versions of
>> >> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>> >> >> >> and analyze the differences between this two versions, or apply the
>> >> >> >> patches between this two versions by bisection method, finally find the key patches.
>> >> >> >>
>> >> >> >> Any better ideas?
>> >> >> >>
>> >> >> >> Thanks,
>> >> >> >> Zhang Haoyu
>> >> >> >
>> >> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>> >> >> >
>> >> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>> >> >> >
>> >> >> >Bruce
>> >> >>
>> >> >> I found the first bad
>> >> >> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>> >> >>
>> >> >> And,
>> >> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>> >> >> git diff
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
>> >> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>> >> >>
>> >> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
>> >> >> came to a conclusion that all of the differences between
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>> >> >> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
>> >> >>
>> >> >> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
>> >> >>
>> >> >> Thanks,
>> >> >> Zhang Haoyu
>> >> >>
>> >> >
>> >> >There should be no read-only memory maps backing guest RAM.
>> >> >
>> >> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>> >> >And if it is false, please capture the associated GFN.
>> >> >
>> >> I added the check and printk below at the start of __direct_map(), at the first bad commit,
>> >> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
>> >> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
>> >> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
>> >> int pt_write = 0;
>> >> gfn_t pseudo_gfn;
>> >>
>> >> + if (!map_writable)
>> >> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
>> >> +
>> >> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> >> if (iterator.level == level) {
>> >> unsigned pte_access = ACC_ALL;
>> >>
>> >> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
>> >>
>> >The flooding you see happens during migrate to file stage because of dirty
>> >page tracking. If you clear dmesg after virsh-save you should not see any
>> >flooding after virsh-restore. I just checked with latest tree, I do not.
>>
>> I made a verification again.
>> I virsh-save the VM, during the saving stage, I run 'dmesg', no GFN printed, maybe the switching from running stage to pause stage takes so short time,
>> no guest-write happens during this switching period.
>> After the completion of the saving operation, I run 'dmesg -c' to clear the buffer all the same, then I virsh-restore the VM, so many GFNs are printed by running 'dmesg',
>> and I also run 'tail -f /var/log/messages' during the restoring stage, so many GFNs are flooded dynamically too.
>> I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
>>
>Interesting, is this with upstream kernel? For me the situation is
>exactly the opposite. What is your command line?
>
I ran the verification on the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4), not on the upstream kernel.
When trying the upstream kernel (commit e769ece3b129698d2b09811a6f6d304e4eaa8c29), I hit a problem: I compiled and installed it on the SLES11 SP2 host with the commands below,
cp /boot/config-3.0.13-0.27-default ./.config
yes "" | make oldconfig
make && make modules_install && make install
then rebooted the host and selected the upstream kernel, but during boot the following error occurred:
Could not find /dev/disk/by-id/scsi-3600508e000000000864407c5b8f7ad01-part3
I'm trying to resolve it.
The QEMU command line (from /var/log/libvirt/qemu/[domain name].log):
LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2 -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
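To make that wall of options easier to digest, a throwaway Python sketch (my own helper, not anything from QEMU or libvirt) can pull out the key facts: on the command line above it reports 12288 MB of RAM, 4 vCPUs, and six virtio-net devices.

```python
# Summarize a QEMU command line: extract memory size, vCPU count, and
# the number of virtio-net devices. Naive token scanning, which is
# sufficient for a libvirt-generated command line like the one above.
import shlex

def summarize(cmdline):
    args = shlex.split(cmdline)
    summary = {"virtio_net": 0}
    for i, a in enumerate(args):
        if a == "-m":
            summary["mem_mb"] = int(args[i + 1])
        elif a == "-smp":
            summary["vcpus"] = int(args[i + 1].split(",")[0])
        elif a == "-device" and args[i + 1].startswith("virtio-net-pci"):
            summary["virtio_net"] += 1
    return summary

cmd = ("qemu-system-x86_64 -name ATS1 -M pc-0.12 -cpu qemu32 -enable-kvm "
       "-m 12288 -smp 4,sockets=4,cores=1,threads=1 "
       "-device virtio-net-pci,netdev=hostnet0 "
       "-device virtio-net-pci,netdev=hostnet1")
print(summarize(cmd))
```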
Thanks,
Zhang Haoyu
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:09 ` Zhanghaoyu (A)
@ 2013-08-05 9:15 ` Andreas Färber
0 siblings, 0 replies; 52+ messages in thread
From: Andreas Färber @ 2013-08-05 9:15 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Gleb Natapov, Marcelo Tosatti, Bruce Rogers, paolo.bonzini,
qemu-devel, Michael S. Tsirkin, KVM, xiaoguangrong, Hanweidong,
Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
Hi,
Am 05.08.2013 11:09, schrieb Zhanghaoyu (A):
> While building the upstream kernel, I encountered a problem. I compiled and installed the upstream tree (commit e769ece3b129698d2b09811a6f6d304e4eaa8c29) on an SLES11SP2 environment via the commands below:
> cp /boot/config-3.0.13-0.27-default ./.config
> yes "" | make oldconfig
> make && make modules_install && make install
> Then I rebooted the host and selected the upstream kernel, but during boot the following problem occurred:
> Could not find /dev/disk/by-id/scsi-3600508e000000000864407c5b8f7ad01-part3
>
> I'm trying to resolve it.
Possibly you need to enable loading unsupported kernel modules?
At least that's needed when testing a kmod with a SUSE kernel.
Regards,
Andreas
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:15 ` Andreas Färber
@ 2013-08-05 9:22 ` Zhanghaoyu (A)
0 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-05 9:22 UTC (permalink / raw)
To: Andreas Färber
Cc: Gleb Natapov, Marcelo Tosatti, Bruce Rogers, paolo.bonzini,
qemu-devel, Michael S. Tsirkin, KVM, xiaoguangrong, Hanweidong,
Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
>Hi,
>
>Am 05.08.2013 11:09, schrieb Zhanghaoyu (A):
>> While building the upstream kernel, I encountered a problem. I compiled and
>> installed the upstream tree (commit e769ece3b129698d2b09811a6f6d304e4eaa8c29)
>> on an SLES11SP2 environment via the commands below:
>> cp /boot/config-3.0.13-0.27-default ./.config
>> yes "" | make oldconfig
>> make && make modules_install && make install
>> Then I rebooted the host and selected the upstream kernel, but during boot
>> the following problem occurred:
>> Could not find /dev/disk/by-id/scsi-3600508e000000000864407c5b8f7ad01-part3
>>
>> I'm trying to resolve it.
>
>Possibly you need to enable loading unsupported kernel modules?
>At least that's needed when testing a kmod with a SUSE kernel.
>
I have tried setting "allow_unsupported_modules 1" in /etc/modprobe.d/unsupported-modules, but the problem still occurred.
Note that I replaced the whole kernel with the kvm tree's kernel, not only the kvm modules.
>Regards,
>Andreas
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:09 ` Zhanghaoyu (A)
@ 2013-08-05 9:37 ` Gleb Natapov
0 siblings, 0 replies; 52+ messages in thread
From: Gleb Natapov @ 2013-08-05 9:37 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Marcelo Tosatti, Bruce Rogers, paolo.bonzini, qemu-devel,
Michael S. Tsirkin, KVM, xiaoguangrong, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
On Mon, Aug 05, 2013 at 09:09:56AM +0000, Zhanghaoyu (A) wrote:
> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2 -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>
Which QEMU version is this? Can you try with e1000 NICs instead of
virtio?
--
Gleb.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 8:35 ` Zhanghaoyu (A)
@ 2013-08-05 18:27 ` Xiao Guangrong
0 siblings, 0 replies; 52+ messages in thread
From: Xiao Guangrong @ 2013-08-05 18:27 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Xiejunyong, Huangweidong (C),
KVM, Gleb Natapov, Michael S. Tsirkin, Luonengjun, Xiahai,
Marcelo Tosatti, paolo.bonzini, qemu-devel, Bruce Rogers,
Zanghongyong, Xin Rong Fu, Yi Li, xiaoguangrong, Hanweidong,
Andreas Färber
On Aug 5, 2013, at 4:35 PM, "Zhanghaoyu (A)" <haoyu.zhang@huawei.com> wrote:
>>>>>>> hi all,
>>>>>>>
>>>>>>> I met similar problem to these, while performing live migration or
>>>>>>> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>>>>>>> guest:suse11sp2), running tele-communication software suite in
>>>>>>> guest,
>>>>>>> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>>>>>>> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>>>>>>> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>>>>>>> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>>>>>>>
>>>>>>> After live migration or virsh restore [savefile], one process's CPU
>>>>>>> utilization went up by about 30%, resulting in throughput
>>>>>>> degradation of this process.
>>>>>>>
>>>>>>> With EPT disabled, this problem is gone.
>>>>>>>
>>>>>>> I suspect that the KVM hypervisor is involved in this problem.
>>>>>>> Based on that suspicion, I want to find two adjacent versions of
>>>>>>> kvm-kmod, one that triggers this problem and one that does not
>>>>>>> (e.g. 2.6.39, 3.0-rc1), and then either analyze the differences
>>>>>>> between those two versions, or bisect the patches between them to find the key patches.
>>>>>>>
>>>>>>> Any better ideas?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Zhang Haoyu
>>>>>>
>>>>>> I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>>>>>>
>>>>>> So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>>>>>>
>>>>>> Bruce
>>>>>
>>>>> By git-bisecting the kvm kernel changes (downloaded from https://git.kernel.org/pub/scm/virt/kvm/kvm.git), I found the first bad commit which triggers this problem: 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 ("KVM: propagate fault r/w information to gup(), allow read-only memory").
>>>>>
>>>>> And,
>>>>> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>>>>> git diff 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>>>>>
>>>>> Then I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>>>>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff and concluded that all of
>>>>> the differences between 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>>>>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>>>>> come from that single commit, so this commit is the culprit which directly or indirectly causes the degradation.
>>>>>
>>>>> Does the map_writable flag passed to mmu_set_spte() affect the PTE's PAT flag, or does it increase the VM exits induced by the guest trying to write read-only memory?
>>>>>
>>>>> Thanks,
>>>>> Zhang Haoyu
>>>>>
>>>>
>>>> There should be no read-only memory maps backing guest RAM.
>>>>
>>>> Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>>>> And if it is false, please capture the associated GFN.
>>>>
>>> I added the check and printk shown below at the start of __direct_map(), at the first bad commit,
>>> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
>>> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
>>> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
>>> int pt_write = 0;
>>> gfn_t pseudo_gfn;
>>>
>>> + if (!map_writable)
>>> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
>>> +
>>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>>> if (iterator.level == level) {
>>> unsigned pte_access = ACC_ALL;
>>>
>>> I virsh-saved the VM and then virsh-restored it; so many GFNs were printed that you could absolutely describe it as flooding.
>>>
>> The flooding you see happens during migrate to file stage because of dirty
>> page tracking. If you clear dmesg after virsh-save you should not see any
>> flooding after virsh-restore. I just checked with latest tree, I do not.
>
> I made a verification again.
> I virsh-saved the VM; during the saving stage I ran 'dmesg' and no GFNs were printed. Perhaps the switch from the running stage to the paused stage takes so little time
> that no guest write happens during the switch.
> After the save operation completed, I ran 'dmesg -c' to clear the buffer anyway, then virsh-restored the VM; very many GFNs were printed by 'dmesg',
> and 'tail -f /var/log/messages' during the restore stage showed the same flood of GFNs arriving dynamically.
> I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
>
> During the VM's normal boot, only very few GFNs are printed, as shown below:
> gfn = 16
> gfn = 604
> gfn = 605
> gfn = 606
> gfn = 607
> gfn = 608
> gfn = 609
>
> but during the VM's restore stage, very many GFNs are printed; some examples are shown below.
That's really strange. Could you please disable EPT and add your trace code to FNAME(fetch)(), then
test again to see what happens?
If there are still many !map_writable cases, please measure the performance to see if the
regression is still present.
Many thanks!
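For readers following the dirty-tracking point raised earlier in this thread: write-protecting guest pages makes the first write to each page fault once before the mapping is made writable again, so a burst of one-time faults is expected whenever pages start out write-protected. The toy Python sketch below is illustrative only, not KVM code; all names (ToyMMU, spte_writable, guest_write) are invented for the illustration.

```python
# Toy model of write-protect fault flooding after a restore.
# "spte_writable" stands in for the writable bit of a shadow/EPT page
# table entry; "faults" counts the first write to each protected page.

class ToyMMU:
    def __init__(self, num_pages):
        # Assume every page starts out write-protected after restore.
        self.spte_writable = [False] * num_pages
        self.faults = 0

    def guest_write(self, gfn):
        if not self.spte_writable[gfn]:
            # First write faults; the handler re-maps the page writable,
            # so later writes to the same gfn are fault-free.
            self.faults += 1
            self.spte_writable[gfn] = True

mmu = ToyMMU(num_pages=1024)
for gfn in range(1024):      # guest touches every page once...
    mmu.guest_write(gfn)
for gfn in range(1024):      # ...and then again
    mmu.guest_write(gfn)

print(mmu.faults)  # each page faults exactly once: 1024
```

This only shows why a one-fault-per-page burst looks like "flooding"; the open question in the thread is why pages are still in the !map_writable state after a restore at all.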
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:37 ` Gleb Natapov
@ 2013-08-06 10:47 ` Zhanghaoyu (A)
0 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-06 10:47 UTC (permalink / raw)
To: Gleb Natapov
Cc: Marcelo Tosatti, Bruce Rogers, paolo.bonzini, qemu-devel,
Michael S. Tsirkin, KVM, xiaoguangrong, Andreas Färber,
Hanweidong, Luonengjun, Huangweidong (C),
Zanghongyong, Xiejunyong, Xiahai, Yi Li, Xin Rong Fu
>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2 -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>>
>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>
This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problems exist, including the performance degradation and the read-only GFN flooding.
I also tried e1000 NICs instead of virtio with QEMU 1.5.2; the same problems, including the performance degradation and the read-only GFN flooding, still occur.
With either e1000 or virtio NICs, the GFN flooding begins at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding starts.
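A quick aid for interpreting the printed gfn values from the earlier trace output: a GFN is a guest page-frame number, so with 4 KiB pages (PAGE_SHIFT = 12, as on x86) it converts to a guest-physical address by a simple shift. The helper below is a hypothetical log-analysis aid, not part of KVM or QEMU.

```python
PAGE_SHIFT = 12  # 4 KiB pages, as on x86

def gfn_to_gpa(gfn):
    """Convert a guest frame number to a guest-physical address."""
    return gfn << PAGE_SHIFT

def gpa_to_gfn(gpa):
    """Convert a guest-physical address back to its frame number."""
    return gpa >> PAGE_SHIFT

# The gfns printed during normal boot map to low guest memory:
print(hex(gfn_to_gpa(16)))   # 0x10000
print(hex(gfn_to_gpa(604)))  # 0x25c000
```

Converting the flooded GFNs this way makes it easy to check which guest memory regions they fall in.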
Thanks,
Zhang Haoyu
>--
> Gleb.
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-06 10:47 ` Zhanghaoyu (A)
@ 2013-08-07 1:34 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-07 1:34 UTC (permalink / raw)
To: Zhanghaoyu (A), Gleb Natapov
Cc: Xiejunyong, Huangweidong (C),
KVM, Michael S. Tsirkin, Luonengjun, Xiahai, Marcelo Tosatti,
paolo.bonzini, qemu-devel, Bruce Rogers, Zanghongyong,
Xin Rong Fu, Yi Li, xiaoguangrong, Hanweidong,
Andreas Färber
>>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2 -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>>>
>>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>>
>This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problems exist, including the performance degradation and the read-only GFN flooding.
>I also tried e1000 NICs instead of virtio with QEMU 1.5.2; the same problems, including the performance degradation and the read-only GFN flooding, still occur.
>With either e1000 or virtio NICs, the GFN flooding begins at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding starts.
>
>Thanks,
>Zhang Haoyu
>
>>--
>> Gleb.
Should we focus on the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
I applied the patch below to __direct_map():
@@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
int pt_write = 0;
gfn_t pseudo_gfn;
+ map_writable = true;
+
for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
if (iterator.level == level) {
unsigned pte_access = ACC_ALL;
and rebuilt the kvm-kmod, then re-insmodded it.
After I started a VM, the host became abnormal: many programs could not be started successfully, and segmentation faults were reported.
In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
Does the way the map_writable value is obtained in hva_to_pfn() have an effect on the result?
Thanks,
Zhang Haoyu
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 1:34 ` Zhanghaoyu (A)
@ 2013-08-07 5:52 ` Gleb Natapov
-1 siblings, 0 replies; 52+ messages in thread
From: Gleb Natapov @ 2013-08-07 5:52 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Xiejunyong, Huangweidong (C),
KVM, Michael S. Tsirkin, Luonengjun, Xiahai, Marcelo Tosatti,
paolo.bonzini, qemu-devel, Bruce Rogers, Zanghongyong,
Xin Rong Fu, Yi Li, xiaoguangrong, Hanweidong,
Andreas Färber
On Wed, Aug 07, 2013 at 01:34:41AM +0000, Zhanghaoyu (A) wrote:
> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/
> >>> QEMU_AUDIO_DRV=none
> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu
> >>> qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
> >>> 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
> >>> -chardev
> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,
> >>> n owait -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >>> base=localtime -no-shutdown -device
> >>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> >>> file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cac
> >>> h
> >>> e=none -device
> >>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,i
> >>> d
> >>> =virtio-disk0,bootindex=1 -netdev
> >>> tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device
> >>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x3,bootindex=2 -netdev
> >>> tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device
> >>> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device
> >>> virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device
> >>> virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device
> >>> virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device
> >>> virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x9 -chardev pty,id=charserial0 -device
> >>> isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga
> >>> cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb
> >>> -watchdog-action poweroff -device
> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
> >>>
> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
> >>
> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problem exists, including the performance degradation and the readonly GFNs' flooding.
> >I also tried e1000 NICs instead of virtio with QEMU 1.5.2; the same problems occurred, including the performance degradation and the readonly GFNs' flooding.
> >With either e1000 or virtio NICs, the GFNs' flooding starts at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding begins.
> >
> >Thanks,
> >Zhang Haoyu
> >
> >>--
> >> Gleb.
>
> Should we focus on the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>
Not really. There is no point in debugging a very old version compiled
with kvm-kmod; there are too many variables in the environment. I cannot
reproduce the GFN flooding on upstream, so the problem may be gone, may
be a result of a kvm-kmod problem, or may be something different in how
I invoke qemu. So the best way to proceed is for you to reproduce with
the upstream version; then at least I will be sure that we are using the
same code.
> I applied below patch to __direct_map(),
> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
> int pt_write = 0;
> gfn_t pseudo_gfn;
>
> + map_writable = true;
> +
> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
> if (iterator.level == level) {
> unsigned pte_access = ACC_ALL;
> and rebuilt the kvm-kmod, then re-insmodded it.
> After I started a VM, the host became abnormal: many programs could not be started successfully, and segmentation faults were reported.
> In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
> Does the way the map_writable value is obtained in hva_to_pfn() have an effect on the result?
>
If hva_to_pfn() returns map_writable == false, it means that the page is
mapped read-only in the primary MMU, so it should not be mapped writable
in the secondary MMU either. This usually should not happen.
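As a sketch of that invariant (not actual KVM code: the SPTE bit layout and the make_spte() helper are invented for illustration), the secondary-MMU writable bit must be the AND of what the guest PTE allows and what map_writable reports for the primary MMU:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: bit 1 stands in for the writable bit; real KVM
 * derives SPTE bits from the paging mode and EPT configuration. */
#define SPTE_WRITABLE (1ULL << 1)

/* Build a secondary-MMU (EPT/shadow) entry for a host pfn.  The entry
 * may be writable only if BOTH the guest PTE allows writes AND the
 * primary-MMU mapping is writable (map_writable from hva_to_pfn()).
 * Unconditionally forcing map_writable = true, as the experimental
 * patch did, lets the guest write through pages the host mapped
 * read-only, which would explain the host-side segmentation faults. */
static uint64_t make_spte(uint64_t pfn, bool pte_allows_write,
                          bool map_writable)
{
    uint64_t spte = pfn << 12;          /* page frame in bits 12+ */
    if (pte_allows_write && map_writable)
        spte |= SPTE_WRITABLE;
    return spte;
}
```

The point is only the AND in the condition; everything else (bit positions, names) is a stand-in.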
--
Gleb.
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 5:52 ` Gleb Natapov
@ 2013-08-14 9:05 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-14 9:05 UTC (permalink / raw)
To: Gleb Natapov
Cc: Marcelo Tosatti, Huangweidong (C),
KVM, Michael S. Tsirkin, paolo.bonzini, Xiejunyong, Luonengjun,
qemu-devel, Xiahai, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong, Bruce Rogers, Hanweidong, Andreas Färber
>> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/
>> >>> QEMU_AUDIO_DRV=none
>> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu
>> >>> qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
>> >>> 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
>> >>> -chardev
>> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,
>> >>> n owait -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> >>> base=localtime -no-shutdown -device
>> >>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>> >>> file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cac
>> >>> h
>> >>> e=none -device
>> >>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,i
>> >>> d
>> >>> =virtio-disk0,bootindex=1 -netdev
>> >>> tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device
>> >>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x3,bootindex=2 -netdev
>> >>> tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device
>> >>> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device
>> >>> virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device
>> >>> virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device
>> >>> virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device
>> >>> virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x9 -chardev pty,id=charserial0 -device
>> >>> isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga
>> >>> cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb
>> >>> -watchdog-action poweroff -device
>> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>> >>>
>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>> >>
>> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problem exists, including the performance degradation and the readonly GFNs' flooding.
>> >I also tried e1000 NICs instead of virtio with QEMU 1.5.2; the same problems occurred, including the performance degradation and the readonly GFNs' flooding.
>> >With either e1000 or virtio NICs, the GFNs' flooding starts at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding begins.
>> >
>> >Thanks,
>> >Zhang Haoyu
>> >
>> >>--
>> >> Gleb.
>>
>> Should we focus on the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>>
>Not really. There is no point in debugging a very old version compiled
>with kvm-kmod; there are too many variables in the environment. I cannot
>reproduce the GFN flooding on upstream, so the problem may be gone, may
>be a result of a kvm-kmod problem, or may be something different in how
>I invoke qemu. So the best way to proceed is for you to reproduce with
>the upstream version; then at least I will be sure that we are using the same code.
>
Thanks, I will test combinations of the upstream kvm kernel and upstream qemu.
Also, the guest OS version I gave above was wrong; the currently running guest OS is SLES10SP4.
Thanks,
Zhang Haoyu
>> I applied below patch to __direct_map(),
>> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
>> int pt_write = 0;
>> gfn_t pseudo_gfn;
>>
>> + map_writable = true;
>> +
>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> if (iterator.level == level) {
>> unsigned pte_access = ACC_ALL;
>> and rebuilt the kvm-kmod, then re-insmodded it.
>> After I started a VM, the host became abnormal: many programs could not be started successfully, and segmentation faults were reported.
>> In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
>> Does the way the map_writable value is obtained in hva_to_pfn() have an effect on the result?
>>
>If hva_to_pfn() returns map_writable == false, it means that the page is
>mapped read-only in the primary MMU, so it should not be mapped writable
>in the secondary MMU either. This usually should not happen.
>
>--
> Gleb.
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 5:52 ` Gleb Natapov
@ 2013-08-20 13:33 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-20 13:33 UTC (permalink / raw)
To: Zhanghaoyu (A), Gleb Natapov, Eric Blake, pl, Paolo Bonzini
Cc: Marcelo Tosatti, Huangweidong (C),
KVM, Michael S. Tsirkin, paolo.bonzini, Xiejunyong, Luonengjun,
qemu-devel, Xiahai, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong, Bruce Rogers, Hanweidong, Andreas Färber
>>> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>>> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/
>>> >>> QEMU_AUDIO_DRV=none
>>> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu
>>> >>> qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1
>>> >>> -uuid
>>> >>> 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
>>> >>> -chardev
>>> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,ser
>>> >>> ver, n owait -mon chardev=charmonitor,id=monitor,mode=control
>>> >>> -rtc base=localtime -no-shutdown -device
>>> >>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>>> >>> file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw
>>> >>> ,cac
>>> >>> h
>>> >>> e=none -device
>>> >>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-dis
>>> >>> k0,i
>>> >>> d
>>> >>> =virtio-disk0,bootindex=1 -netdev
>>> >>> tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device
>>> >>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.
>>> >>> 0
>>> >>> ,addr=0x3,bootindex=2 -netdev
>>> >>> tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device
>>> >>> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.
>>> >>> 0
>>> >>> ,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25
>>> >>> -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.
>>> >>> 0
>>> >>> ,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27
>>> >>> -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.
>>> >>> 0
>>> >>> ,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29
>>> >>> -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.
>>> >>> 0
>>> >>> ,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31
>>> >>> -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.
>>> >>> 0
>>> >>> ,addr=0x9 -chardev pty,id=charserial0 -device
>>> >>> isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga
>>> >>> cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb
>>> >>> -watchdog-action poweroff -device
>>> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>>> >>>
>>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>>> >>
>>> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problem exists, including the performance degradation and the readonly GFNs' flooding.
>>> >I also tried e1000 NICs instead of virtio with QEMU 1.5.2; the same problems occurred, including the performance degradation and the readonly GFNs' flooding.
>>> >With either e1000 or virtio NICs, the GFNs' flooding starts at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding begins.
>>> >
>>> >Thanks,
>>> >Zhang Haoyu
>>> >
>>> >>--
>>> >> Gleb.
>>>
>>> Should we focus on the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>>>
>>Not really. There is no point in debugging a very old version compiled
>>with kvm-kmod; there are too many variables in the environment. I cannot
>>reproduce the GFN flooding on upstream, so the problem may be gone, may
>>be a result of a kvm-kmod problem, or may be something different in how
>>I invoke qemu. So the best way to proceed is for you to reproduce with
>>the upstream version; then at least I will be sure that we are using the same code.
>>
>Thanks, I will test combinations of the upstream kvm kernel and upstream qemu.
>Also, the guest OS version I gave above was wrong; the currently running guest OS is SLES10SP4.
>
I tested the following combinations of kvm kernel and QEMU:
+-----------------+-----------------+-----------------+
| kvm kernel | QEMU | test result |
+-----------------+-----------------+-----------------+
| kvm-3.11-2 | qemu-1.5.2 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.0.0 | BAD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.4.0 | BAD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.4.2 | BAD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.0-rc0 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.0 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.1 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.2 | GOOD |
+-----------------+-----------------+-----------------+
NOTE:
1. kvm-3.11-2 in the table above is the full kernel tree at that tag, downloaded from https://git.kernel.org/pub/scm/virt/kvm/kvm.git
2. SLES11SP2's kernel version is 3.0.13-0.27
Then I git-bisected the QEMU changes between qemu-1.4.2 and qemu-1.5.0-rc0, marking each good version as bad and each bad version as good,
so that the first "bad" commit reported by bisect is actually the patch which fixes the degradation problem.
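The good/bad inversion works because git bisect is just a binary search for the first commit the tester calls "bad". A minimal C model (the 12-commit history and the position of the fix are made-up assumptions; indices stand in for QEMU commit ids):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* git bisect normally finds the first commit for which the tester
 * answers "bad".  By answering "bad" when the degradation is GONE and
 * "good" when it still reproduces, the same search converges on the
 * first commit that fixes the bug. */
static size_t bisect_first_fix(size_t n_commits, bool (*is_fixed)(size_t))
{
    size_t lo = 0, hi = n_commits - 1;  /* caller ensures is_fixed(n-1) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (is_fixed(mid))              /* reported as "bad" (inverted) */
            hi = mid;
        else                            /* reported as "good" (inverted) */
            lo = mid + 1;
    }
    return lo;                          /* first fixing commit */
}

/* Assumed history: the degradation disappears from commit 7 onward. */
static bool fixed_from_7(size_t i) { return i >= 7; }
```

In this model bisect_first_fix(12, fixed_from_7) reports index 7, i.e. the patch that fixed the problem.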
+------------+-------------------------------------------+-----------------+-----------------+
| bisect No. | commit | save-restore | migration |
+------------+-------------------------------------------+-----------------+-----------------+
| 1 | 03e94e39ce5259efdbdeefa1f249ddb499d57321 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 2 | 99835e00849369bab726a4dc4ceed1f6f9ed967c | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
| 3 | 62e1aeaee4d0450222a0ea43c713b59526e3e0fe | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 4 | 9d9801cf803cdceaa4845fe27150b24d5ab083e6 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 5 | d76bb73549fcac07524aea5135280ea533a94fd6 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 6 | d913829f0fd8451abcb1fd9d6dfce5586d9d7e10 | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
| 7 | d2f38a0acb0a1c5b7ab7621a32d603d08d513bea | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 8 | e344b8a16de429ada3d9126f26e2a96d71348356 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 9 | 56ded708ec38e4cb75a7c7357480ca34c0dc6875 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 10 | 78d07ae7ac74bcc7f79aeefbaff17fb142f44b4d | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 11 | 70c8652bf3c1fea79b7b68864e86926715c49261 | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
| 12 | f1c72795af573b24a7da5eb52375c9aba8a37972 | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
NOTE: the above tests were performed on SLES11SP2.
So commit f1c72795af573b24a7da5eb52375c9aba8a37972 is the patch which fixes the degradation.
Then I replaced SLES11SP2's default kvm-kmod with kvm-kmod-3.6 and applied the patch below to __direct_map():
@@ -2599,6 +2599,9 @@ static int __direct_map(struct kvm_vcpu
 	int emulate = 0;
 	gfn_t pseudo_gfn;
 
+	if (!map_writable)
+		printk(KERN_ERR "%s: %s: gfn = %llu\n", __FILE__, __func__, gfn);
+
 	for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
 		if (iterator.level == level) {
 			unsigned pte_access = ACC_ALL;
Then I rebuilt kvm-kmod, re-insmodded it, and tested the two adjacent commits again; the results are shown below:
+------------+-------------------------------------------+-----------------+-----------------+
| bisect No. | commit | save-restore | migration |
+------------+-------------------------------------------+-----------------+-----------------+
| 10 | 78d07ae7ac74bcc7f79aeefbaff17fb142f44b4d | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 12 | f1c72795af573b24a7da5eb52375c9aba8a37972 | GOOD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
While testing commit 78d07ae7ac74bcc7f79aeefbaff17fb142f44b4d, the GFN flooding started as soon as the restoration/migration completed;
some examples are shown below:
2073462
2857203
2073463
2073464
2073465
3218751
2073466
2857206
2857207
2073467
2073468
2857210
2857211
3218752
2857214
2857215
3218753
2857217
2857218
2857221
2857222
3218754
2857225
2857226
3218755
2857229
2857230
2857232
2857233
3218756
2780393
2780394
2857236
2780395
2857237
2780396
2780397
2780398
2780399
2780400
2780401
3218757
2857240
2857241
2857244
3218758
2857247
2857248
2857251
2857252
3218759
2857255
2857256
3218760
2857289
2857290
2857293
2857294
3218761
2857297
2857298
3218762
3218763
3218764
3218765
3218766
3218767
3218768
3218769
3218770
3218771
3218772
But after a period of time, the flooding rate slowed down.
While testing commit f1c72795af573b24a7da5eb52375c9aba8a37972, no GFN was printed after restoration and there was no performance degradation,
but as soon as the live migration completed, the GFN flooding started and the performance degradation happened as well.
NOTE: the test results for commit f1c72795af573b24a7da5eb52375c9aba8a37972 seemed to be unstable; I will verify them again.
>Thanks,
>Zhang Haoyu
>
>>> I applied the patch below to __direct_map(),
>>> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
>>> int pt_write = 0;
>>> gfn_t pseudo_gfn;
>>>
>>> + map_writable = true;
>>> +
>>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>>> if (iterator.level == level) {
>>> unsigned pte_access = ACC_ALL;
>>> and rebuilt the kvm-kmod, then re-insmodded it.
>>> After I started a VM, the host seemed to be abnormal: many programs could not be started successfully, and segmentation faults were reported.
>>> In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
>>> Does the way map_writable is obtained in hva_to_pfn() affect the result?
>>>
>>If hva_to_pfn() returns map_writable == false, it means that the page is
>>mapped as read-only on the primary MMU, so it should not be mapped writable
>>on the secondary MMU either. This should not usually happen.
>>
>>--
>> Gleb.
* Re: vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 5:52 ` Gleb Natapov
@ 2013-08-31 7:45 ` Zhanghaoyu (A)
-1 siblings, 0 replies; 52+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-31 7:45 UTC (permalink / raw)
To: Gleb Natapov, pl, Eric Blake, quintela, Paolo Bonzini,
Andreas Färber, xiaoguangrong
Cc: Marcelo Tosatti, Huangweidong (C),
KVM, Michael S. Tsirkin, Xiejunyong, Luonengjun, qemu-devel,
Xiahai, Zanghongyong, Xin Rong Fu, Yi Li, Bruce Rogers,
Hanweidong
I tested the following combos of QEMU and kernel:
+------------------------+-----------------+-------------+
| kernel | QEMU | migration |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.6.0 | GOOD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.6.0* | BAD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.5.1 | BAD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6*| qemu-1.5.1 | GOOD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.5.1* | GOOD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.5.2 | BAD |
+------------------------+-----------------+-------------+
| kvm-3.11-2 | qemu-1.5.1 | BAD |
+------------------------+-----------------+-------------+
NOTE:
1. kvm-3.11-2 : the whole tagged kernel, downloaded from https://git.kernel.org/pub/scm/virt/kvm/kvm.git
2. SLES11SP2+kvm-kmod-3.6 : our release kernel, i.e. SLES11SP2 (kernel 3.0.13-0.27) with its default kvm-kmod replaced by kvm-kmod-3.6
3. qemu-1.6.0* : qemu-1.6.0 with commit 211ea74022f51164a7729030b28eec90b6c99a08 reverted
4. kvm-kmod-3.6* : kvm-kmod-3.6 with EPT disabled
5. qemu-1.5.1* : qemu-1.5.1 with the patch below applied, deleting the qemu_madvise() call in the ram_load() function
--- qemu-1.5.1/arch_init.c 2013-06-27 05:47:29.000000000 +0800
+++ qemu-1.5.1_fix3/arch_init.c 2013-08-28 19:43:42.000000000 +0800
@@ -842,7 +842,6 @@ static int ram_load(QEMUFile *f, void *o
         if (ch == 0 &&
             (!kvm_enabled() || kvm_has_sync_mmu()) &&
             getpagesize() <= TARGET_PAGE_SIZE) {
-            qemu_madvise(host, TARGET_PAGE_SIZE, QEMU_MADV_DONTNEED);
         }
 #endif
     } else if (flags & RAM_SAVE_FLAG_PAGE) {
If I apply the above patch to qemu-1.5.1 to delete the qemu_madvise() call, the test result for the combo of SLES11SP2+kvm-kmod-3.6 and qemu-1.5.1 is good.
Why do we perform qemu_madvise(QEMU_MADV_DONTNEED) on those zero pages?
Does the qemu_madvise() call have a sustained effect on that range of virtual addresses? In other words, does qemu_madvise() have a sustained effect on VM performance?
If the range of virtual addresses which has been advised as DONTNEED is later read/written frequently, could performance degradation happen?
The reason the combo of SLES11SP2+kvm-kmod-3.6 and qemu-1.6.0 is good is commit 211ea74022f51164a7729030b28eec90b6c99a08:
if I revert commit 211ea74022f51164a7729030b28eec90b6c99a08 on qemu-1.6.0, the test result for the combo of SLES11SP2+kvm-kmod-3.6 and qemu-1.6.0 is also bad, and the performance degradation happened too.
Thanks,
Zhang Haoyu
>> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none
>> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 -enable-kvm
>> >>> -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5
>> >>> -no-user-config -nodefaults
>> >>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait
>> >>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown
>> >>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>> >>> -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
>> >>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> >>> -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21
>> >>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2
>> >>> -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23
>> >>> -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4
>> >>> -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25
>> >>> -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5
>> >>> -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27
>> >>> -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6
>> >>> -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29
>> >>> -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7
>> >>> -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31
>> >>> -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9
>> >>> -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
>> >>> -vnc *:0 -k en-us -vga cirrus
>> >>> -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff
>> >>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>> >>>
>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>> >>
>> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2 and the same problems exist, including the performance degradation and the readonly GFNs' flooding.
>> >I also tried with e1000 NICs instead of virtio (on QEMU 1.5.2); the same problems, including the performance degradation and the readonly GFNs' flooding, still occurred.
>> >No matter whether e1000 NICs or virtio NICs are used, the GFNs' flooding is initiated at the post-restore stage (i.e. the running stage); as soon as the restore completes, the flooding starts.
>> >
>> >Thanks,
>> >Zhang Haoyu
>> >
>> >>--
>> >> Gleb.
>>
>> Should we focus on the first bad commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>>
>Not really. There is no point in debugging a very old version compiled with kvm-kmod; there are too many variables in the environment. I cannot reproduce the GFN flooding on upstream, so the problem may be gone, may be a result of a kvm-kmod problem, or of something different in how I invoke qemu. So the best way to proceed is for you to reproduce with the upstream version; then at least I will be sure that we are using the same code.
>
>> I applied the patch below to __direct_map(),
>> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
>> int pt_write = 0;
>> gfn_t pseudo_gfn;
>>
>> + map_writable = true;
>> +
>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> if (iterator.level == level) {
>> unsigned pte_access = ACC_ALL;
>> and rebuilt the kvm-kmod, then re-insmodded it.
>> After I started a VM, the host seemed to be abnormal: many programs could not be started successfully, and segmentation faults were reported.
>> In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
>> Does the way map_writable is obtained in hva_to_pfn() affect the result?
>>
>If hva_to_pfn() returns map_writable == false, it means that the page is mapped as read-only on the primary MMU, so it should not be mapped writable on the secondary MMU either. This should not usually happen.
>
>--
> Gleb.