* [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration
@ 2015-09-10 14:38 c sights
  2015-09-17 14:37 ` [Qemu-devel] [Bug 1494350] " Alexandre Derumier
                   ` (8 more replies)
  0 siblings, 9 replies; 13+ messages in thread
From: c sights @ 2015-09-10 14:38 UTC (permalink / raw)
  To: qemu-devel

Public bug reported:

I'm pasting in text from Debian Bug 785557
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557
because I couldn't find this issue reported here.

It is present in QEMU 2.3, but I haven't tested later versions.  Perhaps
someone else will find this bug and confirm for later versions.  (Or I
will when I have time!)

--------------------------------------------------------------------------------------------

Hi,

I'm trying to debug an issue we're having with some debian.org machines 
running in QEMU 2.1.2 instances (see [1] for more background). In short, 
after a live migration guests running Debian Jessie (linux 3.16) stop 
accounting CPU time properly. /proc/stat in the guest shows no increase 
in user and system time anymore (regardless of workload) and what stands 
out are extremely large values for steal time:

 % cat /proc/stat
 cpu  2400 0 1842 650879168 2579640 0 25 136562317270 0 0
 cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0
 cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0
 cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0
 cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0
 intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 ctxt 837862829
 btime 1431642967
 processes 8529939
 procs_running 1
 procs_blocked 0
 softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225
 
Reading the memory pointed to by the steal time MSRs pre- and 
post-migration, I can see that post-migration the high bytes are set to 
0xff:

(qemu) xp /8b 0x1fc0cfc0
000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff
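
Decoding note (mine, not part of the original report): x86 stores this value
little-endian, so the 8 bytes above read back as the u64 0xfffffffff5775794,
i.e. a value just below 2^64, which is what a small negative quantity looks
like when kept in an unsigned field. A minimal C sketch of the decode, with
the bytes hard-coded from the dump above:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          /* bytes as printed by "xp /8b 0x1fc0cfc0", lowest address first */
          uint8_t bytes[8] = { 0x94, 0x57, 0x77, 0xf5, 0xff, 0xff, 0xff, 0xff };
          uint64_t steal;

          memcpy(&steal, bytes, sizeof(steal)); /* little-endian host assumed */
          printf("steal = %#llx\n", (unsigned long long)steal);
          /* prints 0xfffffffff5775794: the 0xff high bytes come from a
           * wrapped-around (effectively negative) value stored unsigned */
          return 0;
  }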

The "jump" in steal time happens when the guest is resumed on the 
receiving side.

I've also been able to consistently reproduce this on a Ganeti cluster 
at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The 
issue goes away if I disable the steal time MSR using `-cpu 
qemu64,-kvm_steal_time`.

So, it looks to me as if the steal time MSR is not set/copied properly 
during live migration, although AFAICT this should be the case after 
917367aa968fd4fef29d340e0c7ec8c608dffaab.

After investigating a bit more, it looks like the issue comes from an overflow
in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023):

  static void accumulate_steal_time(struct kvm_vcpu *vcpu)
  {
          u64 delta;
  
          if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
                  return;
  
          delta = current->sched_info.run_delay - vcpu->arch.st.last_steal;
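
Since delta is a u64, this subtraction can never go negative: if the current
thread's run_delay is smaller than the previously recorded last_steal (taken
from a different thread), it silently wraps around to an enormous value. A
minimal standalone C sketch of that wraparound (my own illustration, not
kernel code), plugging in the run_delay/steal values that appear in the
trace below:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t last_steal = 7856949; /* ns, recorded from the old thread */
          uint64_t run_delay  = 40304;   /* ns, of the thread running now */

          /* same expression as in accumulate_steal_time(): u64 arithmetic
           * wraps around instead of going negative */
          uint64_t delta = run_delay - last_steal;

          printf("delta = %llu ns\n", (unsigned long long)delta);
          /* prints 18446744073701734971, the value seen in the trace below */
          return 0;
  }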
  
Using systemtap with the attached script to trace KVM execution on the 
receiving host kernel, we can see that shortly before marking the vCPUs 
as runnable on a migrated KVM instance with 2 vCPUs, the following 
happens (** marks lines of interest):

 **  0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns
     0 qemu-system-x86(18446): -> kvm_arch_vcpu_load
     0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick
     5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick
    23 qemu-system-x86(18446): <- kvm_arch_vcpu_load
     0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl
     2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl
     0 qemu-system-x86(18446): -> kvm_arch_vcpu_put
     2 qemu-system-x86(18446):  -> kvm_put_guest_fpu
     3 qemu-system-x86(18446):  <- kvm_put_guest_fpu
     4 qemu-system-x86(18446): <- kvm_arch_vcpu_put
 **  0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns
     0 qemu-system-x86(18446): -> kvm_arch_vcpu_load
     1 qemu-system-x86(18446): <- kvm_arch_vcpu_load
     0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl
     1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl
     0 qemu-system-x86(18446): -> kvm_arch_vcpu_put
     1 qemu-system-x86(18446):  -> kvm_put_guest_fpu
     2 qemu-system-x86(18446):  <- kvm_put_guest_fpu
     3 qemu-system-x86(18446): <- kvm_arch_vcpu_put
 **  0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns
     0 qemu-system-x86(18449): -> kvm_arch_vcpu_load
 **  7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns
    10 qemu-system-x86(18449): <- kvm_arch_vcpu_load
 **  0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run
     4 qemu-system-x86(18449):  -> kvm_arch_vcpu_runnable
     6 qemu-system-x86(18449):  <- kvm_arch_vcpu_runnable
     ...
     0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns
     0 qemu-system-x86(18448): -> kvm_arch_vcpu_load
 ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns
    40 qemu-system-x86(18448): <- kvm_arch_vcpu_load
 **  0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run
     5 qemu-system-x86(18448):  -> kvm_arch_vcpu_runnable
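
(Arithmetic check, not part of the original trace output: 2^64 -
18446744073701734971 = 7816645 = 7856949 - 40304, and 2^64 -
18446744073701694667 = 7856949 = 7856949 - 0, so both huge "delta" values
are exactly the wrapped-around negative differences run_delay - steal.)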

Now, what's really interesting is that current->sched_info.run_delay 
gets reset because the tasks (threads) using the vCPUs change, and thus 
have a different current->sched_info: it looks like task 18446 created 
the two vCPUs, and then they were handed over to 18448 and 18449 
respectively. This is also verified by the fact that during the 
overflow, both vCPUs have the old steal time of the last vcpu_load of 
task 18446. However, according to Documentation/virtual/kvm/api.txt:

 - vcpu ioctls: These query and set attributes that control the operation
   of a single virtual cpu.

   Only run vcpu ioctls from the same thread that was used to create the
   vcpu.
 
So it seems qemu is doing something that it shouldn't: calling vCPU 
ioctls from a thread that didn't create the vCPU. Note that this 
probably happens on every QEMU startup, but is not visible because the 
guest kernel zeroes out the steal time on boot.

There are at least two ways to mitigate the issue without a kernel
recompilation:

 - The first one is to disable the steal time propagation from host to 
   guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will 
   short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val & 
   KVM_MSR_ENABLED) and will completely disable steal time reporting in 
   the guest, which may not be desired if people rely on it to detect 
   CPU congestion.

 - The other one is using the following systemtap script to prevent the 
   steal time counter from overflowing by dropping the problematic 
   samples (WARNING: systemtap guru mode required, use at your own 
   risk):

      probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") {
        if (@defined($delta) && $delta < 0) {
          printk(4, "kvm: steal time delta < 0, dropping")
          $delta = 0
        }
      }
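
   For reference, a guru-mode script like the one above is normally loaded
   with systemtap's -g flag, e.g. (assuming it is saved as steal-guard.stp,
   a filename I've made up here):

       stap -g steal-guard.stp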

Note that not all *guests* handle this condition in the same way: 3.2 
guests still get the overflow in /proc/stat, but their scheduler 
continues to work as expected. 3.16 guests OTOH go nuts once steal time 
overflows and stop accumulating system & user time, while entering an 
erratic state where steal time in /proc/stat is *decreasing* on every 
clock tick.
-------------------------------------------- Revised statement:
> Now, what's really interesting is that current->sched_info.run_delay 
> gets reset because the tasks (threads) using the vCPUs change, and 
> thus have a different current->sched_info: it looks like task 18446 
> created the two vCPUs, and then they were handed over to 18448 and 
> 18449 respectively. This is also verified by the fact that during the 
> overflow, both vCPUs have the old steal time of the last vcpu_load of 
> task 18446. However, according to Documentation/virtual/kvm/api.txt:

The above is not entirely accurate: the vCPUs were created by the 
threads that are used to run them (18448 and 18449 respectively), it's 
just that the main thread is issuing ioctls during initialization, as 
illustrated by the strace output on a different process:

 [ vCPU #0 thread creating vCPU #0 (fd 20) ]
 [pid  1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20
 [pid  1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0
 [pid  1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0
 [pid  1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0
 
 [ vCPU #1 thread creating vCPU #1 (fd 21) ]
 [pid  1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21
 [pid  1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0
 [pid  1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0
 [pid  1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0
 
 [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]
 [pid  1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0
 [pid  1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0
 [pid  1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
 [pid  1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0
 [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87
 [pid  1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
 [pid  1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
 [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
 [pid  1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
 [pid  1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
 
 [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]
 [pid  1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0
 [pid  1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0
 [pid  1859] ioctl(21, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
 [pid  1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0
 [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87
 [pid  1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
 [pid  1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
 [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
 [pid  1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
 [pid  1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
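
 (For reference: output like the above is typically captured with something
 along the lines of `strace -f -e trace=ioctl -p <qemu-pid>`; the exact
 invocation isn't given in the report.)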
 
Using systemtap again, I noticed that the main thread's run_delay is copied to
last_steal only after a KVM_SET_MSRS ioctl which enables the steal time 
MSR is issued by the main thread (see linux 
3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I 
reverted the following qemu commits:

 commit 0e5035776df31380a44a1a851850d110b551ecb6
 Author: Marcelo Tosatti <mtosatti@redhat.com>
 Date:   Tue Sep 3 18:55:16 2013 -0300
 
     fix steal time MSR vmsd callback to proper opaque type
     
     Convert steal time MSR vmsd callback pointer to proper X86CPU type.
     
     Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
     Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
 commit 917367aa968fd4fef29d340e0c7ec8c608dffaab
 Author: Marcelo Tosatti <mtosatti@redhat.com>
 Date:   Tue Feb 19 23:27:20 2013 -0300
 
     target-i386: kvm: save/restore steal time MSR
     
     Read and write steal time MSR, so that reporting is functional across
     migration.
     
     Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
     Signed-off-by: Gleb Natapov <gleb@redhat.com>
 
and the steal time jump on migration went away. However, steal time was 
not reported at all after migration, which is expected after reverting 
917367aa.

So it seems that after 917367aa, the steal time MSR is correctly saved 
and copied to the receiving side, but then it is restored by the main 
thread (probably during cpu_synchronize_all_post_init()), causing the 
overflow when the vCPU threads are unpaused.

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1494350

Title:
  QEMU: causes vCPU steal time overflow on live migration

Status in QEMU:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1494350/+subscriptions


* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
@ 2015-09-17 14:37 ` Alexandre Derumier
  2015-10-06  9:50 ` Mário Reis
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Derumier @ 2015-09-17 14:37 UTC (permalink / raw)
  To: qemu-devel

Hi,

I confirm this bug.

I have seen it many times with Debian Jessie (kernel 3.16) and Ubuntu
(kernel 4.x) guests, using QEMU 2.2 and QEMU 2.3.


* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
  2015-09-17 14:37 ` [Qemu-devel] [Bug 1494350] " Alexandre Derumier
@ 2015-10-06  9:50 ` Mário Reis
  2015-10-15 10:16 ` Markus Breitegger
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Mário Reis @ 2015-10-06  9:50 UTC (permalink / raw)
  To: qemu-devel

Hi,

Same issue here: Gentoo kernels 3.18 and 4.0, with QEMU 2.2, 2.3, and 2.4.


* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
  2015-09-17 14:37 ` [Qemu-devel] [Bug 1494350] " Alexandre Derumier
  2015-10-06  9:50 ` Mário Reis
@ 2015-10-15 10:16 ` Markus Breitegger
  2015-10-15 11:15   ` Alexandre DERUMIER
  2015-10-15 11:44 ` Markus Breitegger
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 13+ messages in thread
From: Markus Breitegger @ 2015-10-15 10:16 UTC (permalink / raw)
  To: qemu-devel

Hi,

I've seen the same issue with Debian Jessie.

After compiling kernel 4.2.3 from kernel.org with "make localyesconfig",
the problem no longer occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1494350/+subscriptions

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-10-15 10:16 ` Markus Breitegger
@ 2015-10-15 11:15   ` Alexandre DERUMIER
  0 siblings, 0 replies; 13+ messages in thread
From: Alexandre DERUMIER @ 2015-10-15 11:15 UTC (permalink / raw)
  To: Bug 1494350; +Cc: qemu-devel

>>Hi,
Hi

>>
>>I've seen the same issue with debian jessie.
>>
>>Compiled 4.2.3 from kernel.org with "make localyesconfig",
>>no problem any more

Host kernel or guest kernel?




^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
                   ` (2 preceding siblings ...)
  2015-10-15 10:16 ` Markus Breitegger
@ 2015-10-15 11:44 ` Markus Breitegger
  2015-10-16  8:41 ` Dr. David Alan Gilbert
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Markus Breitegger @ 2015-10-15 11:44 UTC (permalink / raw)
  To: qemu-devel

Sorry, I think I've pasted my answer into the wrong bug report. I'm not using live migration.
https://lists.debian.org/debian-kernel/2014/11/msg00093.html

but it seems to be something Debian-related.

Following scenario:

Host Ubuntu 14.04
Guest Debian Jessie

Debian with the 3.16.0-4-amd64 kernel shows high CPU load on one of our
webservers:

output from top

  PID USER     PR  NI S %CPU      TIME+ COMMAND
   18 root     20   0 S 11,0   50:10.35 ksoftirqd/2
   28 root     20   0 S 11,0   49:45.90 ksoftirqd/4
   13 root     20   0 S 10,1   51:25.18 ksoftirqd/1
   23 root     20   0 S 10,1   55:42.26 ksoftirqd/3
   33 root     20   0 S  8,3   43:12.53 ksoftirqd/5
    3 root     20   0 S  7,4   43:19.93 ksoftirqd/0

With the backports kernel 4.2.0-0.bpo.1-amd64 or 4.2.3 from kernel.org, the
CPU usage is back to normal.

I ran into this problem on multiple Debian Jessie KVM guests. I think
this is not QEMU-related. Sorry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1494350/+subscriptions

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
                   ` (3 preceding siblings ...)
  2015-10-15 11:44 ` Markus Breitegger
@ 2015-10-16  8:41 ` Dr. David Alan Gilbert
  2015-11-19  3:41 ` lickdragon
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2015-10-16  8:41 UTC (permalink / raw)
  To: qemu-devel

See the kvm list post here; Marcelo has a fix:

http://www.spinics.net/lists/kvm/msg122175.html
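
For readers following along, here is a minimal sketch of the kind of guard
that avoids the wraparound. This is only an illustration in the spirit of the
systemtap workaround quoted in the bug description (which forces $delta = 0);
it is not the actual patch from the linked post:

    #include <stdint.h>
    #include <stdio.h>

    /* Clamp instead of letting the u64 subtraction wrap around. */
    static uint64_t safe_steal_delta(uint64_t run_delay, uint64_t last_steal)
    {
            if (run_delay < last_steal)
                    return 0;   /* drop the bogus sample */
            return run_delay - last_steal;
    }

    int main(void)
    {
            /* Values from the trace: vCPU thread's run_delay vs. stale last_steal. */
            printf("%llu\n", (unsigned long long)safe_steal_delta(40304, 7856949));
            return 0;
    }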

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1494350/+subscriptions

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
                   ` (4 preceding siblings ...)
  2015-10-16  8:41 ` Dr. David Alan Gilbert
@ 2015-11-19  3:41 ` lickdragon
  2015-11-19  9:09 ` Dr. David Alan Gilbert
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: lickdragon @ 2015-11-19  3:41 UTC (permalink / raw)
  To: qemu-devel

Hello,
   I read in the thread
"
Applied to kvm/queue.  Thanks Marcelo, and thanks David for the review.

Paolo
"
But I cannot find where the patch has entered the qemu git repo:
http://git.qemu.org/?p=qemu.git&a=search&h=HEAD&st=author&s=Tosatti

Is it not there yet?
Thanks!
C.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
                   ` (5 preceding siblings ...)
  2015-11-19  3:41 ` lickdragon
@ 2015-11-19  9:09 ` Dr. David Alan Gilbert
  2015-11-19 14:51   ` lickdragon
  2015-11-19 15:10   ` lickdragon
  2015-11-19 16:20 ` Dr. David Alan Gilbert
  2016-02-22 10:27 ` Markus Schade
  8 siblings, 2 replies; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2015-11-19  9:09 UTC (permalink / raw)
  To: qemu-devel

Hi lickdragon,
  That's because the fix turned out to be in the kernel's KVM code; I can see it in the 4.4-rc1 upstream kernel.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-11-19  9:09 ` Dr. David Alan Gilbert
@ 2015-11-19 14:51   ` lickdragon
  2015-11-19 15:10   ` lickdragon
  1 sibling, 0 replies; 13+ messages in thread
From: lickdragon @ 2015-11-19 14:51 UTC (permalink / raw)
  To: qemu-devel

> Hi lickdragon,
>   That's because the fix turned out to be in the kernel's KVM code; I can
> see it in the 4.4-rc1 upstream kernel.

Thanks!  I found it.
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7cae2bedcbd4680b155999655e49c27b9cf020fa
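
As I read that commit, the fix moves the steal time accumulation out of the
KVM_SET_MSRS write path and into vCPU entry time, so last_steal is only ever
compared against the run_delay of the thread that actually runs the vCPU.
A toy userspace model of the fixed behaviour (my own sketch, not kernel code;
the starting value reuses the run_delay from the trace in the original
report, and the 12345 ns increment is a made-up example):

    /* Post-fix behaviour, modelled in userspace: last_steal is seeded and
     * consumed on the same (vCPU) thread, so the subtraction cannot wrap. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* The vCPU thread records its own run_delay at guest entry...      */
        uint64_t last_steal = 40304;               /* ns */

        /* ...and subtracts it from its current run_delay on a later entry. */
        uint64_t run_delay_now = 40304 + 12345;    /* made-up later value */
        uint64_t delta = run_delay_now - last_steal;

        printf("delta = %" PRIu64 " ns\n", delta); /* 12345, no overflow */
        return 0;
    }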


** Changed in: qemu
       Status: New => Fix Committed


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-11-19  9:09 ` Dr. David Alan Gilbert
  2015-11-19 14:51   ` lickdragon
@ 2015-11-19 15:10   ` lickdragon
  1 sibling, 0 replies; 13+ messages in thread
From: lickdragon @ 2015-11-19 15:10 UTC (permalink / raw)
  To: qemu-devel

To clarify: does the 4.4 kernel need to be running on the VM host, rather
than in the guests?

Thanks again!

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1494350

Title:
  QEMU: causes vCPU steal time overflow on live migration

Status in QEMU:
  Fix Committed

Bug description:
  I'm pasting in text from Debian Bug 785557
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557
  b/c I couldn't find this issue reported.

  It is present in QEMU 2.3, but I haven't tested later versions.
  Perhaps someone else will find this bug and confirm for later
  versions.  (Or I will when I have time!)

  --------------------------------------------------------------------------------------------

  Hi,

  I'm trying to debug an issue we're having with some debian.org machines 
  running in QEMU 2.1.2 instances (see [1] for more background). In short, 
  after a live migration guests running Debian Jessie (linux 3.16) stop 
  accounting CPU time properly. /proc/stat in the guest shows no increase 
  in user and system time anymore (regardless of workload) and what stands 
  out are extremely large values for steal time:

   % cat /proc/stat
   cpu  2400 0 1842 650879168 2579640 0 25 136562317270 0 0
   cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0
   cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0
   cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0
   cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0
   intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   ctxt 837862829
   btime 1431642967
   processes 8529939
   procs_running 1
   procs_blocked 0
   softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225
   
  Reading the memory pointed to by the steal time MSRs pre- and 
  post-migration, I can see that post-migration the high bytes are set to 
  0xff:

  (qemu) xp /8b 0x1fc0cfc0
  000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff

  The "jump" in steal time happens when the guest is resumed on the 
  receiving side.

  I've also been able to consistently reproduce this on a Ganeti cluster 
  at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The 
  issue goes away if I disable the steal time MSR using `-cpu 
  qemu64,-kvm_steal_time`.

  So, it looks to me as if the steal time MSR is not set/copied properly 
  during live migration, although AFAICT this should be the case after 
  917367aa968fd4fef29d340e0c7ec8c608dffaab.

  After investigating a bit more, it looks like the issue comes from an overflow
  in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023):

    static void accumulate_steal_time(struct kvm_vcpu *vcpu)
    {
            u64 delta;
    
            if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
                    return;
    
            delta = current->sched_info.run_delay - vcpu->arch.st.last_steal;
    
  Using systemtap with the attached script to trace KVM execution on the 
  receiving host kernel, we can see that shortly before marking the vCPUs 
  as runnable on a migrated KVM instance with 2 vCPUs, the following 
  happens (** marks lines of interest):

   **  0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns
       0 qemu-system-x86(18446): -> kvm_arch_vcpu_load
       0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick
       5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick
      23 qemu-system-x86(18446): <- kvm_arch_vcpu_load
       0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl
       2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl
       0 qemu-system-x86(18446): -> kvm_arch_vcpu_put
       2 qemu-system-x86(18446):  -> kvm_put_guest_fpu
       3 qemu-system-x86(18446):  <- kvm_put_guest_fpu
       4 qemu-system-x86(18446): <- kvm_arch_vcpu_put
   **  0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns
       0 qemu-system-x86(18446): -> kvm_arch_vcpu_load
       1 qemu-system-x86(18446): <- kvm_arch_vcpu_load
       0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl
       1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl
       0 qemu-system-x86(18446): -> kvm_arch_vcpu_put
       1 qemu-system-x86(18446):  -> kvm_put_guest_fpu
       2 qemu-system-x86(18446):  <- kvm_put_guest_fpu
       3 qemu-system-x86(18446): <- kvm_arch_vcpu_put
   **  0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns
       0 qemu-system-x86(18449): -> kvm_arch_vcpu_load
   **  7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns
      10 qemu-system-x86(18449): <- kvm_arch_vcpu_load
   **  0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run
       4 qemu-system-x86(18449):  -> kvm_arch_vcpu_runnable
       6 qemu-system-x86(18449):  <- kvm_arch_vcpu_runnable
       ...
       0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns
       0 qemu-system-x86(18448): -> kvm_arch_vcpu_load
   ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns
      40 qemu-system-x86(18448): <- kvm_arch_vcpu_load
   **  0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run
       5 qemu-system-x86(18448):  -> kvm_arch_vcpu_runnable

  Now, what's really interesting is that current->sched_info.run_delay 
  gets reset because the tasks (threads) using the vCPUs change, and thus 
  have a different current->sched_info: it looks like task 18446 created 
  the two vCPUs, and then they were handed over to 18448 and 18449 
  respectively. This is also verified by the fact that during the 
  overflow, both vCPUs have the old steal time of the last vcpu_load of 
  task 18446. However, according to Documentation/virtual/kvm/api.txt:

   - vcpu ioctls: These query and set attributes that control the operation
     of a single virtual cpu.

     Only run vcpu ioctls from the same thread that was used to create
  the vcpu.

  
   
  So it seems qemu is doing something that it shouldn't: calling vCPU 
  ioctls from a thread that didn't create the vCPU. Note that this 
  probably happens on every QEMU startup, but is not visible because the 
  guest kernel zeroes out the steal time on boot.

  There are at least two ways to mitigate the issue without a kernel
  recompilation:

   - The first one is to disable the steal time propagation from host to 
     guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will 
     short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val & 
     KVM_MSR_ENABLED) and will completely disable steal time reporting in 
     the guest, which may not be desired if people rely on it to detect 
     CPU congestion.

   - The other one is using the following systemtap script to prevent the 
     steal time counter from overflowing by dropping the problematic 
     samples (WARNING: systemtap guru mode required, use at your own 
     risk):

        probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") {
          if (@defined($delta) && $delta < 0) {
            printk(4, "kvm: steal time delta < 0, dropping")
            $delta = 0
          }
        }

  Note that not all *guests* handle this condition in the same way: 3.2 
  guests still get the overflow in /proc/stat, but their scheduler 
  continues to work as expected. 3.16 guests OTOH go nuts once steal time 
  overflows and stop accumulating system & user time, while entering an 
  erratic state where steal time in /proc/stat is *decreasing* on every 
  clock tick.
  -------------------------------------------- Revised statement:
  > Now, what's really interesting is that current->sched_info.run_delay 
  > gets reset because the tasks (threads) using the vCPUs change, and 
  > thus have a different current->sched_info: it looks like task 18446 
  > created the two vCPUs, and then they were handed over to 18448 and 
  > 18449 respectively. This is also verified by the fact that during the 
  > overflow, both vCPUs have the old steal time of the last vcpu_load of 
  > task 18446. However, according to Documentation/virtual/kvm/api.txt:

  The above is not entirely accurate: the vCPUs were created by the 
  threads that are used to run them (18448 and 18449 respectively), it's 
  just that the main thread is issuing ioctls during initialization, as 
  illustrated by the strace output on a different process:

   [ vCPU #0 thread creating vCPU #0 (fd 20) ]
   [pid  1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20
   [pid  1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0
   [pid  1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0
   [pid  1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0
   
   [ vCPU #1 thread creating vCPU #1 (fd 21) ]
   [pid  1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21
   [pid  1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0
   [pid  1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0
   [pid  1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0
   
   [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]
   [pid  1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0
   [pid  1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0
   [pid  1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
   [pid  1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0
   [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87
   [pid  1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
   [pid  1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
   [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
   [pid  1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
   [pid  1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
   
   [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]
   [pid  1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0
   [pid  1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0
   [pid  1859] ioctl(21, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
   [pid  1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0
   [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87
   [pid  1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
   [pid  1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
   [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
   [pid  1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
   [pid  1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
   
  Using systemtap again, I noticed that the main thread's run_delay is 
  copied to last_steal only when the main thread issues a KVM_SET_MSRS 
  ioctl that enables the steal time MSR (see linux 
  3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I 
  reverted the following qemu commits:

   commit 0e5035776df31380a44a1a851850d110b551ecb6
   Author: Marcelo Tosatti <mtosatti@redhat.com>
   Date:   Tue Sep 3 18:55:16 2013 -0300
   
       fix steal time MSR vmsd callback to proper opaque type
       
       Convert steal time MSR vmsd callback pointer to proper X86CPU type.
       
       Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
   
   commit 917367aa968fd4fef29d340e0c7ec8c608dffaab
   Author: Marcelo Tosatti <mtosatti@redhat.com>
   Date:   Tue Feb 19 23:27:20 2013 -0300
   
       target-i386: kvm: save/restore steal time MSR
       
       Read and write steal time MSR, so that reporting is functional across
       migration.
       
       Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
       Signed-off-by: Gleb Natapov <gleb@redhat.com>
   
  and the steal time jump on migration went away. However, steal time was 
  not reported at all after migration, which is expected after reverting 
  917367aa.

  So it seems that after 917367aa, the steal time MSR is correctly saved 
  and copied to the receiving side, but then it is restored by the main 
  thread (probably during cpu_synchronize_all_post_init()), causing the 
  overflow when the vCPU threads are unpaused.
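
  Putting it together, the overflow needs both halves of that sequence:
  the MSR restore performed by the main thread seeds last_steal with the
  *main* thread's run_delay, and the first vcpu_load from the vCPU thread
  then subtracts that value from its own, much smaller run_delay. A small
  standalone simulation of the sequence (a rough paraphrase of the 3.16
  behaviour, not the actual kernel code):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    static uint64_t last_steal;   /* stand-in for vcpu->arch.st.last_steal     */
    static uint64_t steal;        /* stand-in for the guest-visible steal time */

    /* Rough equivalent of accumulate_steal_time(): plain u64 arithmetic,
     * with no guard against run_delay going "backwards".                 */
    static void accumulate(uint64_t run_delay)
    {
        uint64_t delta = run_delay - last_steal;  /* wraps if run_delay < last_steal */
        last_steal = run_delay;
        steal += delta;
    }

    int main(void)
    {
        /* 1. Main thread restores MSR_KVM_STEAL_TIME via KVM_SET_MSRS
         *    (probably from cpu_synchronize_all_post_init(), as argued
         *    above): last_steal gets the main thread's run_delay.        */
        last_steal = 7856949;

        /* 2. The vCPU thread is unpaused; kvm_arch_vcpu_load() then
         *    accumulates steal time against its own run_delay.           */
        accumulate(40304);

        /* Prints the wrapped value seen in the trace:
         * steal = 18446744073701734971 ns                                */
        printf("steal = %" PRIu64 " ns\n", steal);
        return 0;
    }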

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1494350/+subscriptions

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
                   ` (6 preceding siblings ...)
  2015-11-19  9:09 ` Dr. David Alan Gilbert
@ 2015-11-19 16:20 ` Dr. David Alan Gilbert
  2016-02-22 10:27 ` Markus Schade
  8 siblings, 0 replies; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2015-11-19 16:20 UTC (permalink / raw)
  To: qemu-devel

I think that's host.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1494350

Title:
  QEMU: causes vCPU steal time overflow on live migration

Status in QEMU:
  Fix Committed

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1494350/+subscriptions

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Qemu-devel] [Bug 1494350] Re: QEMU: causes vCPU steal time overflow on live migration
  2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
                   ` (7 preceding siblings ...)
  2015-11-19 16:20 ` Dr. David Alan Gilbert
@ 2016-02-22 10:27 ` Markus Schade
  8 siblings, 0 replies; 13+ messages in thread
From: Markus Schade @ 2016-02-22 10:27 UTC (permalink / raw)
  To: qemu-devel

The upstream fix for the kernel should be backported to trusty

** Project changed: qemu => linux

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1494350

Title:
  QEMU: causes vCPU steal time overflow on live migration

Status in Linux:
  Fix Committed

To manage notifications about this bug go to:
https://bugs.launchpad.net/linux/+bug/1494350/+subscriptions

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2016-02-22 10:36 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-10 14:38 [Qemu-devel] [Bug 1494350] [NEW] QEMU: causes vCPU steal time overflow on live migration c sights
2015-09-17 14:37 ` [Qemu-devel] [Bug 1494350] " Alexandre Derumier
2015-10-06  9:50 ` Mário Reis
2015-10-15 10:16 ` Markus Breitegger
2015-10-15 11:15   ` Alexandre DERUMIER
2015-10-15 11:44 ` Markus Breitegger
2015-10-16  8:41 ` Dr. David Alan Gilbert
2015-11-19  3:41 ` lickdragon
2015-11-19  9:09 ` Dr. David Alan Gilbert
2015-11-19 14:51   ` lickdragon
2015-11-19 15:10   ` lickdragon
2015-11-19 16:20 ` Dr. David Alan Gilbert
2016-02-22 10:27 ` Markus Schade
