From: Simon Kaegi <1888601@bugs.launchpad.net>
To: qemu-devel@nongnu.org
Subject: [Bug 1888601] Re: QEMU v5.1.0-rc0/rc1 hang with nested virtualization
Date: Wed, 29 Jul 2020 20:39:39 -0000	[thread overview]
Message-ID: <159605517989.16612.5973732656627579612.malone@wampee.canonical.com> (raw)
In-Reply-To: 159547584008.11100.1316842366379773629.malonedeb@wampee.canonical.com

```
(gdb) thread apply all bt

Thread 5 (LWP 211759):
#0  0x00007ff56a9988d8 in g_str_hash ()
#1  0x00007ff56a997a0c in g_hash_table_lookup ()
#2  0x00007ff56a6c528f in type_table_lookup (name=0x7ff56ac9a9dd "virtio-bus") at qom/object.c:84
#3  type_get_by_name (name=0x7ff56ac9a9dd "virtio-bus") at qom/object.c:171
#4  object_class_dynamic_cast (class=class@entry=0x555556d92ac0, typename=typename@entry=0x7ff56ac9a9dd "virtio-bus") at qom/object.c:879
#5  0x00007ff56a6c55b5 in object_class_dynamic_cast_assert (class=0x555556d92ac0, typename=typename@entry=0x7ff56ac9a9dd "virtio-bus", file=file@entry=0x7ff56aca60b8 "/root/qemu/hw/virtio/virtio.c", line=line@entry=3290, 
    func=func@entry=0x7ff56aca6c30 <__func__.31954> "virtio_queue_enabled") at qom/object.c:935
#6  0x00007ff56a415842 in virtio_queue_enabled (vdev=0x555557ed9be0, n=0) at /root/qemu/hw/virtio/virtio.c:3290
#7  0x00007ff56a5c0837 in vhost_net_start_one (dev=0x555557ed9be0, net=0x555556f99ca0) at hw/net/vhost_net.c:259
#8  vhost_net_start (dev=dev@entry=0x555557ed9be0, ncs=0x555557eef030, total_queues=total_queues@entry=2) at hw/net/vhost_net.c:351
#9  0x00007ff56a3f2d98 in virtio_net_vhost_status (status=<optimized out>, n=0x555557ed9be0) at /root/qemu/hw/net/virtio-net.c:268
#10 virtio_net_set_status (vdev=0x555557ed9be0, status=<optimized out>) at /root/qemu/hw/net/virtio-net.c:349
#11 0x00007ff56a413bdb in virtio_set_status (vdev=vdev@entry=0x555557ed9be0, val=val@entry=7 '\a') at /root/qemu/hw/virtio/virtio.c:1956
#12 0x00007ff56a65bdf0 in virtio_ioport_write (val=7, addr=18, opaque=0x555557ed1a50) at hw/virtio/virtio-pci.c:331
#13 virtio_pci_config_write (opaque=0x555557ed1a50, addr=18, val=<optimized out>, size=<optimized out>) at hw/virtio/virtio-pci.c:455
#14 0x00007ff56a46eb2a in memory_region_write_accessor (attrs=..., mask=255, shift=<optimized out>, size=1, value=0x7ff463ffd5f8, addr=<optimized out>, mr=0x555557ed2340) at /root/qemu/softmmu/memory.c:483
#15 access_with_adjusted_size (attrs=..., mr=0x555557ed2340, access_fn=<optimized out>, access_size_max=<optimized out>, access_size_min=<optimized out>, size=1, value=0x7ff463ffd5f8, addr=18)
    at /root/qemu/softmmu/memory.c:544
#16 memory_region_dispatch_write (mr=mr@entry=0x555557ed2340, addr=<optimized out>, data=<optimized out>, op=<optimized out>, attrs=..., attrs@entry=...) at /root/qemu/softmmu/memory.c:1465
#17 0x00007ff56a3a94b2 in flatview_write_continue (fv=0x7ff45426a7c0, addr=addr@entry=53394, attrs=..., attrs@entry=..., ptr=ptr@entry=0x7ff5687eb000, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, 
    mr=0x555557ed2340) at /root/qemu/include/qemu/host-utils.h:164
#18 0x00007ff56a3adc4d in flatview_write (len=1, buf=0x7ff5687eb000, attrs=..., addr=53394, fv=<optimized out>) at /root/qemu/exec.c:3216
#19 address_space_write (len=1, buf=0x7ff5687eb000, attrs=..., addr=53394, as=0x7ff5687eb000) at /root/qemu/exec.c:3307
#20 address_space_rw (as=as@entry=0x7ff56b444d60 <address_space_io>, addr=addr@entry=53394, attrs=attrs@entry=..., buf=0x7ff5687eb000, len=len@entry=1, is_write=is_write@entry=true) at /root/qemu/exec.c:3317
#21 0x00007ff56a3cdd5f in kvm_handle_io (count=1, size=1, direction=<optimized out>, data=<optimized out>, attrs=..., port=53394) at /root/qemu/accel/kvm/kvm-all.c:2262
#22 kvm_cpu_exec (cpu=cpu@entry=0x555556ffaea0) at /root/qemu/accel/kvm/kvm-all.c:2508
#23 0x00007ff56a46503c in qemu_kvm_cpu_thread_fn (arg=0x555556ffaea0) at /root/qemu/softmmu/cpus.c:1188
#24 qemu_kvm_cpu_thread_fn (arg=arg@entry=0x555556ffaea0) at /root/qemu/softmmu/cpus.c:1160
#25 0x00007ff56a7d0f13 in qemu_thread_start (args=<optimized out>) at util/qemu-thread-posix.c:521
#26 0x00007ff56ab95109 in start_thread (arg=<optimized out>) at pthread_create.c:477
#27 0x00007ff56ac43353 in clone ()

Thread 4 (LWP 211758):
#0  0x00007ff56ac3eebb in ioctl ()
#1  0x00007ff56a3cd98b in kvm_vcpu_ioctl (cpu=cpu@entry=0x555556fb4ac0, type=type@entry=44672) at /root/qemu/accel/kvm/kvm-all.c:2631
#2  0x00007ff56a3cdac5 in kvm_cpu_exec (cpu=cpu@entry=0x555556fb4ac0) at /root/qemu/accel/kvm/kvm-all.c:2468
#3  0x00007ff56a46503c in qemu_kvm_cpu_thread_fn (arg=0x555556fb4ac0) at /root/qemu/softmmu/cpus.c:1188
#4  qemu_kvm_cpu_thread_fn (arg=arg@entry=0x555556fb4ac0) at /root/qemu/softmmu/cpus.c:1160
#5  0x00007ff56a7d0f13 in qemu_thread_start (args=<optimized out>) at util/qemu-thread-posix.c:521
#6  0x00007ff56ab95109 in start_thread (arg=<optimized out>) at pthread_create.c:477
#7  0x00007ff56ac43353 in clone ()

Thread 3 (LWP 211757):
#0  0x00007ff56ac3dd0f in poll ()
#1  0x00007ff56a9aa5de in g_main_context_iterate.isra () at pthread_create.c:679
#2  0x00007ff56a9aa963 in g_main_loop_run () at pthread_create.c:679
#3  0x00007ff56a4a5b71 in iothread_run (opaque=opaque@entry=0x555556e0c800) at iothread.c:82
#4  0x00007ff56a7d0f13 in qemu_thread_start (args=<optimized out>) at util/qemu-thread-posix.c:521
#5  0x00007ff56ab95109 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007ff56ac43353 in clone ()

Thread 2 (LWP 211752):
#0  0x00007ff56ac4007d in syscall ()
#1  0x00007ff56a7d1e32 in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at /root/qemu/include/qemu/futex.h:29
#2  qemu_event_wait () at util/qemu-thread-posix.c:460
#3  0x00007ff56a7dc0f2 in call_rcu_thread () at util/rcu.c:258
#4  0x00007ff56a7d0f13 in qemu_thread_start (args=<optimized out>) at util/qemu-thread-posix.c:521
#5  0x00007ff56ab95109 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007ff56ac43353 in clone ()

Thread 1 (LWP 211751):
#0  __lll_lock_wait (futex=futex@entry=0x7ff56b447980 <qemu_global_mutex>, private=0) at lowlevellock.c:52
#1  0x00007ff56ab97263 in __pthread_mutex_lock (mutex=mutex@entry=0x7ff56b447980 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x00007ff56a7d1087 in qemu_mutex_lock_impl (mutex=0x7ff56b447980 <qemu_global_mutex>, file=0x7ff56adcf1e3 "util/main-loop.c", line=238) at util/qemu-thread-posix.c:79
#3  0x00007ff56a466f8e in qemu_mutex_lock_iothread_impl (file=file@entry=0x7ff56adcf1e3 "util/main-loop.c", line=line@entry=238) at /root/qemu/softmmu/cpus.c:1782
#4  0x00007ff56a7e909d in os_host_main_loop_wait (timeout=951196740) at util/main-loop.c:238
#5  main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:516
#6  0x00007ff56a47876e in qemu_main_loop () at /root/qemu/softmmu/vl.c:1676
#7  0x00007ff56a3a5b52 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /root/qemu/softmmu/main.c:49
```
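
Reading the trace: Thread 5 is a vCPU thread handling a virtio-pci status write (virtio_pci_config_write -> virtio_set_status -> vhost_net_start -> virtio_queue_enabled) and is busy inside the QOM class lookup for "virtio-bus" (frames #0-#5) called from virtio_queue_enabled() (frame #6), while Thread 1, the main loop, is blocked on qemu_global_mutex. So whatever the vCPU thread is doing in virtio_queue_enabled() never finishes, and the rest of QEMU stalls behind the global lock, which from the outside looks exactly like a hang.

As far as I can tell, the helper in frame #6 looks up the bus class and, if the transport provides a queue_enabled method, calls it; the bisected commit quoted below adds such a method for virtio-pci. For illustration only, here is a small self-contained C model of that dispatch pattern. All names and types are invented stand-ins, not QEMU code. The point it demonstrates: if the transport hook were to fall back into the generic helper for devices that lack VERSION_1, the two functions would call each other forever, keeping the vCPU thread inside exactly this kind of class lookup while it holds the lock. Whether the real code does this is only a guess on my part and needs to be checked against hw/virtio/virtio-pci.c and hw/virtio/virtio.c in rc1.

```
/*
 * Toy model of the dispatch pattern in frames #3-#6 above.
 * All names and types here are invented stand-ins, NOT QEMU code:
 * "transport_class" plays the role of the virtio bus class, and
 * "queue_enabled_hook" plays the role of a per-transport
 * queue_enabled method like the one the bisected commit adds.
 */
#include <stdbool.h>
#include <stdio.h>

#define FEATURE_VERSION_1 (1u << 0)   /* stand-in for VIRTIO_F_VERSION_1 */

struct device;

struct transport_class {
    /* optional per-transport hook */
    bool (*queue_enabled_hook)(struct device *dev, int n);
};

struct device {
    struct transport_class *klass;
    unsigned features;
    bool desc_addr_set[4];            /* stand-in for the legacy check */
    int depth;                        /* recursion counter, for the demo */
};

/* Stand-in for the generic helper (frame #6): prefer the transport hook,
 * otherwise fall back to a legacy "is the descriptor address set?" test. */
static bool generic_queue_enabled(struct device *dev, int n)
{
    if (++dev->depth > 10) {          /* cap the demo instead of hanging */
        printf("mutual recursion detected after %d calls\n", dev->depth);
        return false;
    }
    if (dev->klass->queue_enabled_hook) {
        return dev->klass->queue_enabled_hook(dev, n);
    }
    return dev->desc_addr_set[n];
}

/* Stand-in for a transport hook that only knows the answer for modern
 * (VERSION_1) devices and falls back to the generic helper otherwise.
 * For a legacy (disable-modern=true) device this calls straight back
 * into generic_queue_enabled() and never terminates on its own. */
static bool pci_queue_enabled_hook(struct device *dev, int n)
{
    if (dev->features & FEATURE_VERSION_1) {
        return true;                  /* modern path: per-queue enable bit */
    }
    return generic_queue_enabled(dev, n);
}

int main(void)
{
    struct transport_class pci = { .queue_enabled_hook = pci_queue_enabled_hook };
    struct device legacy_dev = { .klass = &pci, .features = 0,
                                 .desc_addr_set = { true }, .depth = 0 };

    /* With a legacy device the hook and the helper call each other forever;
     * the counter above stops the demo after a few rounds. */
    generic_queue_enabled(&legacy_dev, 0);
    return 0;
}
```

Note that every virtio-pci device in the command line quoted below is created with disable-modern=true, i.e. without VERSION_1, so a legacy-only fallback path like the one modelled above would be taken on every call in our setup.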

-- 
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1888601

Title:
  QEMU v5.1.0-rc0/rc1 hang with nested virtualization

Status in QEMU:
  New

Bug description:
  We're running Kata Containers using QEMU, and with v5.1.0-rc0 and -rc1
  we have noticed a problem at startup where QEMU appears to hang. We are
  not seeing this problem on our bare metal nodes, only on a VSI that
  supports nested virtualization.

  Unfortunately, we see nothing at all in the QEMU logs to help us
  understand the problem, so a hung process is just a guess at this
  point.

  Using git bisect, we first see the problem with...

  ---

  f19bcdfedd53ee93412d535a842a89fa27cae7f2 is the first bad commit
  commit f19bcdfedd53ee93412d535a842a89fa27cae7f2
  Author: Jason Wang <jasowang@redhat.com>
  Date:   Wed Jul 1 22:55:28 2020 +0800

      virtio-pci: implement queue_enabled method

      With version 1, we can detect whether a queue is enabled via
      queue_enabled.

      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: Cindy Lu <lulu@redhat.com>
      Message-Id: <20200701145538.22333-5-lulu@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Acked-by: Jason Wang <jasowang@redhat.com>

   hw/virtio/virtio-pci.c | 13 +++++++++++++
   1 file changed, 13 insertions(+)

  ---

  Reverting this commit (on top of v5.1.0-rc1) seems to work and
  prevents the hang.

  ---

  Here's how Kata ends up launching QEMU in our environment:
  /opt/kata/bin/qemu-system-x86_64 -name sandbox-849df14c6065931adedb9d18bc9260a6d896f1814a8c5cfa239865772f1b7a5f -uuid 6bec458e-1da7-4847-a5d7-5ab31d4d2465 -machine pc,accel=kvm,kernel_irqchip -cpu host,pmu=off -qmp unix:/run/vc/vm/849df14c6065931adedb9d18bc9260a6d896f1814a8c5cfa239865772f1b7a5f/qmp.sock,server,nowait -m 4096M,slots=10,maxmem=30978M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=true,id=serial0,romfile= -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/849df14c6065931adedb9d18bc9260a6d896f1814a8c5cfa239865772f1b7a5f/console.sock,server,nowait -device virtio-scsi-pci,id=scsi0,disable-modern=true,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0,romfile= -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/vm/849df14c6065931adedb9d18bc9260a6d896f1814a8c5cfa239865772f1b7a5f/kata.sock,server,nowait -chardev socket,id=char-396c5c3e19e29353,path=/run/vc/vm/849df14c6065931adedb9d18bc9260a6d896f1814a8c5cfa239865772f1b7a5f/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-396c5c3e19e29353,tag=kataShared,romfile= -netdev tap,id=network-0,vhost=on,vhostfds=3:4,fds=5:6 -device driver=virtio-net-pci,netdev=network-0,mac=52:ac:2d:02:1f:6f,disable-modern=true,mq=on,vectors=6,romfile= -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -object memory-backend-file,id=dimm1,size=4096M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /opt/kata/share/kata-containers/vmlinuz-5.7.9-74 -initrd /opt/kata/share/kata-containers/kata-containers-initrd_alpine_1.11.2-6_agent.initrd -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 debug panic=1 nr_cpus=4 agent.use_vsock=false scsi_mod.scan=none init=/usr/bin/kata-agent -pidfile /run/vc/vm/849df14c6065931adedb9d18bc9260a6d896f1814a8c5cfa239865772f1b7a5f/pid -D /run/vc/vm/849df14c6065931adedb9d18bc9260a6d896f1814a8c5cfa239865772f1b7a5f/qemu.log -smp 2,cores=1,threads=1,sockets=4,maxcpus=4

  ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1888601/+subscriptions

