* [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach
@ 2020-09-08  8:29 Lee Yarwood
  2020-09-08  8:40 ` [Bug 1894804] " Kashyap Chamarthy
                   ` (17 more replies)
  0 siblings, 18 replies; 19+ messages in thread
From: Lee Yarwood @ 2020-09-08  8:29 UTC (permalink / raw)
  To: qemu-devel

Public bug reported:

We are in the process of moving OpenStack CI across to Ubuntu 20.04
(Focal) as the underlying OS and are encountering the following issue
in any test attempting to detach disk devices from running QEMU
instances.

We can see a single DEVICE_DELETED event raised to libvirtd for the
/machine/peripheral/virtio-disk1/virtio-backend device, but we do not
see a second event for the actual disk frontend. As a result the
device is still marked as present in libvirt, while QEMU reports it as
missing in subsequent attempts to remove the device.
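
For context, the behaviour we rely on can be exercised directly over
QMP: device_del should eventually produce two DEVICE_DELETED events,
one for the virtio backend and one naming the frontend device. A
minimal sketch of that interaction, assuming an illustrative QMP
socket path and the virtio-disk1 alias used in the logs below (not
part of the CI setup itself):

# Minimal QMP sketch: issue device_del and wait for both DEVICE_DELETED
# events. Socket path and device id are examples only.
import json
import socket

QMP_SOCK = "/tmp/qmp.sock"   # hypothetical -qmp unix socket path
DEVICE_ID = "virtio-disk1"   # device alias used by libvirt for the volume

def recv_msg(chan):
    # QMP messages are newline-delimited JSON objects
    return json.loads(chan.readline())

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(QMP_SOCK)
    chan = sock.makefile("rw")
    recv_msg(chan)                                   # greeting banner
    chan.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    chan.flush()
    recv_msg(chan)                                   # {"return": {}}

    chan.write(json.dumps({"execute": "device_del",
                           "arguments": {"id": DEVICE_ID}}) + "\n")
    chan.flush()

    # A successful unplug should produce two DEVICE_DELETED events: one
    # for .../virtio-disk1/virtio-backend (path only) and a second one
    # that names the frontend via "device": "virtio-disk1". In the runs
    # below only the first event ever arrives.
    events = []
    while len(events) < 2:
        msg = recv_msg(chan)
        if msg.get("event") == "DEVICE_DELETED":
            events.append(msg["data"])
            print(msg["data"])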

The following log snippets can also be found in the following
pastebin, which is slightly easier to grok:

http://paste.openstack.org/show/797564/

https://review.opendev.org/#/c/746981/ libvirt: Bump
MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
nova-ceph-multistore

https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
[..]
ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu/instance-0000003a_log.txt

2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-system-x86_64 \
-name guest=instance-0000003a,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
-machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
-cpu qemu64 \
-m 128 \
-overcommit mem-lock=off \
-smp 1,sockets=1,cores=1,threads=1 \
-uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
-smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=39,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc \
-no-shutdown \
-boot strict=on \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
-blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
-object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
-blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
-object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
-blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
-netdev tap,fd=41,id=hostnet0 \
-device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
-add-fd set=2,fd=43 \
-chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
-device isa-serial,chardev=charserial0,id=serial0 \
-vnc 0.0.0.0:3 \
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
-object rng-random,id=objrng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/1 (label charserial0)

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/testr_results.html

tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
        Body: b''

# First attempt to detach the device by n-cpu

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt (gzipped)

29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}
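
For reference, the call n-cpu is making here boils down to libvirt's
detachDeviceFlags with the disk XML shown above. A hedged sketch of
that call via libvirt-python (the URI, domain name and XML are taken
from these logs; the flags and structure are illustrative rather than
nova's exact code):

# Rough equivalent of the detach request issued by nova-compute.
import libvirt

DISK_XML = """
<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
    <host name="158.69.70.115" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
  <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000003a")
# Ask libvirt to remove the device from the running guest and from the
# persistent definition.
flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
dom.detachDeviceFlags(DISK_XML, flags)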

# DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1/virtio-backend

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt (gzipped)

329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
329348  fd=-1                                                                          
329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
329350  len=79 ret=79 errno=0                                                          
329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event
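
One way to watch this from the host is to register for libvirt's
device-removed domain event, which libvirt only raises once it sees a
DEVICE_DELETED event that carries the device alias; the backend-only
event above does not qualify, hence "missing device in device deleted
event". A minimal sketch using libvirt-python (debugging aid only, not
part of the CI job):

# Subscribe to libvirt's device-removed event for the domain.
import libvirt

def on_device_removed(conn, dom, dev, _opaque):
    # Fired only after a DEVICE_DELETED event naming the device alias.
    print("device-removed from %s: %s" % (dom.name(), dev))

libvirt.virEventRegisterDefaultImpl()
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000003a")
conn.domainEventRegisterAny(dom,
                            libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
                            on_device_removed, None)
while True:
    libvirt.virEventRunDefaultImpl()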

# Second attempt to detach the device by n-cpu

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt (gzipped)

30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

# DeviceNotFound raised by QEMU

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt (gzipped)

332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
332485  fd=-1                                                                          
332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
332487  len=79 ret=79 errno=0                                                          
332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived
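
At this point QEMU considers virtio-disk1 gone (the repeat device_del
fails with DeviceNotFound) while libvirt still carries the disk in the
live XML. One way to confirm QEMU's side is to list the peripheral
container over the monitor passthrough; a sketch using the
libvirt-qemu bindings (debugging aid, not something nova does):

# Cross-check QEMU's view of the peripherals against libvirt's live XML.
import json
import libvirt
import libvirt_qemu

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000003a")

cmd = json.dumps({"execute": "qom-list",
                  "arguments": {"path": "/machine/peripheral"}})
reply = libvirt_qemu.qemuMonitorCommand(dom, cmd, 0)
# If "virtio-disk1" is absent here while the disk is still present in
# "virsh dumpxml", we are in the half-detached state described above.
print(json.dumps(json.loads(reply), indent=2))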

# n-cpu continues to retry the detach

30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

# n-cpu eventually gives up trying to detach the device

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt (gzipped)

31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall
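
For context, the loop that eventually raises this error is a bounded
retry around the detach call: re-issue the detach, poll whether the
disk is still attached, and fail once the attempts are exhausted. A
simplified sketch of that pattern (helper names, error handling and
intervals are illustrative, not nova's exact implementation):

# Bounded retry around device detach, roughly matching the traceback above.
import time

MAX_ATTEMPTS = 8
INTERVAL = 5          # seconds; the real loop uses a growing interval

class DeviceDetachFailed(Exception):
    pass

def detach_with_retry(guest, disk_xml, dev="vdb"):
    for _attempt in range(MAX_ATTEMPTS):
        try:
            guest.detach_device(disk_xml, live=True)   # hypothetical wrapper
        except Exception:
            pass           # e.g. the DeviceNotFound error seen above
        if not guest.has_disk(dev):                    # hypothetical helper
            return                                     # detach confirmed
        time.sleep(INTERVAL)
    raise DeviceDetachFailed(
        "Device detach failed for %s: Unable to detach the device from "
        "the live config." % dev)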

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New

  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall
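
  (For anyone not familiar with the nova side: the looping call in that
  traceback amounts to the retry pattern sketched below. This is not the
  nova/oslo.service code, just an illustration with made-up helper names:
  re-send the detach, re-check the live config, and raise
  DeviceDetachFailed once the retries are exhausted.)

    import time


    class DeviceDetachFailed(Exception):
        """Raised when the device is still attached after all retries."""


    def detach_with_retries(detach_device_once, device_still_attached,
                            device_name, max_attempts=8, sleep_between=5):
        # detach_device_once(): re-sends the detach request to libvirt
        # (ultimately a device_del on the QEMU monitor).
        # device_still_attached(): re-reads the live domain XML and checks
        # whether the <disk> element is still present.
        for _ in range(max_attempts):
            detach_device_once()
            time.sleep(sleep_between)   # give DEVICE_DELETED time to arrive
            if not device_still_attached():
                return                  # libvirt dropped it from the live XML
        raise DeviceDetachFailed(
            "Device detach failed for %s: Unable to detach the device from "
            "the live config." % device_name)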

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
@ 2020-09-08  8:40 ` Kashyap Chamarthy
  2020-09-08  9:36 ` Kashyap Chamarthy
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Kashyap Chamarthy @ 2020-09-08  8:40 UTC (permalink / raw)
  To: qemu-devel

Just to expand a little more on the "second DEVICE_DELETED" event:

Apparently this is an "oddity" of VirtIO devices where we see _two_
DEVICE_DELETED events upon hot-unplugging a device.  So, ideally, we
should see the following two events, but we're only seeing the first
one:

(1) "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
 
(2) "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
  2020-09-08  8:40 ` [Bug 1894804] " Kashyap Chamarthy
@ 2020-09-08  9:36 ` Kashyap Chamarthy
  2020-09-08 10:19 ` Daniel Berrange
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Kashyap Chamarthy @ 2020-09-08  9:36 UTC (permalink / raw)
  To: qemu-devel

Attaching the QMP requests and responses filtered out from the giant
libvirtd.log.

This file was generated using:

    $> grep -Ei '(QEMU_MONITOR_SEND_MSG|QEMU_MONITOR_RECV_)' libvirtd.log |& 
       tee QMP_requests_and_replies_from_libvirtd.log

(And then I `gzip`ed it; compressed: 3.1 MB, uncompressed: 39 MB)

** Attachment added: "QMP_requests_and_replies_from_libvirtd.log.gz"
   https://bugs.launchpad.net/qemu/+bug/1894804/+attachment/5408622/+files/QMP_requests_and_replies_from_libvirtd.log.gz
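
As a rough way to inspect that attachment, the hypothetical helper below
(not part of libvirt or nova; it assumes the
`QEMU_MONITOR_RECV_EVENT ... event={...}` line format quoted in this bug)
prints every DEVICE_DELETED event and whether it carried the "device" key:

    #!/usr/bin/env python3
    # Scan the filtered QMP log attached above and report each
    # DEVICE_DELETED event, noting whether it contains the "device" key
    # libvirt needs to finish the detach.
    import json
    import sys

    def device_deleted_events(path):
        with open(path, errors="replace") as fh:
            for line in fh:
                if ("QEMU_MONITOR_RECV_EVENT" not in line
                        or "DEVICE_DELETED" not in line):
                    continue
                try:
                    event = json.loads(line.split("event=", 1)[1])
                except (IndexError, ValueError):
                    continue                    # wrapped or truncated line
                if event.get("event") == "DEVICE_DELETED":
                    yield event

    if __name__ == "__main__":
        for ev in device_deleted_events(sys.argv[1]):
            data = ev.get("data", {})
            print("%s  device=%s  path=%s"
                  % (ev.get("timestamp", {}).get("seconds"),
                     data.get("device", "<missing>"),
                     data.get("path")))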

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New

  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions



* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
  2020-09-08  8:40 ` [Bug 1894804] " Kashyap Chamarthy
  2020-09-08  9:36 ` Kashyap Chamarthy
@ 2020-09-08 10:19 ` Daniel Berrange
  2020-09-16 18:19 ` sean mooney
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Daniel Berrange @ 2020-09-08 10:19 UTC (permalink / raw)
  To: qemu-devel

Note that there are many VMs running, so it is important to isolate the
logs for just one of them by matching on mon=0xXXXXXXXX.

For example, consider this:

$ grep 0x7f021c0b9c70 libvirtd.log | grep -E 'QEMU_MONITOR_(RECV_REPLY|RECV_EVENT|SEND_MSG)'

We can see the "device_del" request:

2020-09-03 19:53:47.114+0000: 65331: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f021c0b9c70 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}
2020-09-03 19:53:47.116+0000: 65328: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f021c0b9c70 reply={"return": {}, "id": "libvirt-367"}


and then the first DEVICE_DELETED event:

2020-09-03 19:53:52.539+0000: 65328: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f021c0b9c70 event={"timestamp": {"seconds": 1599162832, "microseconds": 539721}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}


The second, important DEVICE_DELETED event never arrives. OpenStack hopelessly just retries device_del forever.
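
For larger logs the same per-monitor filtering can be scripted. Below is a
minimal sketch in Python; it assumes only the QEMU_MONITOR_* log line format
shown above, and the helper names are made up for illustration:

#!/usr/bin/env python3
# Group QMP send/reply/event lines from a libvirtd debug log by monitor
# pointer, so each guest's traffic can be read in isolation.
import re
import sys
from collections import defaultdict

PATTERN = re.compile(
    r"QEMU_MONITOR_(SEND_MSG|RECV_REPLY|RECV_EVENT): mon=(0x[0-9a-f]+) (.*)")

def group_by_monitor(path):
    traffic = defaultdict(list)
    with open(path, errors="replace") as log:
        for line in log:
            m = PATTERN.search(line)
            if m:
                kind, mon, payload = m.groups()
                traffic[mon].append((kind, payload.rstrip()))
    return traffic

if __name__ == "__main__":
    for mon, messages in group_by_monitor(sys.argv[1]).items():
        print("=== monitor %s (%d messages) ===" % (mon, len(messages)))
        for kind, payload in messages:
            print("  %-10s %s" % (kind, payload))

Running it against libvirtd.log prints each monitor's QMP traffic in order,
which makes it easy to spot the guest whose DEVICE_DELETED sequence is
incomplete.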

Note that in the normal case the first DEVICE_DELETED event (for
"/machine/peripheral/virtio-disk1/virtio-backend") gets emitted from the
VCPU thread context. The second DEVICE_DELETED event (for
"/machine/peripheral/virtio-disk1/") is emitted from the RCU thread, and
that one is missing here. The second one is the event that libvirt needs
in order to determine that the unplug is complete.
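
For reference, the completion condition libvirt waits for can be expressed
directly at the QMP level. The following is a minimal sketch against a raw
QMP UNIX socket (a made-up standalone helper, not libvirt's implementation;
for a libvirt-managed guest the monitor socket is owned by libvirtd, so
trying this requires a separate -qmp chardev on the domain):

import json
import socket

def detach_and_wait(qmp_path, dev_id):
    """Send device_del for dev_id and block until the DEVICE_DELETED
    event for the device itself arrives. The first event, for the
    virtio-backend child, carries only a "path" and no "device"."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(qmp_path)
    rfile = sock.makefile("r", encoding="utf-8")

    def send(cmd, args=None):
        msg = {"execute": cmd}
        if args:
            msg["arguments"] = args
        sock.sendall((json.dumps(msg) + "\r\n").encode("utf-8"))

    json.loads(rfile.readline())        # QMP greeting
    send("qmp_capabilities")
    json.loads(rfile.readline())        # {"return": {}}
    send("device_del", {"id": dev_id})

    while True:
        msg = json.loads(rfile.readline())
        if msg.get("event") != "DEVICE_DELETED":
            continue                    # skip the device_del reply
        if msg.get("data", {}).get("device") == dev_id:
            return msg["data"]          # unplug really completed

With the affected build the loop above never returns for virtio-disk1: only
the virtio-backend event is emitted, and it has no "device" member, which is
exactly what the libvirtd log above records as "missing device in device
deleted event".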

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions



* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (2 preceding siblings ...)
  2020-09-08 10:19 ` Daniel Berrange
@ 2020-09-16 18:19 ` sean mooney
  2020-09-21 11:26 ` Christian Ehrhardt 
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: sean mooney @ 2020-09-16 18:19 UTC (permalink / raw)
  To: qemu-devel

Hmm, I was hoping to indicate that this affects Focal in some way, but I'm not
sure how to do that. This issue happens with the Ubuntu 20.04 build of QEMU 4.2.
It does not seem to happen with the CentOS 8 build of the same QEMU, so I don't
know whether there is a delta in the packaging, or whether it is just that the
OpenStack CI does more of its testing on Ubuntu 20.04 and the issue therefore
shows up there more often.


** Also affects: ubuntu
   Importance: Undecided
       Status: New

** Package changed: ubuntu => qemu (Ubuntu)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  New

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to gork:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1/virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event
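
  # For comparison: the completion event that never arrives here would look
  # roughly like the sketch below. Only this second event carries a "device"
  # field; the event above has just a "path", which is why libvirt logs
  # "missing device in device deleted event" and does not treat the disk as
  # detached.
  #
  #   {"event": "DEVICE_DELETED",
  #    "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}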

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived
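
  # At this point the frontend is presumably already gone on QEMU's side
  # (hence the DeviceNotFound reply) while libvirt still tracks the disk.
  # One way to cross-check from the host is the same qom-list query libvirt
  # issues itself (hypothetical manual invocation):
  #
  #   $ virsh qemu-monitor-command instance-0000003a --pretty \
  #       '{"execute":"qom-list","arguments":{"path":"/machine/peripheral"}}'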

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (3 preceding siblings ...)
  2020-09-16 18:19 ` sean mooney
@ 2020-09-21 11:26 ` Christian Ehrhardt 
  2020-09-24 13:03 ` Christian Ehrhardt 
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Christian Ehrhardt  @ 2020-09-21 11:26 UTC (permalink / raw)
  To: qemu-devel

Hi,
Interesting bug report - thanks Lee, Kashyap and Sean, as well as Danil for taking a look already.

If this always failed, no unplug would ever work, which I know can't be
right as I test that. So we need to find what is different...


@OpenStack people - does this reliably trigger on the very first hot-detach for you, or is it more like "happening in one of a million detaches triggered by the CI"?


First I simplified the case from "massive OpenStack CI" down to just a few commands and small XMLs...
I tried to cut Ceph out of the equation and compared hot removal between Bionic and Focal qemu/libvirt when detaching a local image file.

$ qemu-img create -f qcow2 /var/lib/uvtool/libvirt/images/testdisk.qcow 20m
Disk:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/uvtool/libvirt/images/testdisk.qcow'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
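
For completeness, the disk can be hot-attached before each detach test with
something like this (sketch; assumes the XML above is saved as testdisk.xml
and the guest is named f-on-b as in the detach command below):

 $ virsh attach-device f-on-b testdisk.xml --live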

The systems have just this one guest each and the debug log config is:
 /etc/libvirt/libvirtd.conf:
 > log_filters="3:qemu 3:libvirt 4:object 2:json 4:event 3:util"  
 > log_outputs="2:file:/var/log/libvirtd.log"
 $ rm /var/log/libvirtd.log
 $ systemctl restart libvirtd

Detach via:
 $ virsh detach-disk f-on-b vdc --live

I compared the monitor streams in those two cases, and both deliver DEVICE_DELETED for
/machine/peripheral/virtio-disk2/virtio-backend as well as /machine/peripheral/virtio-disk2.
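
For anyone wanting to watch those events without the full libvirtd debug log,
libvirt can also stream them directly, e.g.:

 $ virsh qemu-monitor-event f-on-b --event DEVICE_DELETED --loop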


@Jamespage could you get me a system with at least one auxiliary Ceph disk that I can attach/detach, to check whether this might be in any way Ceph-specific?
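
(In case it helps to prepare such a reproducer: a minimal rbd-backed
hot-attach/detach could look roughly like the sketch below - the pool/image
name, monitor host and cephx secret UUID are placeholders, loosely modeled
on the XML from the original report.)

 $ cat rbd-disk.xml
 <disk type='network' device='disk'>
   <driver name='qemu' type='raw'/>
   <auth username='cinder'>
     <secret type='ceph' uuid='REPLACE-WITH-LIBVIRT-SECRET-UUID'/>
   </auth>
   <source protocol='rbd' name='volumes/testvol'>
     <host name='CEPH-MON-HOST' port='6789'/>
   </source>
   <target dev='vdc' bus='virtio'/>
 </disk>
 $ virsh attach-device f-on-b rbd-disk.xml --live
 $ virsh detach-disk f-on-b vdc --live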


Details in case we need them for comparison later on ...:

## Bionic ##

libvirt checks devices before removal:

2020-09-21 11:00:19.287+0000: 5755: info : qemuMonitorSend:1076 : QEMU_MONITOR_SEND_MSG: mon=0x7fdf94001430 msg={"execute":"qom-list","arguments":{"path":"/machine/peripheral"},"id":"libvirt-12"}
2020-09-21 11:00:19.288+0000: 5686: info : qemuMonitorJSONIOProcessLine:213 : QEMU_MONITOR_RECV_REPLY: mon=0x7fdf94001430 reply={"return": [{"name": "virtio-disk2", "type": "child<virtio-blk-pci>"}, {"name": "virtio-disk1", "type": "child<virtio-blk-pci>"}, {"name": "virtio-disk0", "type": "child<virtio-blk-pci>"}, {"name": "video0", "type": "child<cirrus-vga>"}, {"name": "serial0", "type": "child<isa-serial>"}, {"name": "balloon0", "type": "child<virtio-balloon-pci>"}, {"name": "net0", "type": "child<virtio-net-pci>"}, {"name": "usb", "type": "child<piix3-usb-uhci>"}, {"name": "type", "type": "string"}], "id": "libvirt-12"}

Then libvirt sends the removal command:

2020-09-21 11:00:48.672+0000: 5691: info : qemuMonitorSend:1076 : QEMU_MONITOR_SEND_MSG: mon=0x7fdf94001430 msg={"execute":"device_del","arguments":{"id":"virtio-disk2"},"id":"libvirt-13"}
2020-09-21 11:00:48.673+0000: 5686: info : qemuMonitorJSONIOProcessLine:213 : QEMU_MONITOR_RECV_REPLY: mon=0x7fdf94001430 reply={"return": {}, "id": "libvirt-13"}
2020-09-21 11:00:48.721+0000: 5686: info : qemuMonitorJSONIOProcessLine:208 : QEMU_MONITOR_RECV_EVENT: mon=0x7fdf94001430 event={"timestamp": {"seconds": 1600686048, "microseconds": 720908}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk2/virtio-backend"}}
2020-09-21 11:00:48.723+0000: 5686: info : qemuMonitorJSONIOProcessLine:208 : QEMU_MONITOR_RECV_EVENT: mon=0x7fdf94001430 event={"timestamp": {"seconds": 1600686048, "microseconds": 723091}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk2", "path": "/machine/peripheral/virtio-disk2"}}

^^ both DEVICE_DELETED events occur in response to the "device_del".

vv Now libvirt also tries to remove the backing drive, but that already happened implicitly

2020-09-21 11:00:48.723+0000: 5691: info : qemuMonitorSend:1076 : QEMU_MONITOR_SEND_MSG: mon=0x7fdf94001430 msg={"execute":"human-monitor-command","arguments":{"command-line":"drive_del drive-virtio-disk2"},"id":"libvirt-14"}
2020-09-21 11:00:48.724+0000: 5686: info : qemuMonitorJSONIOProcessLine:213 : QEMU_MONITOR_RECV_REPLY: mon=0x7fdf94001430 reply={"return": "Device 'drive-virtio-disk2' not found\r\n", "id": "libvirt-14"}

In the follow-up query virtio-disk2 has been removed:

2020-09-21 11:00:48.875+0000: 5691: info : qemuMonitorSend:1076 : QEMU_MONITOR_SEND_MSG: mon=0x7fdf94001430 msg={"execute":"qom-list","arguments":{"path":"/machine/peripheral"},"id":"libvirt-15"}
2020-09-21 11:00:48.876+0000: 5686: info : qemuMonitorJSONIOProcessLine:213 : QEMU_MONITOR_RECV_REPLY: mon=0x7fdf94001430 reply={"return": [{"name": "virtio-disk1", "type": "child<virtio-blk-pci>"}, {"name": "virtio-disk0", "type": "child<virtio-blk-pci>"}, {"name": "video0", "type": "child<cirrus-vga>"}, {"name": "serial0", "type": "child<isa-serial>"}, {"name": "balloon0", "type": "child<virtio-balloon-pci>"}, {"name": "net0", "type": "child<virtio-net-pci>"}, {"name": "usb", "type": "child<piix3-usb-uhci>"}, {"name": "type", "type": "string"}], "id": "libvirt-15"}


## Focal ##

libvirt checks devices before removal:

2020-09-21 11:00:20.856+0000: 6395: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f463800b3e0 msg={"execute":"qom-list","arguments":{"path":"/machine/peripheral"},"id":"libvirt-12"}
2020-09-21 11:00:20.857+0000: 6319: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f463800b3e0 reply={"return": [{"name": "type", "type": "string"}, {"name": "pci.7", "type": "child<pcie-root-port>"}, {"name": "pci.4", "type": "child<pcie-root-port>"}, {"name": "serial0", "type": "child<isa-serial>"}, {"name": "pci.1", "type": "child<pcie-root-port>"}, {"name": "virtio-disk1", "type": "child<virtio-blk-pci>"}, {"name": "video0", "type": "child<qxl-vga>"}, {"name": "balloon0", "type": "child<virtio-balloon-pci>"}, {"name": "pci.6", "type": "child<pcie-root-port>"}, {"name": "usb", "type": "child<qemu-xhci>"}, {"name": "pci.3", "type": "child<pcie-root-port>"}, {"name": "virtio-disk0", "type": "child<virtio-blk-pci>"}, {"name": "pci.5", "type": "child<pcie-root-port>"}, {"name": "channel0", "type": "child<virtserialport>"}, {"name": "pci.2", "type": "child<pcie-root-port>"}, {"name": "virtio-disk2", "type": "child<virtio-blk-pci>"}, {"name": "net0", "type": "child<virtio-net-pci>"}, {"name": "virtio-serial0", "type": "child<virtio-serial-pci>"}], "id": "libvirt-12"}
2020-09-21 11:00:46.509+0000: 6319: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f463800b3e0 event={"timestamp": {"seconds": 1600686046, "microseconds": 508981}, "event": "RTC_CHANGE", "data": {"offset": 1}}

# send device_del

2020-09-21 11:00:47.928+0000: 6323: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f463800b3e0 msg={"execute":"device_del","arguments":{"id":"virtio-disk2"},"id":"libvirt-13"}
# immediate reply to device_del
2020-09-21 11:00:47.929+0000: 6319: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f463800b3e0 reply={"return": {}, "id": "libvirt-13"}

# check devices again right after (disk is still there)

2020-09-21 11:00:52.929+0000: 6323: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f463800b3e0 msg={"execute":"qom-list","arguments":{"path":"/machine/peripheral"},"id":"libvirt-14"}
2020-09-21 11:00:52.930+0000: 6319: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f463800b3e0 reply={"return": [{"name": "type", "type": "string"}, {"name": "pci.7", "type": "child<pcie-root-port>"}, {"name": "pci.4", "type": "child<pcie-root-port>"}, {"name": "serial0", "type": "child<isa-serial>"}, {"name": "pci.1", "type": "child<pcie-root-port>"}, {"name": "virtio-disk1", "type": "child<virtio-blk-pci>"}, {"name": "video0", "type": "child<qxl-vga>"}, {"name": "balloon0", "type": "child<virtio-balloon-pci>"}, {"name": "pci.6", "type": "child<pcie-root-port>"}, {"name": "usb", "type": "child<qemu-xhci>"}, {"name": "pci.3", "type": "child<pcie-root-port>"}, {"name": "virtio-disk0", "type": "child<virtio-blk-pci>"}, {"name": "pci.5", "type": "child<pcie-root-port>"}, {"name": "channel0", "type": "child<virtserialport>"}, {"name": "pci.2", "type": "child<pcie-root-port>"}, {"name": "virtio-disk2", "type": "child<virtio-blk-pci>"}, {"name": "net0", "type": "child<virtio-net-pci>"}, {"name": "virtio-serial0", "type": "child<virtio-serial-pci>"}], "id": "libvirt-14"}

A bit later both DEVICE_DELETED events show up:

2020-09-21 11:00:54.118+0000: 6319: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f463800b3e0 event={"timestamp": {"seconds": 1600686054, "microseconds": 117896}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk2/virtio-backend"}}
2020-09-21 11:00:54.120+0000: 6319: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f463800b3e0 event={"timestamp": {"seconds": 1600686054, "microseconds": 120016}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk2", "path": "/machine/peripheral/virtio-disk2"}}


Some follow-on cleanup (the equivalent of the HMP drive_del command on Bionic):

2020-09-21 11:00:54.122+0000: 6409: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f463800b3e0 msg={"execute":"blockdev-del","arguments":{"node-name":"libvirt-1-format"},"id":"libvirt-15"}
2020-09-21 11:00:54.126+0000: 6319: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f463800b3e0 reply={"return": {}, "id": "libvirt-15"}
2020-09-21 11:00:54.126+0000: 6409: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f463800b3e0 msg={"execute":"blockdev-del","arguments":{"node-name":"libvirt-1-storage"},"id":"libvirt-16"}
2020-09-21 11:00:54.129+0000: 6319: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f463800b3e0 reply={"return": {}, "id": "libvirt-16"}
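
After those two blockdev-del calls the backing nodes should be gone as well;
if needed that can be cross-checked with e.g. (hypothetical manual invocation):

 $ virsh qemu-monitor-command f-on-b --pretty \
     '{"execute":"query-named-block-nodes"}'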

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  New

  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall
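
  # The _do_wait_and_retry_detach() helper referenced in the traceback
  # above is, in outline, a retry loop like the following simplified
  # sketch (not nova's actual code; has_device()/detach_device() stand
  # in for the real helpers):

    import time

    class DeviceDetachFailed(Exception):
        pass

    def wait_and_retry_detach(guest, xml, alias, attempts=8, interval=2):
        for attempt in range(1, attempts + 1):
            if not guest.has_device(alias):        # hypothetical helper
                return                             # device is gone, done
            guest.detach_device(xml)               # re-issues device_del
            time.sleep(interval * attempt)         # widening interval
        raise DeviceDetachFailed(
            "Device detach failed for vdb: "
            "Unable to detach the device from the live config.")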

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (4 preceding siblings ...)
  2020-09-21 11:26 ` Christian Ehrhardt 
@ 2020-09-24 13:03 ` Christian Ehrhardt 
  2020-09-25 15:18 ` James Page
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Christian Ehrhardt  @ 2020-09-24 13:03 UTC (permalink / raw)
  To: qemu-devel

As outlined before, the time from device_del to DEVICE_DELETED is
indeed "a bit longer" on Focal (from ~0.1s to ~6s), but other than that
I couldn't find anything else yet.

Because of [1] in a related bug we were wondering whether it might
depend on (over)load.
I consumed disk and CPU in a controlled fashion (stress-ng --cpu 8
--io 4 --hdd 4, once in the container the load lives in and once on the
host) and ran attach/detach loops on top of that, but in all of two
hundred retries the sequence was "->device_del <-DEVICE_DELETED
<-DEVICE_DELETED ->blockdev-del ->blockdev-del".
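
For anyone who wants to watch the same sequence by hand, here is a
minimal QMP sketch (assuming a test guest started with an extra
"-qmp unix:/tmp/qmp.sock,server,nowait" monitor; the device id is the
one from this report) that issues device_del and prints every
DEVICE_DELETED event it sees:

  import json
  import socket

  SOCK = "/tmp/qmp.sock"   # assumed extra QMP socket, see above
  DEV = "virtio-disk1"

  s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  s.connect(SOCK)
  f = s.makefile("rw")

  def send(cmd, **args):
      f.write(json.dumps({"execute": cmd, "arguments": args}) + "\n")
      f.flush()

  json.loads(f.readline())   # QMP greeting
  send("qmp_capabilities")
  json.loads(f.readline())   # {"return": {}}

  send("device_del", id=DEV)
  # In the good case two DEVICE_DELETED events follow: one for
  # /machine/peripheral/virtio-disk1/virtio-backend and one that names
  # "device": "virtio-disk1" itself.  This bug is about the second one
  # never arriving.
  while True:
      msg = json.loads(f.readline())
      if msg.get("event") == "DEVICE_DELETED":
          print(msg["data"])
          if msg["data"].get("device") == DEV:
              break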

James Page was doing tests (thanks) with a real OpenStack and Ceph
deployment but couldn't - so far - trigger the issue either. He will
likely update the bug later as well; he is currently trying to ramp up
concurrency to make hitting the issue more likely.

[1] https://bugs.launchpad.net/nova/+bug/1882521/comments/8

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  New

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to grok:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device  by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}
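
  # For reference, the detach_device() call above is roughly equivalent
  # to a live detachDeviceFlags() call against the domain; a standalone
  # sketch (XML shortened) would be:

    import libvirt

    DISK_XML = """
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
        <host name="158.69.70.115" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000003a")
    # Live detach only: libvirt issues device_del on the monitor and
    # then waits for a DEVICE_DELETED event naming virtio-disk1.
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)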

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1
  /virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (5 preceding siblings ...)
  2020-09-24 13:03 ` Christian Ehrhardt 
@ 2020-09-25 15:18 ` James Page
  2020-09-29  7:21 ` Christian Ehrhardt 
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: James Page @ 2020-09-25 15:18 UTC (permalink / raw)
  To: qemu-devel

I'm not able to reproduce this at this point in time. As the version of
libvirt/qemu is the same, I deployed an Ussuri cloud on Focal and ran
block device add/remove with concurrency for several hundred
iterations, but I was not able to trigger a detach failure.
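
For reference, that kind of concurrent add/remove loop can be
approximated with a few threads driving the OpenStack CLI (server and
volume names below are placeholders, not the ones used in this run):

  import subprocess
  from concurrent.futures import ThreadPoolExecutor

  SERVERS = ["test-vm-%d" % i for i in range(4)]   # placeholder names
  VOLUMES = ["test-vol-%d" % i for i in range(4)]  # placeholder names

  def attach_detach_loop(server, volume, iterations=100):
      for _ in range(iterations):
          # A real test should poll the attachment status between the
          # two calls instead of relying on the CLI returning quickly.
          subprocess.run(["openstack", "server", "add", "volume",
                          server, volume], check=True)
          subprocess.run(["openstack", "server", "remove", "volume",
                          server, volume], check=True)

  with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
      for srv, vol in zip(SERVERS, VOLUMES):
          pool.submit(attach_detach_loop, srv, vol)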

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  New

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to grok:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device  by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1
  /virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event
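
  For reference, a clean virtio-blk unplug raises two DEVICE_DELETED events: first
  the one for the /virtio-backend child seen above, then a second one carrying the
  "device" id of the frontend itself, which is the event libvirt waits for before
  dropping the disk from its live config. The minimal sketch below is illustrative
  only; it assumes a QMP socket exported at /tmp/qmp.sock purely for this example
  and reuses the virtio-disk1 id from the logs:

    # Minimal QMP sketch of the exchange libvirt expects on a successful unplug.
    import json
    import socket

    def qmp_recv(f):
        # QMP emits one JSON object per line.
        return json.loads(f.readline())

    def qmp_send(f, msg):
        f.write(json.dumps(msg) + "\n")
        f.flush()

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/tmp/qmp.sock")          # assumed extra "-qmp unix:..." socket
    f = s.makefile("rw")
    qmp_recv(f)                         # greeting banner
    qmp_send(f, {"execute": "qmp_capabilities"})
    qmp_recv(f)                         # {"return": {}}
    qmp_send(f, {"execute": "device_del", "arguments": {"id": "virtio-disk1"}})
    # A working detach yields the empty reply plus *two* events:
    #   DEVICE_DELETED data={"path": ".../virtio-disk1/virtio-backend"}
    #   DEVICE_DELETED data={"device": "virtio-disk1",
    #                        "path": "/machine/peripheral/virtio-disk1"}
    # In the failing runs only the first event ever arrives.
    for _ in range(3):
        print(qmp_recv(f))              # blocks until each message arrives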

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived
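
  At this point QEMU has already forgotten the 'virtio-disk1' frontend, while
  libvirt, never having received the final DEVICE_DELETED event, still carries
  the disk in the live definition. A quick way to see that disagreement from the
  host is something like the sketch below (illustrative only; it assumes the
  python3-libvirt bindings, including the libvirt_qemu module, are available):

    # Compare libvirt's live config with QEMU's own view of the block devices.
    import json
    import xml.etree.ElementTree as ET

    import libvirt
    import libvirt_qemu

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000003a")

    # Disks libvirt believes are still attached to the running guest.
    tree = ET.fromstring(dom.XMLDesc(0))
    print("libvirt live config:",
          [t.get("dev") for t in tree.findall("./devices/disk/target")])

    # Block devices QEMU still knows about; the virtio-disk1 frontend is gone.
    reply = libvirt_qemu.qemuMonitorCommand(dom, '{"execute": "query-block"}', 0)
    print("qemu query-block:",
          [b.get("qdev") or b.get("device") for b in json.loads(reply)["return"]])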

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall
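
  The retries above come from nova's _do_wait_and_retry_detach helper, which keeps
  re-sending the detach with a roughly growing interval until the device disappears
  from the live config, and finally raises DeviceDetachFailed. Stripped of the
  oslo.service machinery it boils down to something like this simplified sketch
  (not nova's actual code; names and intervals here are illustrative):

    import time

    class DeviceDetachFailed(Exception):
        pass

    def detach_with_retry(detach_once, device_gone, dev="vdb",
                          max_attempts=8, base_interval=2):
        for attempt in range(1, max_attempts + 1):
            detach_once()                         # re-send the detach via libvirt
            time.sleep(base_interval * attempt)   # back off a bit more each time
            if device_gone():                     # e.g. dev no longer in live XML
                return
        raise DeviceDetachFailed(
            "Device detach failed for %s: Unable to detach the device "
            "from the live config." % dev)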

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (6 preceding siblings ...)
  2020-09-25 15:18 ` James Page
@ 2020-09-29  7:21 ` Christian Ehrhardt 
  2020-09-30 18:29 ` Lee Yarwood
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Christian Ehrhardt  @ 2020-09-29  7:21 UTC (permalink / raw)
  To: qemu-devel

Thanks James, but now I'm unsure where to go from here, as it isn't
reproducible despite the many tries at different scales that James and I made.

@Sean/Lee
Since you wondered if it might be due to the Ubuntu delta on top of 4.2 - there are two things we could compare Ubuntu's qemu against:
1. qemu 4.2 as released by upstream
2. qemu 4.2 as built in CentOS (they have some delta as well)
I'm not sure I can provide you #2 easily, and #1 will always need a bit of delta to build and integrate correctly. All of that would still be doable, but in general, if I provided PPA builds, could you try those in your environment to !reliably! trigger the issue, allowing us to play bisect-ping-pong? The word "reliable" is important here, as we'd need to sort out builds/patches by a reliable yes/no on each step.

@Sean/Lee
- which "the centos 8 build of the same qemu" version is that exactly? I might take a look at comparing the patches applied. We are at 4.2.1 already, the latest I found there was 4.2.0-29.el8.3.x86_64.rpm which already is the advanced-virt version (otherwise 2.12 based).

@Sean/Lee
All of the following suggested approaches depend on whether you can test this reliably with different qemu PPA builds:
- qemu 4.2 (as upstream) vs the usual Ubuntu build -> find the offending patch in our delta
- test Ubuntu 20.10, which has qemu 5.0 and libvirt 6.6 -> if it is fixed there, find by which change
- qemu 4.2 Ubuntu vs qemu 4.2 as in CentOS (but built for Ubuntu) -> if the latter works better, then let us find by which (set of) patches.


I was checking the delta on the CentOS 8 advanced-virt qemu 4.2, as that was reported
to (maybe) work better. I was comparing which patches are applied there that are not
in Ubuntu, and which of those might be related.
Among several individual fixes for various issues, the biggest patch sets
are feature backports for enhanced LUKS/backup/snapshot handling,
multifd migration/cancel, block-mirror, HMAT changes, virtiofs, qemu-img zero
write, arm time handling and some related build-time self tests.
Due to the nature of these changes, some touch block handling by affecting block/job/hotplug code, but they might only do so by accident; nothing clearly addresses the issue that
was reported. And even in the list below most seem unrelated - so, as Sean assumed, maybe it just isn't exercised enough on CentOS to be seen there?

eca0f3524a4eb57d03a56b0cbcef5527a0981ce4 backup: don't acquire aio_context in backup_clean
58226634c4b02af7b10862f7fbd3610a344bfb7f backup: Improve error for bdrv_getlength() failure
958a04bd32af18d9a207bcc78046e56a202aebc2 backup: Make sure that source and target size match
7b8e4857426f2e2de2441749996c6161b550bada block: Add flags to bdrv(_co)_truncate()
92b92799dc8662b6f71809100a4aabc1ae408ebb block: Add flags to BlockDriver.bdrv_co_truncate()
087ab8e775f48766068e65de1bc99d03b40d1670 block: always fill entire LUKS header space with zeros
8c6242b6f383e43fd11d2c50f8bcdd2bba1100fc block-backend: Add flags to blk_truncate()
564806c529d4e0acad209b1e5b864a8886092f1f block-backend: Reorder flush/pdiscard function definitions
0abf2581717a19d9749d5c2ff8acd0ac203452c2 block/backup-top: Don't acquire context while dropping top
1de6b45fb5c1489b450df7d1a4c692bba9678ce6 block: bdrv_reopen() with backing file in different AioContext
e1d7f8bb1ec0c6911dcea81641ce6139dbded02d block.c: adding bdrv_co_delete_file
69032253c33ae1774233c63cedf36d32242a85fc block/curl: HTTP header field names are case insensitive
7788a319399f17476ff1dd43164c869e320820a2 block/curl: HTTP header fields allow whitespace around values
91005a495e228ebd7e5e173cd18f952450eef82d blockdev: Acquire AioContext on dirty bitmap functions
471ded690e19689018535e3f48480507ed073e22 blockdev: fix coding style issues in drive_backup_prepare
3ea67e08832775a28d0bd2795f01bc77e7ea1512 blockdev: honor bdrv_try_set_aio_context() context requirements
c6996cf9a6c759c29919642be9a73ac64b38301b blockdev: Promote several bitmap functions to non-static
377410f6fb4f6b0d26d4a028c20766fae05de17e blockdev: Return bs to the proper context on snapshot abort
bb4e58c6137e80129b955789dd4b66c1504f20dc blockdev: Split off basic bitmap operations for qemu-img
5b7bfe515ecbd584b40ff6e41d2fd8b37c7d5139 blockdev: unify qmp_blockdev_backup and blockdev-backup transaction paths
2288ccfac96281c316db942d10e3f921c1373064 blockdev: unify qmp_drive_backup and drive-backup transaction paths
7f16476fab14fc32388e0ebae793f64673848efa block: Fix blk->in_flight during blk_wait_while_drained()
30dd65f307b647eef8156c4a33bd007823ef85cb block: Fix cross-AioContext blockdev-snapshot
eeea1faa099f82328f5831cf252f8ce0a59a9287 block: Fix leak in bdrv_create_file_fallback()
fd17146cd93d1704cd96d7c2757b325fc7aac6fd block: Generic file creation fallback
fbb92b6798894d3bf62fe3578d99fa62c720b242 block: Increase BB.in_flight for coroutine and sync interfaces
17e1e2be5f9e84e0298e28e70675655b43e225ea block: Introduce 'bdrv_reopen_commit_post' step
9bffae14df879255329473a7bd578643af2d4c9c block: introducing 'bdrv_co_delete_file' interface
c7a0f2be8f95b220cdadbba9a9236eaf115951dc block: Make bdrv_get_cumulative_perm() public
ef893b5c84f3199d777e33966dc28839f71b1a5c block: Make it easier to learn which BDS support bitmaps
78c81a3f108870d325b0a39d88711366afe6f703 block/nbd: Fix hang in .bdrv_close()
b92902dfeaafbceaf744ab7473f2d070284f6172 block: pass BlockDriver reference to the .bdrv_co_create
65eb7c85a3e62529e2bad782e94d5a7b11dd5a92 block/qcow2: Move bitmap reopen into bdrv_reopen_commit_post
d29d3d1f80b3947fb26e7139645c83de66d146a9 block: Relax restrictions for blockdev-snapshot
5a5e7f8cd86b7ced0732b1b6e28c82baa65b09c9 block: trickle down the fallback image creation function use to the block drivers
955c7d6687fefcd903900a1e597fcbc896c661cd block: truncate: Don't make backing file data visible
1bba30da24e1124ceeb0693c81382a0d77e20ca5 crypto.c: cleanup created file when block_crypto_co_create_opts_luks fails
87ca3b8fa615b278b33cabf9ed22b3f44b5214ba file-posix: Drop hdev_co_create_opts()
2f0c6e7a650de133eccd94e9bb6cf7b2070f07f1 file-posix: Support BDRV_REQ_ZERO_WRITE for truncate
89b6fc45614bb45dcd58f1590415afe5c2791abd hmp: Allow using qdev ID for qemu-io command
80f0900905b555f00d644894c786b6d66ac2e00e iscsi: Drop iscsi_co_create_opts()
0501e1aa1d32a6e02dd06a79bba97fbe9d557cb5 hw/pci/pcie: Forbid hot-plug if it's disabled on the slot
0dabc0f6544f2c0310546f6d6cf3b68979580a9c hw/pci/pcie: Move hot plug capability check to pre_plug callback
6a1e073378353eb6ac0565e0dc649b3db76ed5dc hw/pci/pcie: Replace PCI_DEVICE() casts with existing variable
b660a84bbb0eb1a76b505648d31d5e82594fb75e job: take each job's lock individually in job_txn_apply
530a0963184e57e71a5b538e9161f115df533e96 pcie_root_port: Add hotplug disabling option
c6bdc312f30d5c7326aa2fdca3e0f98c15eb541a qapi: Add '@allow-write-only-overlay' feature for 'blockdev-snapshot'
5d72c68b49769c927e90b78af6d90f6a384b26ac qcow2: Expose bitmaps' size during measure
eb8a0cf3ba26611f3981f8f45ac6a868975a68cc qcow2: Forward ZERO_WRITE flag for full preallocation
f01643fb8b47e8a70c04bbf45e0f12a9e5bc54de qcow2: Support BDRV_REQ_ZERO_WRITE for truncate
a555b8092abc6f1bbe4b64c516679cbd68fcfbd8 qemu-file: Don't do IO after shutdown
08558e33257ec796594bd411261028a93414a70c replication: assert we own context before job_cancel_sync
49b44549ace7890fffdf027fd3695218ee7f1121 virtio-blk: On restart, process queued requests in the proper context
7aa1c247b466870b0704d3ccdc3755e5e7394dca virtio-blk: Refactor the code that processes queued requests
d0435bc513e23a4961b6af20164d1c6c219eb4ea virtio: don't enable notifications during polling

Vice versa, being on 4.2.1 already gives Ubuntu some block changes that
might have caused this...

Waiting for Sean/Lee to comment on how testable/reliable this is on
their end -> Incomplete

** Changed in: qemu (Ubuntu)
       Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to gork:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device  by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1
  /virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall
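
  For context, the loop in the traceback above is just a bounded retry of
  detach_device that raises DeviceDetachFailed once its attempts run out.
  A simplified, purely illustrative sketch of that pattern (the helper
  names below are hypothetical, not nova's or oslo.service's actual API):

  import time

  class DeviceDetachFailed(Exception):
      pass

  def wait_and_retry_detach(detach_once, device_gone,
                            dev="vdb", attempts=8, interval=2):
      for _ in range(attempts):
          detach_once()              # re-sends device_del via libvirt
          time.sleep(interval)       # give the DEVICE_DELETED event time to land
          if device_gone():          # true once libvirt drops the disk from the live config
              return
      # Without the second DEVICE_DELETED event libvirt never drops the disk
      # from the live config, so every attempt eventually ends up here.
      raise DeviceDetachFailed(
          "Device detach failed for %s: "
          "Unable to detach the device from the live config." % dev)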

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (7 preceding siblings ...)
  2020-09-29  7:21 ` Christian Ehrhardt 
@ 2020-09-30 18:29 ` Lee Yarwood
  2020-10-01 11:28 ` Lee Yarwood
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Lee Yarwood @ 2020-09-30 18:29 UTC (permalink / raw)
  To: qemu-devel

@Christian Yes we can test this in OpenStack CI with PPAs if you could
provide Bionic based builds.

Better yet, I could even try to set up a git bisect locally if you have
steps to build from a source tree somewhere?

@James Did you try using OpenStack CI-sized hosts (8 vCPUs, 8GB of RAM)
and the QEMU virt_type? That, combined with running the `tox -e full`
tempest target to invoke a full tempest run, was the most reliable way I
found of hitting this.
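
In case it helps anyone trying to narrow this down outside of a full
tempest run, here's a rough stand-alone sketch that just issues
device_del over QMP and prints whichever DEVICE_DELETED events actually
arrive. It assumes a throwaway guest started by hand (with a guest OS
booted so the unplug request can be acknowledged), an extra
`-qmp unix:/tmp/qmp.sock,server,nowait` monitor, and a disk attached with
id=virtio-disk1 as in the original report; the socket path and timings
are only examples, adjust to taste.

  #!/usr/bin/env python3
  # Rough sketch only: assumes an extra "-qmp unix:/tmp/qmp.sock,server,nowait"
  # monitor on a hand-started guest and a disk with id=virtio-disk1 as in the
  # original report.
  import json
  import socket
  import time

  QMP_SOCK = "/tmp/qmp.sock"    # assumed extra monitor, not the libvirt one
  DEVICE_ID = "virtio-disk1"
  WAIT_SECONDS = 60

  sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  sock.connect(QMP_SOCK)
  sock.settimeout(1.0)
  pending = b""

  def read_msgs():
      # QMP speaks newline-delimited JSON; drain whatever has arrived so far.
      global pending
      try:
          pending += sock.recv(65536)
      except socket.timeout:
          return []
      msgs = []
      while b"\n" in pending:
          line, pending = pending.split(b"\n", 1)
          if line.strip():
              msgs.append(json.loads(line))
      return msgs

  def send(cmd):
      sock.sendall((json.dumps(cmd) + "\r\n").encode())

  # The monitor sends a greeting and refuses commands until capabilities
  # negotiation has been done.
  while not read_msgs():
      time.sleep(0.1)
  send({"execute": "qmp_capabilities"})
  time.sleep(0.2)
  read_msgs()

  send({"execute": "device_del", "arguments": {"id": DEVICE_ID}})

  deadline = time.time() + WAIT_SECONDS
  while time.time() < deadline:
      for msg in read_msgs():
          if msg.get("event") == "DEVICE_DELETED":
              print(msg["timestamp"], msg["data"])

On a host where the detach works, a second DEVICE_DELETED whose data
includes "device": "virtio-disk1" should show up shortly after the
.../virtio-backend one; per the libvirtd log in the report only the
virtio-backend path event ever arrives, which matches the "missing device
in device deleted event" line.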

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (8 preceding siblings ...)
  2020-09-30 18:29 ` Lee Yarwood
@ 2020-10-01 11:28 ` Lee Yarwood
  2020-10-01 13:55 ` Lee Yarwood
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Lee Yarwood @ 2020-10-01 11:28 UTC (permalink / raw)
  To: qemu-devel

> @Christian Yes we can test this in OpenStack CI with PPAs if you 
> could provide Bionic based builds.

I'm not sure what I was thinking here, but we *can* use Focal based
builds of QEMU in CI now.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to grok:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1
  /virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (9 preceding siblings ...)
  2020-10-01 11:28 ` Lee Yarwood
@ 2020-10-01 13:55 ` Lee Yarwood
  2020-10-01 14:14 ` Lee Yarwood
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Lee Yarwood @ 2020-10-01 13:55 UTC (permalink / raw)
  To: qemu-devel

While testing >=v5.0.0 on Fedora 32 I've stumbled across the following
change in QEMU:

https://github.com/qemu/qemu/commit/cce8944cc9efab47d4bf29cfffb3470371c3541b

The change suggests that, prior to v5.0.0, issuing a second request to
detach a PCIe device (such as virtio-blk) while the first was still
pending would cancel the original request. We are hitting the new check
and its error with v5.0.0 in OpenStack CI, which makes me wonder whether
that cancellation is what we are seeing in the original issue in this
bug, where QEMU is < v5.0.0?
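
For anyone who wants to poke at this outside of libvirt, here's a
minimal sketch (not nova's or libvirt's actual code) of driving
device_del over a raw QMP socket and watching which DEVICE_DELETED
events come back. The socket path and the -qmp invocation it assumes
are illustrative only, since libvirt normally owns the monitor socket;
the device alias matches the disk from this bug. On the affected builds
you only ever see the event for .../virtio-backend, never one carrying
"device": "virtio-disk1".

#!/usr/bin/env python3
"""Minimal QMP sketch: issue device_del and watch DEVICE_DELETED events.

Assumes a hand-started QEMU with something like
  -qmp unix:/tmp/qmp.sock,server,nowait
so the socket path and device alias below are illustrative only."""
import json
import socket

QMP_SOCK = "/tmp/qmp.sock"      # assumed monitor socket path
DEVICE_ID = "virtio-disk1"      # alias of the disk being detached

def recv_json(f):
    # QMP replies and events are newline-delimited JSON objects.
    return json.loads(f.readline())

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(QMP_SOCK)
f = sock.makefile("rw", buffering=1)

recv_json(f)                                          # greeting banner
f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
recv_json(f)                                          # {"return": {}}

f.write(json.dumps({"execute": "device_del",
                    "arguments": {"id": DEVICE_ID}}) + "\n")

# Keep reading until the event for the front-end device itself arrives;
# with the behaviour described in this bug only the event for
# /machine/peripheral/virtio-disk1/virtio-backend ever shows up.
while True:
    msg = recv_json(f)
    if msg.get("event") == "DEVICE_DELETED":
        print("DEVICE_DELETED:", msg.get("data"))
        if msg.get("data", {}).get("device") == DEVICE_ID:
            break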

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (10 preceding siblings ...)
  2020-10-01 13:55 ` Lee Yarwood
@ 2020-10-01 14:14 ` Lee Yarwood
  2020-10-02  8:59 ` Christian Ehrhardt 
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Lee Yarwood @ 2020-10-01 14:14 UTC (permalink / raw)
  To: qemu-devel

Here's an example from QMP_requests_and_replies_from_libvirtd.log.gz,
attached to the bug earlier by Kashyap. We can see two device_del
commands issued before QEMU emits only a single DEVICE_DELETED event:

http://paste.openstack.org/show/798607/

# zgrep 0x7f768806eaf0 QMP_requests_and_replies_from_libvirtd.log.gz | egrep '(device_del|DEVICE_DELETED)'
2020-09-07 19:50:11.315+0000: 65558: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}
2020-09-07 19:50:17.535+0000: 65557: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}
2020-09-07 19:50:18.138+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f768806eaf0 event={"timestamp": {"seconds": 1599508218, "microseconds": 138046}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
2020-09-07 19:50:24.549+0000: 65556: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-371"}
2020-09-07 19:50:33.561+0000: 65556: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-373"}
2020-09-07 19:50:44.572+0000: 65557: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-375"}
2020-09-07 19:50:57.583+0000: 65558: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-377"}
2020-09-07 19:51:12.595+0000: 65556: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-379"}
2020-09-07 19:51:30.548+0000: 65557: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-381"}
2020-09-07 19:51:51.804+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-383"}
2020-09-07 19:53:33.286+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-385"}
2020-09-07 19:53:40.310+0000: 65558: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-387"}
2020-09-07 19:53:49.353+0000: 65556: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-389"}
2020-09-07 19:54:00.771+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-391"}
2020-09-07 19:54:15.737+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-393"}
2020-09-07 19:54:30.767+0000: 65557: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-395"}
2020-09-07 19:54:47.778+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-397"}
2020-09-07 19:55:06.793+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f768806eaf0 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-399"}
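
To make the imbalance easier to see at a glance, something along these
lines will pair up the requests and events per monitor. It is a rough
sketch that assumes the libvirtd debug log has already been gunzipped;
the default file name and monitor pointer are simply the ones from the
zgrep above. For this trace it reports seventeen device_del requests
against a single backend-only DEVICE_DELETED event.

#!/usr/bin/env python3
"""Rough sketch: tally device_del requests vs DEVICE_DELETED events for
one monitor in a decompressed libvirtd debug log. The default file name
and monitor pointer are taken from the zgrep above."""
import re
import sys

log = sys.argv[1] if len(sys.argv) > 1 else "QMP_requests_and_replies_from_libvirtd.log"
mon = sys.argv[2] if len(sys.argv) > 2 else "0x7f768806eaf0"

sends, events = [], []
with open(log, errors="replace") as fh:
    for line in fh:
        if mon not in line:
            continue
        if "QEMU_MONITOR_SEND_MSG" in line and '"device_del"' in line:
            m = re.search(r'"id":"(libvirt-\d+)"', line)
            sends.append(m.group(1) if m else "?")
        elif "QEMU_MONITOR_RECV_EVENT" in line and "DEVICE_DELETED" in line:
            m = re.search(r'"path": "([^"]+)"', line)
            events.append(m.group(1) if m else "?")

print("device_del requests  :", len(sends), ",".join(sends))
print("DEVICE_DELETED events:", len(events))
for path in events:
    print("   ", path)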

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to grok:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device  by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}
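
  For clarity, what n-cpu is doing here essentially boils down to a live
  detachDeviceFlags() call through libvirt-python with the XML above. A
  stand-alone sketch of the same request (the qemu:///system URI and the
  trimmed-down XML are assumptions for the example, not literally what
  nova passes) looks like this:

  import libvirt

  DISK_XML = """
  <disk type="network" device="disk">
    <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
    <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
      <host name="158.69.70.115" port="6789"/>
    </source>
    <target dev="vdb" bus="virtio"/>
  </disk>
  """

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("instance-0000003a")
  # Request a live detach; completion is signalled asynchronously via the
  # DEVICE_DELETED events shown below.
  dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)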

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1
  /virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event
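
  For reference, the same monitor interaction can be exercised without
  libvirt at all. The sketch below is purely illustrative (it assumes a
  guest started with an extra "-qmp unix:/tmp/qmp.sock,server,nowait"
  socket, which is not how libvirt drives the monitor in this job): it
  issues the same device_del and prints every DEVICE_DELETED event that
  follows. A healthy virtio-blk unplug should produce two events, while
  the failing runs above only ever show the virtio-backend one.

  import json
  import socket

  QMP_SOCK = "/tmp/qmp.sock"    # assumed stand-alone monitor socket
  DEVICE_ID = "virtio-disk1"    # alias used by libvirt in the logs above

  def recv_msg(f):
      # QMP messages are newline-delimited JSON objects.
      return json.loads(f.readline())

  s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  s.connect(QMP_SOCK)
  f = s.makefile("rw")

  print("greeting:", recv_msg(f))
  f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
  f.flush()
  print("caps:", recv_msg(f))

  # Same command libvirt sends as libvirt-367 above; QEMU only acks it
  # here, the actual unplug completes asynchronously.
  f.write(json.dumps({"execute": "device_del",
                      "arguments": {"id": DEVICE_ID}}) + "\n")
  f.flush()

  # Expect two DEVICE_DELETED events for a virtio-blk disk: first one for
  # .../virtio-disk1/virtio-backend, then one for the device itself.
  while True:
      msg = recv_msg(f)
      print(msg)
      if (msg.get("event") == "DEVICE_DELETED"
              and msg.get("data", {}).get("device") == DEVICE_ID):
          break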

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall
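
  Finally, a quick way to confirm from the libvirtd debug log whether the
  second DEVICE_DELETED (the one carrying "device": "virtio-disk1") ever
  arrives is to pair the events up per device path. The snippet below is
  just a rough helper written against the qemuMonitorJSONIOProcessLine
  debug lines quoted above, nothing official:

  import json
  import re
  import sys

  EVENT_RE = re.compile(r'qemuMonitorJSONIOProcessLine:\d+ : Line \[(\{.*\})\]')

  backend_only = {}
  for line in open(sys.argv[1]):
      m = EVENT_RE.search(line)
      if not m:
          continue
      try:
          msg = json.loads(m.group(1))
      except ValueError:
          continue
      if msg.get("event") != "DEVICE_DELETED":
          continue
      path = msg["data"]["path"]
      if path.endswith("/virtio-backend"):
          # remember the unplug until the matching frontend event shows up
          backend_only[path[:-len("/virtio-backend")]] = True
      else:
          backend_only.pop(path, None)

  for path in backend_only:
      print("no frontend DEVICE_DELETED seen for", path)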

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (11 preceding siblings ...)
  2020-10-01 14:14 ` Lee Yarwood
@ 2020-10-02  8:59 ` Christian Ehrhardt 
  2020-10-02  9:56 ` Lee Yarwood
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Christian Ehrhardt  @ 2020-10-02  8:59 UTC (permalink / raw)
  To: qemu-devel

Thanks Lee,
builds are most reliably done in Launchpad IMHO, and the Ubuntu Delta is a quilt stack which isn't as easily bisectable.
If we end up having to search outside of the Ubuntu Delta I can help you get qemu built from git for that.
But for some initial probing of which range of changes we actually want to look at, let me provide you with PPAs.

Try #1 in PPA [1] is the fix you referred to in [2], which avoids the
double delete causing unexpected results. It will also add the warnings
you have seen, so giving it a test should be a great first step.

Let me know what the results with that are.

[1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4294/+packages
[2]: https://github.com/qemu/qemu/commit/cce8944cc9efab47d4bf29cfffb3470371c3541b

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (12 preceding siblings ...)
  2020-10-02  8:59 ` Christian Ehrhardt 
@ 2020-10-02  9:56 ` Lee Yarwood
  2020-10-03  7:49 ` Christian Ehrhardt 
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Lee Yarwood @ 2020-10-02  9:56 UTC (permalink / raw)
  To: qemu-devel

Thanks Christian,

I'll confirm that we see the error from [2] with that PPA shortly.

In the meantime I've removed the device detach retry logic from
OpenStack Nova and now always see two DEVICE_DELETED events raised by
QEMU after a single device_del request from libvirt:

http://paste.openstack.org/show/798637/

# zgrep 'debug.*"event": "DEVICE_DELETED"' libvirtd_log.txt.gz 
2020-10-01 21:37:33.180+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588253, "microseconds": 179881}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:37:33.242+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588253, "microseconds": 242565}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:40:17.062+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588417, "microseconds": 62704}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:40:17.114+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588417, "microseconds": 113888}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:40:20.911+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588420, "microseconds": 911560}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:40:20.985+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588420, "microseconds": 985753}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:42:53.528+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588573, "microseconds": 528330}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:42:53.583+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588573, "microseconds": 583063}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]

2020-10-01 21:44:21.156+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588661, "microseconds": 155903}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:44:21.214+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588661, "microseconds": 213714}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]

2020-10-01 21:45:55.422+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588755, "microseconds": 422238}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:45:55.479+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588755, "microseconds": 479250}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]

2020-10-01 21:46:09.001+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588769, "microseconds": 970}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:46:09.059+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588769, "microseconds": 58799}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]

2020-10-01 21:48:05.182+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588885, "microseconds": 182382}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:48:05.246+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588885, "microseconds": 246077}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:48:08.211+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588888, "microseconds": 211386}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}]
2020-10-01 21:48:08.269+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601588888, "microseconds": 269371}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}]

2020-10-01 21:51:58.384+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589118, "microseconds": 384455}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:51:58.443+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589118, "microseconds": 442848}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:52:02.482+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589122, "microseconds": 361461}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk2/virtio-backend"}}]
2020-10-01 21:52:02.482+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589122, "microseconds": 427718}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk2", "path": "/machine/peripheral/virtio-disk2"}}]

2020-10-01 21:52:18.258+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589138, "microseconds": 258652}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:52:18.326+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589138, "microseconds": 325449}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:52:43.585+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589163, "microseconds": 585656}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:52:43.656+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589163, "microseconds": 656097}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:53:00.216+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589180, "microseconds": 216080}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:53:00.277+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589180, "microseconds": 276944}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:53:18.426+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589198, "microseconds": 425877}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:53:18.477+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589198, "microseconds": 477615}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 21:53:35.651+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589215, "microseconds": 252907}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 21:53:35.652+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589215, "microseconds": 304900}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 22:02:58.751+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589778, "microseconds": 751743}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 22:02:58.782+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589778, "microseconds": 782611}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 22:03:02.982+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589782, "microseconds": 982181}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 22:03:03.045+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589783, "microseconds": 45393}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 22:04:05.845+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589845, "microseconds": 845577}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 22:04:05.907+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589845, "microseconds": 907294}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 22:04:08.004+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589848, "microseconds": 4374}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 22:04:08.066+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589848, "microseconds": 66444}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 22:05:52.990+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589952, "microseconds": 990549}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 22:05:53.015+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601589953, "microseconds": 14845}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 22:07:37.477+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601590057, "microseconds": 477326}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 22:07:37.544+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601590057, "microseconds": 544586}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

2020-10-01 22:08:15.655+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601590095, "microseconds": 655117}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
2020-10-01 22:08:15.717+0000: 55842: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1601590095, "microseconds": 716847}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1", "path": "/machine/peripheral/virtio-disk1"}}]

I'll close this bug out as invalid once I've tested with the PPA and
confirmed that we hit the new check.
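
For reference, below is a minimal client-side sketch (not Nova or
libvirt code) of waiting for both events over QMP. It assumes QEMU
exposes a QMP unix socket at /tmp/qmp.sock and that the disk frontend
keeps the alias virtio-disk1 seen in the logs above; as the log pairs
show, the first DEVICE_DELETED only carries the backend "path" and only
the second one carries the "device" alias, so the sketch treats the
detach as complete only once an event with a "device" field arrives:

import json
import socket

SOCKET_PATH = "/tmp/qmp.sock"  # assumption: QEMU started with -qmp unix:/tmp/qmp.sock,server,nowait
DEVICE_ID = "virtio-disk1"     # frontend alias used by libvirt in the logs above


def qmp_recv(qmp):
    """Read one newline-terminated JSON message from the QMP stream."""
    return json.loads(qmp.readline())


def main():
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(SOCKET_PATH)
    qmp = sock.makefile("rw")

    qmp_recv(qmp)                                  # greeting banner
    qmp.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    qmp.flush()
    qmp_recv(qmp)                                  # {"return": {}}

    qmp.write(json.dumps({"execute": "device_del",
                          "arguments": {"id": DEVICE_ID}}) + "\n")
    qmp.flush()

    backend_gone = frontend_gone = False
    while not (backend_gone and frontend_gone):
        msg = qmp_recv(qmp)
        if msg.get("event") != "DEVICE_DELETED":
            continue                               # skip replies and unrelated events
        data = msg.get("data", {})
        if "device" in data:
            frontend_gone = True                   # second event: the frontend itself is gone
        elif data.get("path", "").endswith("/virtio-backend"):
            backend_gone = True                    # first event: only the virtio backend so far

    print("both DEVICE_DELETED events seen, %s fully detached" % DEVICE_ID)


if __name__ == "__main__":
    main()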

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (13 preceding siblings ...)
  2020-10-02  9:56 ` Lee Yarwood
@ 2020-10-03  7:49 ` Christian Ehrhardt 
  2021-05-08  6:09 ` Thomas Huth
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Christian Ehrhardt  @ 2020-10-03  7:49 UTC (permalink / raw)
  To: qemu-devel

In my manual tests I was seeing that the DEVICE_DELETED event takes a
bit longer to arrive; maybe your retry logic now kicks in before the
removal is complete, and that is what triggers the problem.

So with the PPA you should get:
a) a warning, but no longer the cancelled/undefined behavior on a double delete
b) the ability to tune your retry logic so it no longer triggers the warning messages

So I wasn't sure from the logs - is the fix in the PPA enough to
overcome the issue completely, or is it still missing something?
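
Purely as an illustration of that kind of tuning (this is not the Nova
implementation; the domain name, target dev, device XML and timings
below are placeholders), a retry loop using libvirt-python could check
the live domain XML and wait long enough between attempts for the
second DEVICE_DELETED event to arrive before re-issuing the detach:

import time
import libvirt

DOMAIN_NAME = "instance-0000003a"  # placeholder: guest name from the logs above
TARGET_DEV = "vdb"                 # placeholder: target dev of the disk being detached
DEVICE_XML = """<disk type="network" device="disk">
  <target dev="vdb" bus="virtio"/>
</disk>"""                         # placeholder: minimal XML identifying the disk

WAIT_BETWEEN_ATTEMPTS = 10         # seconds; leave time for both DEVICE_DELETED events
MAX_ATTEMPTS = 6


def device_still_attached(dom):
    """Check the live XML for the target dev instead of blindly re-sending device_del."""
    return 'dev="%s"' % TARGET_DEV in dom.XMLDesc(0)


def main():
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(DOMAIN_NAME)

    for attempt in range(1, MAX_ATTEMPTS + 1):
        if not device_still_attached(dom):
            print("disk %s detached after %d attempt(s)" % (TARGET_DEV, attempt - 1))
            return
        try:
            dom.detachDeviceFlags(DEVICE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        except libvirt.libvirtError as exc:
            # The device may already be gone (DeviceNotFound) or the first
            # removal may still be in flight; just log and re-check after
            # the wait instead of hammering device_del.
            print("attempt %d: %s" % (attempt, exc))
        time.sleep(WAIT_BETWEEN_ATTEMPTS)

    raise RuntimeError("disk %s still attached after %d attempts"
                       % (TARGET_DEV, MAX_ATTEMPTS))


if __name__ == "__main__":
    main()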

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  New
Status in qemu package in Ubuntu:
  Incomplete

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to grok:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device  by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1
  /virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event
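
  What libvirt is waiting for here is a second DEVICE_DELETED event whose
  data names the frontend alias ("device": "virtio-disk1"). The event above
  only carries the backend "path", which is why
  qemuMonitorJSONHandleDeviceDeleted logs "missing device in device deleted
  event" and the disk is never removed from the domain definition. A rough
  sketch of the expected QMP exchange, using a hand-rolled client over a
  monitor socket (the socket path is made up for the example):

  import json
  import socket

  def send(f, msg):
      f.write(json.dumps(msg) + "\n")
      f.flush()

  sock = socket.socket(socket.AF_UNIX)
  sock.connect("/tmp/instance-0000003a.qmp")   # assumed monitor socket
  f = sock.makefile("rw")

  json.loads(f.readline())                     # greeting: {"QMP": {...}}
  send(f, {"execute": "qmp_capabilities"})
  json.loads(f.readline())                     # {"return": {}}

  send(f, {"execute": "device_del", "arguments": {"id": "virtio-disk1"}})

  # Keep reading until the frontend itself is reported gone.  In the failing
  # runs only the backend event (a bare "path", no "device" key) ever shows
  # up, so a real client needs a timeout here -- which is exactly where this
  # bug bites.
  while True:
      msg = json.loads(f.readline())
      if msg.get("event") == "DEVICE_DELETED" and \
              msg.get("data", {}).get("device") == "virtio-disk1":
          break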

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived
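
  At this point QEMU has already forgotten the 'virtio-disk1' alias, so every
  further device_del fails with the DeviceNotFound class seen above, while
  libvirt (and therefore nova) still lists the disk as attached. A caller
  that wanted to treat this as "already gone" rather than as a hard failure
  would have to inspect the error class in the reply, roughly as follows
  (illustrative only):

  def device_already_gone(reply):
      # reply is a parsed QMP response such as:
      # {"id": "libvirt-369", "error": {"class": "DeviceNotFound",
      #  "desc": "Device 'virtio-disk1' not found"}}
      err = reply.get("error") or {}
      return err.get("class") == "DeviceNotFound"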

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
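
  The repeated "detach device xml" entries above are nova retrying the detach
  on a growing interval before giving up. A stripped-down version of that
  retry pattern (not nova's actual RetryDecorator, and with made-up interval
  values) looks roughly like this:

  import time

  class DeviceDetachFailed(Exception):
      pass

  def detach_with_retry(try_detach, intervals=(5, 7, 11, 13, 15, 17, 19)):
      # try_detach() should return True once the device is gone from the
      # live config; in this bug it never does because the second
      # DEVICE_DELETED event is missing, so we fall through and raise.
      for pause in intervals:
          if try_detach():
              return
          time.sleep(pause)
      raise DeviceDetachFailed(
          "Device detach failed for vdb: "
          "Unable to detach the device from the live config.")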

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check/nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (14 preceding siblings ...)
  2020-10-03  7:49 ` Christian Ehrhardt 
@ 2021-05-08  6:09 ` Thomas Huth
  2021-07-08  4:17 ` Launchpad Bug Tracker
  2021-07-08  4:17 ` Launchpad Bug Tracker
  17 siblings, 0 replies; 19+ messages in thread
From: Thomas Huth @ 2021-05-08  6:09 UTC (permalink / raw)
  To: qemu-devel

Is there still anything left to do here for upstream QEMU?

** Changed in: qemu
       Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  Incomplete
Status in qemu package in Ubuntu:
  Incomplete

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (15 preceding siblings ...)
  2021-05-08  6:09 ` Thomas Huth
@ 2021-07-08  4:17 ` Launchpad Bug Tracker
  2021-07-08  4:17 ` Launchpad Bug Tracker
  17 siblings, 0 replies; 19+ messages in thread
From: Launchpad Bug Tracker @ 2021-07-08  4:17 UTC (permalink / raw)
  To: qemu-devel

[Expired for qemu (Ubuntu) because there has been no activity for 60
days.]

** Changed in: qemu (Ubuntu)
       Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  Expired
Status in qemu package in Ubuntu:
  Expired

Bug description:
  We are in the process of moving OpenStack CI across to use 20.04 Focal
  as the underlying OS and encountering the following issue in any test
  attempting to detach disk devices from running QEMU instances.

  We can see a single DEVICE_DELETED event raised to libvirtd for the
  /machine/peripheral/virtio-disk1/virtio-backend device but we do not
  see a second event for the actual disk. As a result the device is
  still marked as present in libvirt but QEMU reports it as missing in
  subsequent attempts to remove the device.

  The following log snippets can also be found in the following pastebin
  that's slightly easier to grok:

  http://paste.openstack.org/show/797564/

  https://review.opendev.org/#/c/746981/ libvirt: Bump
  MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260
  nova-ceph-multistore

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/dpkg-l.txt

  ii  libvirt-daemon                       6.0.0-0ubuntu8.3                      amd64        Virtualization daemon
  ii  libvirt-daemon-driver-qemu           6.0.0-0ubuntu8.3                      amd64        Virtualization daemon QEMU connection driver
  ii  libvirt-daemon-system                6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files
  ii  libvirt-daemon-system-systemd        6.0.0-0ubuntu8.3                      amd64        Libvirt daemon configuration files (systemd)
  ii  libvirt-dev:amd64                    6.0.0-0ubuntu8.3                      amd64        development files for the libvirt library
  ii  libvirt0:amd64                       6.0.0-0ubuntu8.3                      amd64        library for interfacing with different virtualization systems
  [..]
  ii  qemu-block-extra:amd64               1:4.2-3ubuntu6.4                      amd64        extra block backend modules for qemu-system and qemu-utils
  ii  qemu-slof                            20191209+dfsg-1                       all          Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system                          1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries
  ii  qemu-system-arm                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (arm)
  ii  qemu-system-common                   1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (common files)
  ii  qemu-system-data                     1:4.2-3ubuntu6.4                      all          QEMU full system emulation (data files)
  ii  qemu-system-mips                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (mips)
  ii  qemu-system-misc                     1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (s390x)
  ii  qemu-system-sparc                    1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (sparc)
  ii  qemu-system-x86                      1:4.2-3ubuntu6.4                      amd64        QEMU full system emulation binaries (x86)
  ii  qemu-utils                           1:4.2-3ubuntu6.4                      amd64        QEMU utilities

  https://zuul.opendev.org/t/openstack/build/4c56def513884c5eb3ba7b0adf7fa260/log/controller/logs/libvirt/qemu
  /instance-0000003a_log.txt

  2020-09-07 19:29:55.021+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 Jul 2020 06:40:28 -0400), qemu version: 4.2.0Debian 1:4.2-3ubuntu6.4, kernel: 5.4.0-45-generic, hostname: ubuntu-focal-ovh-bhs1-0019682147
  LC_ALL=C \
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a \
  XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.local/share \
  XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.cache \
  XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-86-instance-0000003a/.config \
  QEMU_AUDIO_DRV=none \
  /usr/bin/qemu-system-x86_64 \
  -name guest=instance-0000003a,debug-threads=on \
  -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-86-instance-0000003a/master-key.aes \
  -machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
  -cpu qemu64 \
  -m 128 \
  -overcommit mem-lock=off \
  -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0d59f238-daef-40d4-adf9-a4fa24c35231 \
  -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=21.1.0,serial=0d59f238-daef-40d4-adf9-a4fa24c35231,uuid=0d59f238-daef-40d4-adf9-a4fa24c35231,family=Virtual Machine' \
  -no-user-config \
  -nodefaults \
  -chardev socket,id=charmonitor,fd=39,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object secret,id=libvirt-3-storage-secret0,data=zT+XibedVJZM2du1+PXpIXHMVJ9a0pVcKihOtCGwlB0=,keyid=masterKey0,iv=536Lfw+nsyvDhFBTOQG4zA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-3-storage-secret0","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
  -object secret,id=libvirt-2-storage-secret0,data=SO9AgCCTvkBBMYHZe+LVzoCF4GUNgvBtkFwRRIji7WI=,keyid=masterKey0,iv=MzGu/h2Api4mMG9lL8hvdg==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"volumes","image":"volume-04dd79b2-3c05-4492-b1d7-7969d24df768","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-2-storage-secret0","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=libvirt-2-format,id=virtio-disk1,write-cache=on,serial=04dd79b2-3c05-4492-b1d7-7969d24df768 \
  -object secret,id=libvirt-1-storage-secret0,data=lhbR9+ewiXiaf3dKoQWP3bk6hlLMLRXnbhh9ZkjZ9dQ=,keyid=masterKey0,iv=WWHpGuOHkwXqxlLxGUqpcA==,format=base64 \
  -blockdev '{"driver":"rbd","pool":"vms","image":"0d59f238-daef-40d4-adf9-a4fa24c35231_disk.config","server":[{"host":"158.69.70.115","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":true,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
  -device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,write-cache=on \
  -netdev tap,fd=41,id=hostnet0 \
  -device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:4d:bb:0b,bus=pci.0,addr=0x3 \
  -add-fd set=2,fd=43 \
  -chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 0.0.0.0:3 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -object rng-random,id=objrng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -msg timestamp=on
  char device redirected to /dev/pts/1 (label charserial0)

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/testr_results.html

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume

  2020-09-07 19:30:13,764 100285 INFO     [tempest.lib.common.rest_client] Request (ServerRescueNegativeTestJSON:_run_cleanups): 202 DELETE https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768 1.261s
  2020-09-07 19:30:13,764 100285 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
          Body: None
      Response - Headers: {'date': 'Mon, 07 Sep 2020 19:30:12 GMT', 'server': 'Apache/2.4.41 (Ubuntu)', 'content-length': '0', 'content-type': 'application/json', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'x-compute-request-id': 'req-502a0106-3eb9-4d42-9dd4-c43ba89187b6', 'connection': 'close', 'status': '202', 'content-location': 'https://158.69.70.115/compute/v2.1/servers/0d59f238-daef-40d4-adf9-a4fa24c35231/os-volume_attachments/04dd79b2-3c05-4492-b1d7-7969d24df768'}
          Body: b''

  # First attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  29957 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] detach device xml: <disk type="network" de
  29958 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  29959 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  29960 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  29961 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  29962 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  29963 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  29964 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  29965 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  29966 Sep 07 19:30:14.185403 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}
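
  For reference, the detach that n-cpu drives here ultimately reduces to a
  virDomainDetachDeviceFlags() call against the live domain. Below is a
  minimal libvirt-python sketch of the same request; the domain name and
  <disk> XML are the ones from the log above, but this is an illustration
  only, not nova's actual guest.detach_device() code.

  import libvirt

  # Illustrative sketch only: nova wraps this call in its own retry logic.
  # The XML is the <disk> element logged above.
  DISK_XML = """
  <disk type="network" device="disk">
    <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
    <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
      <host name="158.69.70.115" port="6789"/>
    </source>
    <target dev="vdb" bus="virtio"/>
    <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
    <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  </disk>
  """

  conn = libvirt.open('qemu:///system')
  dom = conn.lookupByName('instance-0000003a')
  # Ask libvirt to drop the disk from the live (running) configuration,
  # matching the "live config" wording in the eventual nova error.
  dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)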

  # DEVICE_DELETED only raised for /machine/peripheral/virtio-disk1/virtio-backend

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  329344 2020-09-07 19:30:14.165+0000: 65559: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329345 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  329346 2020-09-07 19:30:14.165+0000: 65559: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  329347 2020-09-07 19:30:14.165+0000: 65559: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329348  fd=-1                                                                          
  329349 2020-09-07 19:30:14.165+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-367"}^M
  329350  len=79 ret=79 errno=0                                                          
  329351 2020-09-07 19:30:14.168+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-367"}]
  329352 2020-09-07 19:30:14.168+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"return": {}, "id": "libvirt-367"}
  329353 2020-09-07 19:30:14.168+0000: 65559: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  329354 2020-09-07 19:30:14.201+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}]
  329355 2020-09-07 19:30:14.208+0000: 65555: info : qemuMonitorJSONIOProcessLine:234 : QEMU_MONITOR_RECV_EVENT: mon=0x7f769405e470 event={"timestamp": {"seconds": 1599507014, "microseconds": 201037}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}
  329356 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:181 : mon=0x7f769405e470 obj=0x55dd95d0cba0
  329357 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorEmitEvent:1198 : mon=0x7f769405e470 event=DEVICE_DELETED
  329358 2020-09-07 19:30:14.208+0000: 65555: debug : qemuProcessHandleEvent:549 : vm=0x7f768c0df0b0
  329359 2020-09-07 19:30:14.208+0000: 65555: debug : virObjectEventNew:631 : obj=0x55dd95d3bf60
  329360 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONIOProcessEvent:205 : handle DEVICE_DELETED handler=0x7f7691732840 data=0x55dd95eae3c0
  329361 2020-09-07 19:30:14.208+0000: 65555: debug : qemuMonitorJSONHandleDeviceDeleted:1287 : missing device in device deleted event
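
  QEMU is expected to emit two DEVICE_DELETED events here: one for the
  virtio-backend child object, which only carries a QOM "path", and a second
  for the front-end itself, which also carries the "device" alias
  (virtio-disk1) that libvirt is waiting on. In this run only the first ever
  arrives, so libvirt logs "missing device in device deleted event" and never
  completes the unplug. A small, purely illustrative helper (hypothetical,
  not libvirt code) that tells the two forms apart:

  import json

  def classify_device_deleted(event_json):
      """Hypothetical helper: distinguish backend-only events from the
      front-end event that actually completes the unplug in libvirt."""
      event = json.loads(event_json)
      if event.get("event") != "DEVICE_DELETED":
          return "not a DEVICE_DELETED event"
      data = event.get("data", {})
      if "device" in data:
          # Carries the device alias (e.g. "virtio-disk1"); this is the
          # event libvirt needs before it drops the disk from the domain.
          return "front-end removed: %s" % data["device"]
      # Only a QOM path (e.g. .../virtio-disk1/virtio-backend); libvirt
      # logs "missing device in device deleted event" and keeps waiting.
      return "backend child removed: %s" % data.get("path")

  # The only event seen in the libvirtd log above:
  print(classify_device_deleted(
      '{"event": "DEVICE_DELETED", '
      '"data": {"path": "/machine/peripheral/virtio-disk1/virtio-backend"}}'))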

  # Second attempt to detach the device by n-cpu

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  30046 Sep 07 19:30:19.192548 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG oslo.service.loopingcall [None req-502a0106-3eb9-4d42-9dd4-c43ba89187b6 tempest-ServerRescueNegativeTestJSON-73411582 tempest-ServerRescueNegativeTestJSON-73411582] Waiting for function nova.virt.libvirt.gu
  30047 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30048 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30049 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30050 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30051 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30052 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30053 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30054 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30055 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>
  30056 Sep 07 19:30:19.194846 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:  {{(pid=92697) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:510}}

  # DeviceNotFound raised by QEMU

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/libvirt/libvirtd_log.txt
  (gzipped)

  332479 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9416 : Starting job: job=modify agentJob=none asyncJob=none (vm=0x7f768c0df0b0 name=instance-0000003a, current job=none agentJob=none async=none)
  332480 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjBeginJobInternal:9470 : Started job: modify (async=none vm=0x7f768c0df0b0 name=instance-0000003a)
  332481 2020-09-07 19:30:19.196+0000: 65560: debug : qemuDomainObjEnterMonitorInternal:9869 : Entering monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332482 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2848 : devalias=virtio-disk1
  332483 2020-09-07 19:30:19.196+0000: 65560: debug : qemuMonitorDelDevice:2850 : mon:0x7f769405e470 vm:0x7f768c0df0b0 fd:39
  332484 2020-09-07 19:30:19.196+0000: 65560: info : qemuMonitorSend:993 : QEMU_MONITOR_SEND_MSG: mon=0x7f769405e470 msg={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332485  fd=-1                                                                          
  332486 2020-09-07 19:30:19.196+0000: 65555: info : qemuMonitorIOWrite:450 : QEMU_MONITOR_IO_WRITE: mon=0x7f769405e470 buf={"execute":"device_del","arguments":{"id":"virtio-disk1"},"id":"libvirt-369"}^M
  332487  len=79 ret=79 errno=0                                                          
  332488 2020-09-07 19:30:19.197+0000: 65555: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}]
  332489 2020-09-07 19:30:19.197+0000: 65555: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7f769405e470 reply={"id": "libvirt-369", "error": {"class": "DeviceNotFound", "desc": "Device 'virtio-disk1' not found"}}
  332490 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainObjExitMonitorInternal:9892 : Exited monitor (mon=0x7f769405e470 vm=0x7f768c0df0b0 name=instance-0000003a)
  332491 2020-09-07 19:30:19.197+0000: 65560: debug : qemuDomainDeleteDevice:128 : Detaching of device virtio-disk1 failed and no event arrived
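
  Because the retried device_del fails with DeviceNotFound even though
  libvirt never saw the completing event, the front-end appears to already be
  gone from QEMU's QOM tree. One way to confirm that out of band, assuming
  libvirt-python's libvirt_qemu binding is available and an out-of-band
  monitor command is acceptable for debugging, is to list /machine/peripheral:

  import json
  import libvirt
  import libvirt_qemu

  # Debugging sketch only: out-of-band monitor access is not part of the
  # detach path used by nova or libvirt.
  conn = libvirt.open('qemu:///system')
  dom = conn.lookupByName('instance-0000003a')
  reply = libvirt_qemu.qemuMonitorCommand(
      dom,
      json.dumps({"execute": "qom-list",
                  "arguments": {"path": "/machine/peripheral"}}),
      libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT)
  names = [item["name"] for item in json.loads(reply)["return"]]
  print("virtio-disk1 still present:", "virtio-disk1" in names)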

  # n-cpu continues to retry the detach

  30245 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30246 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30247 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30248 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30249 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30250 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30251 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30252 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30253 Sep 07 19:30:26.209322 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30276 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30277 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30278 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30279 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30280 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30281 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30282 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30283 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30284 Sep 07 19:30:42.028517 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30356 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30357 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30358 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30359 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30360 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30361 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30362 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30363 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30364 Sep 07 19:30:53.232072 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30381 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30382 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30383 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30384 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30385 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30386 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30387 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30388 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30389 Sep 07 19:31:06.239532 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30478 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30479 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30480 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30481 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30482 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30483 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30484 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30485 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30486 Sep 07 19:31:21.369016 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  30796 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  30797 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  30798 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  30799 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  30800 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  30801 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  30802 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  30803 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  30804 Sep 07 19:31:42.590535 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  31050 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: DEBUG nova.virt.libvirt.guest [-] detach device xml: <disk type="network" device="disk">
  31051 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  31052 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <source protocol="rbd" name="volumes/volume-04dd79b2-3c05-4492-b1d7-7969d24df768">
  31053 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:     <host name="158.69.70.115" port="6789"/>
  31054 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   </source>
  31055 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <target dev="vdb" bus="virtio"/>
  31056 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <serial>04dd79b2-3c05-4492-b1d7-7969d24df768</serial>
  31057 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]:   <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
  31058 Sep 07 19:32:01.613201 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: </disk>

  # n-cpu eventually gives up trying to detach the device

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4c5/746981/5/check
  /nova-ceph-multistore/4c56def/controller/logs/screen-n-cpu.txt
  (gzipped)

  31102 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable t
  31103 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  31104 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  31105 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  31106 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  31107 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     return self._sleep_time
  31108 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  31109 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     self.force_reraise()
  31110 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  31111 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  31112 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  31113 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise value
  31114 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  31115 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  31116 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 489, in _do_wait_and_retry_detach
  31117 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  31118 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  31119 Sep 07 19:32:06.850434 ubuntu-focal-ovh-bhs1-0019682147 nova-compute[92697]: ERROR oslo.service.loopingcall
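
  The overall failure mode is a bounded retry loop: nova re-issues the
  detach, waits with a growing interval (about 5 seconds at first, over 20
  seconds towards the end, per the timestamps above), checks whether vdb is
  still present in the live domain XML, and finally gives up with
  DeviceDetachFailed after roughly two minutes. A rough, hypothetical
  approximation of that behaviour (not nova's actual
  _do_wait_and_retry_detach) looks like:

  import time
  import xml.etree.ElementTree as ET

  import libvirt

  class DeviceDetachFailed(Exception):
      """Stand-in for nova.exception.DeviceDetachFailed."""

  def wait_and_retry_detach(dom, disk_xml, dev='vdb', max_attempts=8):
      # Hypothetical sketch of the retry behaviour visible in the log; the
      # real code lives in nova/virt/libvirt/guest.py and is driven by
      # oslo.service's RetryDecorator and looping-call machinery.
      delay = 5
      for _ in range(max_attempts):
          live_xml = dom.XMLDesc(0)
          targets = [t.get('dev') for t in
                     ET.fromstring(live_xml).findall('devices/disk/target')]
          if dev not in targets:
              return  # the guest finally released the device
          try:
              dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
          except libvirt.libvirtError:
              # e.g. QEMU's DeviceNotFound surfaced by libvirt; keep polling
              # the domain XML instead of failing immediately.
              pass
          time.sleep(delay)
          delay += 2  # crude stand-in for the growing interval in the log
      raise DeviceDetachFailed(
          'Device detach failed for %s: unable to detach the device from '
          'the live config.' % dev)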

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions



* [Bug 1894804] Re: Second DEVICE_DELETED event missing during virtio-blk disk device detach
  2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
                   ` (16 preceding siblings ...)
  2021-07-08  4:17 ` Launchpad Bug Tracker
@ 2021-07-08  4:17 ` Launchpad Bug Tracker
  17 siblings, 0 replies; 19+ messages in thread
From: Launchpad Bug Tracker @ 2021-07-08  4:17 UTC (permalink / raw)
  To: qemu-devel

[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
       Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894804

Title:
  Second DEVICE_DELETED event missing during virtio-blk disk device
  detach

Status in QEMU:
  Expired
Status in qemu package in Ubuntu:
  Expired

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894804/+subscriptions



end of thread, other threads:[~2021-07-08  4:28 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-08  8:29 [Bug 1894804] [NEW] Second DEVICE_DELETED event missing during virtio-blk disk device detach Lee Yarwood
2020-09-08  8:40 ` [Bug 1894804] " Kashyap Chamarthy
2020-09-08  9:36 ` Kashyap Chamarthy
2020-09-08 10:19 ` Daniel Berrange
2020-09-16 18:19 ` sean mooney
2020-09-21 11:26 ` Christian Ehrhardt 
2020-09-24 13:03 ` Christian Ehrhardt 
2020-09-25 15:18 ` James Page
2020-09-29  7:21 ` Christian Ehrhardt 
2020-09-30 18:29 ` Lee Yarwood
2020-10-01 11:28 ` Lee Yarwood
2020-10-01 13:55 ` Lee Yarwood
2020-10-01 14:14 ` Lee Yarwood
2020-10-02  8:59 ` Christian Ehrhardt 
2020-10-02  9:56 ` Lee Yarwood
2020-10-03  7:49 ` Christian Ehrhardt 
2021-05-08  6:09 ` Thomas Huth
2021-07-08  4:17 ` Launchpad Bug Tracker
2021-07-08  4:17 ` Launchpad Bug Tracker
