* [Qemu-devel] [Bug 1844053] [NEW] task blocked for more than X seconds - events drm_fb_helper_dirty_work
@ 2019-09-15 13:06 James Harvey
  2019-09-15 23:26 ` [Qemu-devel] [Bug 1844053] " James Harvey
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: James Harvey @ 2019-09-15 13:06 UTC (permalink / raw)
  To: qemu-devel

Public bug reported:

I've hit batches of these errors across 9 different boots between
2019-08-21 and now, with Arch Linux on both host and guest, kernels
5.1.16 through 5.2.14 on both, and QEMU 4.0.0 and 4.1.0.  Also in use:
spice 0.14.2, spice-gtk 0.37, spice-protocol 0.14.0, virt-viewer 8.0.

I've been fighting some other issues at the same time (a 5.2 btrfs
regression, and a QEMU qxl regression I hit while trying to temporarily
abandon virtio-vga; see bug 1843151), so I haven't paid enough
attention to what impact these hangs have on the system when they
occur.  In journalctl, I can see I often rebooted minutes after they
occurred, but sometimes much later.  That suggests that either I
rebooted the VM whenever I saw it happen, or it impacted the
functionality of the system.

Please let me know if more information would help, and how I can
gather it.

I've replicated this on both a system with integrated ASPEED video, and
on an AMD Vega 64 running amdgpu.

As an example, one boot reported at 122 seconds, then 245, 368, 491,
614, 737, 860, 983, 1105, and 1228, after which I rebooted.

I have another boot that reported 122/245/368/491/614/737, went quiet
for 10 minutes, started reporting again at 122/245/368/491, and then
stopped.  I didn't reboot until about 20 hours later.

The host shows no graphical impact when this happens, and logs nothing
in its journalctl.

==========

INFO: task kworker/0:1:15 blocked for more than 122 seconds.
      Not tainted 5.2.14-1 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/0:1     D    0    15      2 0x80004000
Workqueue: events drm_fb_helper_dirty_work [drm_kms_helper]
Call Trace:
 ? __schedule+0x27f/0x6d0
 schedule+0x3d/0xc0
 virtio_gpu_queue_fenced_ctrl_buffer+0xa1/0x130 [virtio_gpu]
 ? wait_woken+0x80/0x80
 virtio_gpu_surface_dirty+0x2a5/0x300 [virtio_gpu]
 drm_fb_helper_dirty_work+0x156/0x160 [drm_kms_helper]
 process_one_work+0x19a/0x3b0
 worker_thread+0x50/0x3a0
 kthread+0xfd/0x130
 ? process_one_work+0x3b0/0x3b0
 ? kthread_park+0x80/0x80
 ret_from_fork+0x35/0x40

==========
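
For context: these reports come from the kernel's hung-task watchdog,
which flags tasks stuck in uninterruptible "D" sleep for longer than
hung_task_timeout_secs (120 by default); that matches the roughly
123-second spacing of the reports above.  The blocked function,
virtio_gpu_queue_fenced_ctrl_buffer, appears to be waiting for free
space on the virtio-gpu control queue, i.e. for the host to ack
outstanding commands.  Some guest-side commands that can help watch
for this (journalctl --grep needs systemd 237 or newer):

   # current watchdog timeout, in seconds
   cat /proc/sys/kernel/hung_task_timeout_secs
   # dump all currently blocked (D-state) tasks to the kernel log
   echo w > /proc/sysrq-trigger
   # pull only these reports out of the journal
   journalctl -k --grep 'blocked for more than'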

/usr/bin/qemu-system-x86_64 \
   -name vm,process=qemu:vm \
   -no-user-config \
   -nodefaults \
   -nographic \
   -uuid <uuid> \
   -pidfile <pidfile> \
   -machine q35,accel=kvm,vmport=off,dump-guest-core=off \
   -cpu SandyBridge-IBRS \
   -smp cpus=4,cores=2,threads=1,sockets=2 \
   -m 4G \
   -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
   -drive if=pflash,format=raw,file=/var/qemu/efivars/vm.fd \
   -monitor telnet:localhost:8000,server,nowait,nodelay \
   -spice unix,addr=/tmp/spice.vm.sock,disable-ticketing \
   -device ioh3420,id=pcie.1,bus=pcie.0,slot=0 \
   -device virtio-vga,bus=pcie.1,addr=0 \
   -usbdevice tablet \
   -netdev bridge,id=network0,br=br0 \
   -device virtio-net-pci,netdev=network0,mac=F4:F6:34:F6:34:2d,bus=pcie.0,addr=3 \
   -device virtio-scsi-pci,id=scsi1 \
   -drive driver=raw,node-name=hd0,file=/dev/lvm/vm,if=none,discard=unmap,cache=none,aio=threads
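
If someone wants to try to reproduce this, I'd expect a stripped-down
command line along these lines to exercise the same virtio-vga + SPICE
path.  This is an untested sketch; test.img stands in for any guest
image that boots to a tty:

   qemu-system-x86_64 \
      -machine q35,accel=kvm \
      -m 4G \
      -nographic \
      -spice unix,addr=/tmp/spice.test.sock,disable-ticketing \
      -device virtio-vga \
      -drive file=test.img,format=raw,if=virtio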

** Affects: qemu
     Importance: Undecided
         Status: New


* [Qemu-devel] [Bug 1844053] Re: task blocked for more than X seconds - events drm_fb_helper_dirty_work
  2019-09-15 13:06 [Qemu-devel] [Bug 1844053] [NEW] task blocked for more than X seconds - events drm_fb_helper_dirty_work James Harvey
@ 2019-09-15 23:26 ` James Harvey
  2020-06-01 12:33 ` sles
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: James Harvey @ 2019-09-15 23:26 UTC (permalink / raw)
  To: qemu-devel

** Description changed:

  I've had bunches of these errors on 9 different boots, between
  2019-08-21 and now, with Arch host and guest, from linux 5.1.16 to
  5.2.14 on host and guest, with QEMU 4.0.0 and 4.1.0.  spice 0.14.2,
  spice-gtk 0.37, spice-protocol 0.14.0, virt-viewer 8.0.
  
  I've been fighting with some other issues related to a 5.2 btrfs
  regression, a QEMU qxl regression (see bug 1843151) which I ran into
  when trying to temporarily abandon virtio-vga, and I haven't paid enough
  attention to what impact it has on the system when these occur.  In
  journalctl, I can see I often rebooted minutes after they occurred, but
  sometimes much later.  That must mean whenever I saw it happen that I
  rebooted the VM, or potentially it impacted functionality of the system.
  
  Please let me know if and how I can get more information for you if
  needed.
  
  I've replicated this on both a system with integrated ASPEED video, and
  on an AMD Vega 64 running amdgpu.
  
  As an example, I have one boot which reported at 122 seconds, 245, 368,
  491, 614, 737, 860, 983, 1105, 1228, then I rebooted.
  
  I have another that reported 122/245/368/491/614/737, went away for 10
  minutes, then started reporting again 122/245/368/491, and went away.
  Then, I rebooted about 20 hours later.
  
  Host system has no graphical impact when this happens, and logs nothing
  in its journalctl.
  
+ Guest is tty mode only, with kernel argument "video=1280x1024".  No X
+ server.
+ 
  ==========
  
  INFO: task kworker/0:1:15 blocked for more than 122 seconds.
        Not tainted 5.2.14-1 #1
  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  kworker/0:1     D    0    15      2 0x80004000
  Workqueue: events drm_fb_helper_dirty_work [drm_kms_helper]
  Call Trace:
   ? __schedule+0x27f/0x6d0
   schedule+0x3d/0xc0
   virtio_gpu_queue_fenced_ctrl_buffer+0xa1/0x130 [virtio_gpu]
   ? wait_woken+0x80/0x80
   virtio_gpu_surface_dirty+0x2a5/0x300 [virtio_gpu]
   drm_fb_helper_dirty_work+0x156/0x160 [drm_kms_helper]
   process_one_work+0x19a/0x3b0
   worker_thread+0x50/0x3a0
   kthread+0xfd/0x130
   ? process_one_work+0x3b0/0x3b0
   ? kthread_park+0x80/0x80
   ret_from_fork+0x35/0x40
  
  ==========
  
  /usr/bin/qemu-system-x86_64 \
     -name vm,process=qemu:vm \
     -no-user-config \
     -nodefaults \
     -nographic \
     -uuid <uuid> \
     -pidfile <pidfile> \
     -machine q35,accel=kvm,vmport=off,dump-guest-core=off \
     -cpu SandyBridge-IBRS \
     -smp cpus=4,cores=2,threads=1,sockets=2 \
     -m 4G \
     -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
     -drive if=pflash,format=raw,file=/var/qemu/efivars/vm.fd \
     -monitor telnet:localhost:8000,server,nowait,nodelay \
     -spice unix,addr=/tmp/spice.vm.sock,disable-ticketing \
     -device ioh3420,id=pcie.1,bus=pcie.0,slot=0 \
     -device virtio-vga,bus=pcie.1,addr=0 \
     -usbdevice tablet \
     -netdev bridge,id=network0,br=br0 \
     -device virtio-net-pci,netdev=network0,mac=F4:F6:34:F6:34:2d,bus=pcie.0,addr=3 \
     -device virtio-scsi-pci,id=scsi1 \
     -drive driver=raw,node-name=hd0,file=/dev/lvm/vm,if=none,discard=unmap,cache=none,aio=threads

** Description changed:

  I've had bunches of these errors on 9 different boots, between
  2019-08-21 and now, with Arch host and guest, from linux 5.1.16 to
  5.2.14 on host and guest, with QEMU 4.0.0 and 4.1.0.  spice 0.14.2,
  spice-gtk 0.37, spice-protocol 0.14.0, virt-viewer 8.0.
  
  I've been fighting with some other issues related to a 5.2 btrfs
  regression, a QEMU qxl regression (see bug 1843151) which I ran into
  when trying to temporarily abandon virtio-vga, and I haven't paid enough
- attention to what impact it has on the system when these occur.  In
- journalctl, I can see I often rebooted minutes after they occurred, but
- sometimes much later.  That must mean whenever I saw it happen that I
- rebooted the VM, or potentially it impacted functionality of the system.
+ attention to what impact specifically this virtio_gpu issue has on the
+ system.  In journalctl, I can see I often rebooted minutes after they
+ occurred, but sometimes much later.  That must mean whenever I saw it
+ happen that I rebooted the VM, or potentially it impacted functionality
+ of the system.
  
  Please let me know if and how I can get more information for you if
  needed.
  
  I've replicated this on both a system with integrated ASPEED video, and
  on an AMD Vega 64 running amdgpu.
  
  As an example, I have one boot which reported at 122 seconds, 245, 368,
  491, 614, 737, 860, 983, 1105, 1228, then I rebooted.
  
  I have another that reported 122/245/368/491/614/737, went away for 10
  minutes, then started reporting again 122/245/368/491, and went away.
  Then, I rebooted about 20 hours later.
  
  Host system has no graphical impact when this happens, and logs nothing
  in its journalctl.
  
  Guest is tty mode only, with kernel argument "video=1280x1024".  No X
  server.
  
  ==========
  
  INFO: task kworker/0:1:15 blocked for more than 122 seconds.
        Not tainted 5.2.14-1 #1
  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  kworker/0:1     D    0    15      2 0x80004000
  Workqueue: events drm_fb_helper_dirty_work [drm_kms_helper]
  Call Trace:
   ? __schedule+0x27f/0x6d0
   schedule+0x3d/0xc0
   virtio_gpu_queue_fenced_ctrl_buffer+0xa1/0x130 [virtio_gpu]
   ? wait_woken+0x80/0x80
   virtio_gpu_surface_dirty+0x2a5/0x300 [virtio_gpu]
   drm_fb_helper_dirty_work+0x156/0x160 [drm_kms_helper]
   process_one_work+0x19a/0x3b0
   worker_thread+0x50/0x3a0
   kthread+0xfd/0x130
   ? process_one_work+0x3b0/0x3b0
   ? kthread_park+0x80/0x80
   ret_from_fork+0x35/0x40
  
  ==========
  
  /usr/bin/qemu-system-x86_64 \
     -name vm,process=qemu:vm \
     -no-user-config \
     -nodefaults \
     -nographic \
     -uuid <uuid> \
     -pidfile <pidfile> \
     -machine q35,accel=kvm,vmport=off,dump-guest-core=off \
     -cpu SandyBridge-IBRS \
     -smp cpus=4,cores=2,threads=1,sockets=2 \
     -m 4G \
     -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
     -drive if=pflash,format=raw,file=/var/qemu/efivars/vm.fd \
     -monitor telnet:localhost:8000,server,nowait,nodelay \
     -spice unix,addr=/tmp/spice.vm.sock,disable-ticketing \
     -device ioh3420,id=pcie.1,bus=pcie.0,slot=0 \
     -device virtio-vga,bus=pcie.1,addr=0 \
     -usbdevice tablet \
     -netdev bridge,id=network0,br=br0 \
     -device virtio-net-pci,netdev=network0,mac=F4:F6:34:F6:34:2d,bus=pcie.0,addr=3 \
     -device virtio-scsi-pci,id=scsi1 \
     -drive driver=raw,node-name=hd0,file=/dev/lvm/vm,if=none,discard=unmap,cache=none,aio=threads


* [Bug 1844053] Re: task blocked for more than X seconds - events drm_fb_helper_dirty_work
  2019-09-15 13:06 [Qemu-devel] [Bug 1844053] [NEW] task blocked for more than X seconds - events drm_fb_helper_dirty_work James Harvey
  2019-09-15 23:26 ` [Qemu-devel] [Bug 1844053] " James Harvey
@ 2020-06-01 12:33 ` sles
  2021-04-22  7:35 ` Thomas Huth
  2021-06-22  4:18 ` Launchpad Bug Tracker
  3 siblings, 0 replies; 5+ messages in thread
From: sles @ 2020-06-01 12:33 UTC (permalink / raw)
  To: qemu-devel

I have the same problem running an Ubuntu 20.04 guest on a CentOS 7
host with qxl...


* [Bug 1844053] Re: task blocked for more than X seconds - events drm_fb_helper_dirty_work
  2019-09-15 13:06 [Qemu-devel] [Bug 1844053] [NEW] task blocked for more than X seconds - events drm_fb_helper_dirty_work James Harvey
  2019-09-15 23:26 ` [Qemu-devel] [Bug 1844053] " James Harvey
  2020-06-01 12:33 ` sles
@ 2021-04-22  7:35 ` Thomas Huth
  2021-06-22  4:18 ` Launchpad Bug Tracker
  3 siblings, 0 replies; 5+ messages in thread
From: Thomas Huth @ 2021-04-22  7:35 UTC (permalink / raw)
  To: qemu-devel

The QEMU project is currently considering moving its bug tracking to
another system. For this we need to know which bugs are still valid
and which can already be closed. Thus we are setting older bugs to
"Incomplete" now.

If you still think this bug report is valid, please switch the state
back to "New" within the next 60 days; otherwise this report will be
marked as "Expired". Or please mark it as "Fix Released" if the
problem has already been solved in a newer version of QEMU.

Thank you and sorry for the inconvenience.


** Changed in: qemu
       Status: New => Incomplete


* [Bug 1844053] Re: task blocked for more than X seconds - events drm_fb_helper_dirty_work
  2019-09-15 13:06 [Qemu-devel] [Bug 1844053] [NEW] task blocked for more than X seconds - events drm_fb_helper_dirty_work James Harvey
                   ` (2 preceding siblings ...)
  2021-04-22  7:35 ` Thomas Huth
@ 2021-06-22  4:18 ` Launchpad Bug Tracker
  3 siblings, 0 replies; 5+ messages in thread
From: Launchpad Bug Tracker @ 2021-06-22  4:18 UTC (permalink / raw)
  To: qemu-devel

[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
       Status: Incomplete => Expired


end of thread

Thread overview: 5 messages
2019-09-15 13:06 [Qemu-devel] [Bug 1844053] [NEW] task blocked for more than X seconds - events drm_fb_helper_dirty_work James Harvey
2019-09-15 23:26 ` [Qemu-devel] [Bug 1844053] " James Harvey
2020-06-01 12:33 ` sles
2021-04-22  7:35 ` Thomas Huth
2021-06-22  4:18 ` Launchpad Bug Tracker
