qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [Bug 1831750] [NEW] virtual machine cpu soft lockup when qemu attach disk
@ 2019-06-05 13:58 贞贵李
  2021-05-05 11:46 ` [Bug 1831750] " Thomas Huth
  0 siblings, 1 reply; 2+ messages in thread
From: 贞贵李 @ 2019-06-05 13:58 UTC (permalink / raw)
  To: qemu-devel

Public bug reported:

Hi, I found a problem where the virtual machine hits a CPU soft lockup
when I attach a disk to the VM while the backend storage network has a
large delay or the I/O pressure is too high.

1) The disk xml which I attached is:

    <disk type='block' device='lun' rawio='yes'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/mapper/360022a11000c1e0a0787c23a000001cb'/>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-1-0'/>
      <address type='drive' controller='0' bus='0' target='1' unit='0'/>
    </disk>

2) The bt of qemu main thread:

#0 0x0000ffff9d78402c in pread64 () from /lib64/libpthread.so.0
#1 0x0000aaaace3357d8 in pread64 (__offset=0, __nbytes=4096, __buf=0xaaaad47a5200, __fd=202) at /usr/include/bits/unistd.h:99
#2 raw_is_io_aligned (fd=fd@entry=202, buf=buf@entry=0xaaaad47a5200, len=len@entry=4096) at block/raw-posix.c:294
#3 0x0000aaaace33597c in raw_probe_alignment (bs=bs@entry=0xaaaad32ea920, fd=202, errp=errp@entry=0xfffffef7a330) at block/raw-posix.c:349
#4 0x0000aaaace335a48 in raw_refresh_limits (bs=0xaaaad32ea920, errp=0xfffffef7a330) at block/raw-posix.c:811
#5 0x0000aaaace3404b0 in bdrv_refresh_limits (bs=0xaaaad32ea920, errp=0xfffffef7a330, errp@entry=0xfffffef7a360) at block/io.c:122
#6 0x0000aaaace340504 in bdrv_refresh_limits (bs=bs@entry=0xaaaad09ce800, errp=errp@entry=0xfffffef7a3b0) at block/io.c:97
#7 0x0000aaaace2eb9f0 in bdrv_open_common (bs=bs@entry=0xaaaad09ce800, file=file@entry=0xaaaad0e89800, options=<optimized out>, errp=errp@entry=0xfffffef7a450)
at block.c:1194
#8 0x0000aaaace2eedec in bdrv_open_inherit (filename=<optimized out>, filename@entry=0xaaaad25f92d0 "/dev/mapper/36384c4f100630193359db7a80000011d",
reference=reference@entry=0x0, options=<optimized out>, options@entry=0xaaaad3d0f4b0, flags=<optimized out>, flags@entry=128, parent=parent@entry=0x0,
child_role=child_role@entry=0x0, errp=errp@entry=0xfffffef7a710) at block.c:1895
#9 0x0000aaaace2ef510 in bdrv_open (filename=filename@entry=0xaaaad25f92d0 "/dev/mapper/36384c4f100630193359db7a80000011d", reference=reference@entry=0x0,
options=options@entry=0xaaaad3d0f4b0, flags=flags@entry=128, errp=errp@entry=0xfffffef7a710) at block.c:1979
#10 0x0000aaaace331ef0 in blk_new_open (filename=filename@entry=0xaaaad25f92d0 "/dev/mapper/36384c4f100630193359db7a80000011d", reference=reference@entry=0x0,
options=options@entry=0xaaaad3d0f4b0, flags=128, errp=errp@entry=0xfffffef7a710) at block/block-backend.c:213
#11 0x0000aaaace0da1f4 in blockdev_init (file=file@entry=0xaaaad25f92d0 "/dev/mapper/36384c4f100630193359db7a80000011d", bs_opts=bs_opts@entry=0xaaaad3d0f4b0,
errp=errp@entry=0xfffffef7a710) at blockdev.c:603
#12 0x0000aaaace0dc478 in drive_new (all_opts=all_opts@entry=0xaaaad4dc31d0, block_default_type=<optimized out>) at blockdev.c:1116
#13 0x0000aaaace0e3ee0 in add_init_drive (
optstr=optstr@entry=0xaaaad0872ec0 "file=/dev/mapper/36384c4f100630193359db7a80000011d,format=raw,if=none,id=drive-scsi0-0-0-3,cache=none,aio=native")
at device-hotplug.c:46
#14 0x0000aaaace0e3f78 in hmp_drive_add (mon=0xfffffef7a810, qdict=0xaaaad0c8f000) at device-hotplug.c:67
#15 0x0000aaaacdf7d688 in handle_hmp_command (mon=0xfffffef7a810, cmdline=<optimized out>) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:3199
#16 0x0000aaaacdf7d778 in qmp_human_monitor_command (
command_line=0xaaaacfc8e3c0 "drive_add dummy file=/dev/mapper/36384c4f100630193359db7a80000011d,format=raw,if=none,id=drive-scsi0-0-0-3,cache=none,aio=native",
has_cpu_index=false, cpu_index=0, errp=errp@entry=0xfffffef7a968) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:660
#17 0x0000aaaace0fdb30 in qmp_marshal_human_monitor_command (args=<optimized out>, ret=0xfffffef7a9e0, errp=0xfffffef7a9d8) at qmp-marshal.c:2223
#18 0x0000aaaace3b6ad0 in do_qmp_dispatch (request=<optimized out>, errp=0xfffffef7aa20, errp@entry=0xfffffef7aa40) at qapi/qmp-dispatch.c:115
#19 0x0000aaaace3b6d58 in qmp_dispatch (request=<optimized out>) at qapi/qmp-dispatch.c:142
#20 0x0000aaaacdf79398 in handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:4010
#21 0x0000aaaace3bd6c0 in json_message_process_token (lexer=0xaaaacf834c80, input=<optimized out>, type=JSON_RCURLY, x=214, y=274) at qobject/json-streamer.c:105
#22 0x0000aaaace3f3d4c in json_lexer_feed_char (lexer=lexer@entry=0xaaaacf834c80, ch=<optimized out>, flush=flush@entry=false) at qobject/json-lexer.c:319
#23 0x0000aaaace3f3e6c in json_lexer_feed (lexer=0xaaaacf834c80, buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:369
#24 0x0000aaaacdf77c64 in monitor_qmp_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>) at /usr/src/debug/qemu-kvm-2.8.1/monitor.c:4040
#25 0x0000aaaace0eab18 in tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0xaaaacf90b280) at qemu-char.c:3260
#26 0x0000ffff9dadf200 in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#27 0x0000aaaace3c4a00 in glib_pollfds_poll () at util/main-loop.c:230
#28 0x0000aaaace3c4a88 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:278
#29 0x0000aaaace3c4bf0 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:534
#30 0x0000aaaace0f5d08 in main_loop () at vl.c:2120
#31 0x0000aaaacdf3a770 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:5017


From the backtrace we can see that when QEMU handles a monitor command
such as drive_add, the main thread takes qemu_global_mutex and then does
a pread() inside raw_probe_alignment(). pread() is a synchronous
operation: if the backend storage network has a large delay or the I/O
pressure is too high, the pread() may not return for a long time. While
it blocks, the vCPU threads cannot acquire qemu_global_mutex, so they
cannot be scheduled for a long time, and the virtual machine reports a
CPU soft lockup.
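
To make the blocking step concrete, here is a minimal sketch in plain
POSIX C of the probe pattern that frames #0-#3 show. It is a simplified
stand-in, not QEMU's actual raw-posix.c code: the alignment check issues
a synchronous pread() on the just-opened device, so its latency is
bounded only by how quickly the storage backend answers.

    #include <stdbool.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Simplified stand-in for the raw_is_io_aligned() step in frame #2. */
    static bool probe_is_io_aligned(int fd, size_t len)
    {
        void *buf;
        bool ok;

        /* O_DIRECT needs an aligned buffer; try the candidate size. */
        if (posix_memalign(&buf, len, len) != 0) {
            return false;
        }
        /* Blocks until the device answers; seconds or more under load. */
        ok = pread(fd, buf, len, 0) >= 0;
        free(buf);
        return ok;
    }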


I think the QEMU main thread should not hold qemu_global_mutex for a
long time while handling monitor commands that involve synchronous I/O
operations such as pread(), ioctl(), etc.
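
As a rough illustration of that direction, here is a hedged sketch in
plain pthreads. It is not a real QEMU patch and every name in it is made
up for this example: the idea is just that the slow probe runs on a
worker thread, so the main loop keeps running and qemu_global_mutex
stays available to the vCPU threads until the result is ready.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical job handed from the main loop to a worker thread. */
    struct probe_job {
        int fd;               /* device to probe */
        bool done;            /* set by the worker when pread() returns */
        bool aligned;         /* probe result */
        pthread_mutex_t lock; /* protects done/aligned */
    };

    static void *probe_worker(void *opaque)
    {
        struct probe_job *job = opaque;
        void *buf;
        bool ok = false;

        /* The potentially slow pread() now runs off the main thread. */
        if (posix_memalign(&buf, 4096, 4096) == 0) {
            ok = pread(job->fd, buf, 4096, 0) >= 0;
            free(buf);
        }

        pthread_mutex_lock(&job->lock);
        job->aligned = ok;
        job->done = true;
        pthread_mutex_unlock(&job->lock);
        return NULL;
    }

    /*
     * The main loop would start the worker and check job->done from its
     * normal iteration instead of blocking:
     *
     *     struct probe_job job = { .fd = fd,
     *                              .lock = PTHREAD_MUTEX_INITIALIZER };
     *     pthread_t tid;
     *     pthread_create(&tid, NULL, probe_worker, &job);
     */

QEMU already pushes guest I/O to worker threads or native AIO, so I
guess a real fix would route this open-time probe through similar
machinery rather than hand-rolled pthreads.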

Do you have any solutions or good ideas about it? Thanks for your reply!

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1831750

Title:
  virtual machine cpu soft lockup when qemu attach disk

Status in QEMU:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1831750/+subscriptions



* [Bug 1831750] Re: virtual machine cpu soft lockup when qemu attach disk
  2019-06-05 13:58 [Qemu-devel] [Bug 1831750] [NEW] virtual machine cpu soft lockup when qemu attach disk 贞贵李
@ 2021-05-05 11:46 ` Thomas Huth
  0 siblings, 0 replies; 2+ messages in thread
From: Thomas Huth @ 2021-05-05 11:46 UTC (permalink / raw)
  To: qemu-devel

This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/176


** Changed in: qemu
       Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #176
   https://gitlab.com/qemu-project/qemu/-/issues/176

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1831750

Title:
  virtual machine cpu soft lockup when qemu attach disk

Status in QEMU:
  Expired

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1831750/+subscriptions


