* [PATCH] monitor: hmp_qemu_io: acquire aio context, fix crash
From: Vladimir Sementsov-Ogievskiy @ 2021-04-21  8:32 UTC
  To: qemu-block; +Cc: qemu-devel, mreitz, kwolf, vsementsov

Max reported the following bug:

$ ./qemu-img create -f raw src.img 1G
$ ./qemu-img create -f raw dst.img 1G

$ (echo '
   {"execute":"qmp_capabilities"}
   {"execute":"blockdev-mirror",
    "arguments":{"job-id":"mirror",
                 "device":"source",
                 "target":"target",
                 "sync":"full",
                 "filter-node-name":"mirror-top"}}
'; sleep 3; echo '
   {"execute":"human-monitor-command",
    "arguments":{"command-line":
                 "qemu-io mirror-top \"write 0 1G\""}}') \
| x86_64-softmmu/qemu-system-x86_64 \
   -qmp stdio \
   -blockdev file,node-name=source,filename=src.img \
   -blockdev file,node-name=target,filename=dst.img \
   -object iothread,id=iothr0 \
   -device virtio-blk,drive=source,iothread=iothr0

crashes:

0  raise () at /usr/lib/libc.so.6
1  abort () at /usr/lib/libc.so.6
2  error_exit
   (err=<optimized out>,
   msg=msg@entry=0x55fbb1634790 <__func__.27> "qemu_mutex_unlock_impl")
   at ../util/qemu-thread-posix.c:37
3  qemu_mutex_unlock_impl
   (mutex=mutex@entry=0x55fbb25ab6e0,
   file=file@entry=0x55fbb1636957 "../util/async.c",
   line=line@entry=650)
   at ../util/qemu-thread-posix.c:109
4  aio_context_release (ctx=ctx@entry=0x55fbb25ab680) at ../util/async.c:650
5  bdrv_do_drained_begin
   (bs=bs@entry=0x55fbb3a87000, recursive=recursive@entry=false,
   parent=parent@entry=0x0,
   ignore_bds_parents=ignore_bds_parents@entry=false,
   poll=poll@entry=true) at ../block/io.c:441
6  bdrv_do_drained_begin
   (poll=true, ignore_bds_parents=false, parent=0x0, recursive=false,
   bs=0x55fbb3a87000) at ../block/io.c:448
7  blk_drain (blk=0x55fbb26c5a00) at ../block/block-backend.c:1718
8  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:498
9  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:491
10 hmp_qemu_io (mon=0x7fffaf3fc7d0, qdict=<optimized out>)
   at ../block/monitor/block-hmp-cmds.c:628

man pthread_mutex_unlock
...
    EPERM  The  mutex type is PTHREAD_MUTEX_ERRORCHECK or
    PTHREAD_MUTEX_RECURSIVE, or the mutex is a robust mutex, and the
    current thread does not own the mutex.

So, the thread doesn't own the mutex. And we do have an iothread here.

Next, note that AIO_WAIT_WHILE() documents that ctx must be acquired
exactly once by the caller. But where is it acquired in the call stack?
Seemingly nowhere.

qemuio_command() does acquire the AioContext itself, but we need the
context acquired around blk_unref() as well. Let's do it.
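The required pattern, as a minimal illustration (this snippet is only
for the commit message, not part of the change; it assumes a blk whose
AioContext may belong to an iothread):

    AioContext *ctx = blk_get_aio_context(blk);

    aio_context_acquire(ctx);
    /* blk_unref() may drain, and draining ends up in AIO_WAIT_WHILE(),
     * which expects ctx to be held exactly once by the caller. */
    blk_unref(blk);
    aio_context_release(ctx);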

Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/monitor/block-hmp-cmds.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index ebf1033f31..934100d0eb 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -559,6 +559,7 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
 {
     BlockBackend *blk;
     BlockBackend *local_blk = NULL;
+    AioContext *ctx;
     bool qdev = qdict_get_try_bool(qdict, "qdev", false);
     const char *device = qdict_get_str(qdict, "device");
     const char *command = qdict_get_str(qdict, "command");
@@ -615,7 +616,13 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
     qemuio_command(blk, command);
 
 fail:
+    ctx = blk_get_aio_context(blk);
+    aio_context_acquire(ctx);
+
     blk_unref(local_blk);
+
+    aio_context_release(ctx);
+
     hmp_handle_error(mon, err);
 }
 
-- 
2.29.2




* Re: [PATCH] monitor: hmp_qemu_io: acquire aio context, fix crash
From: Philippe Mathieu-Daudé @ 2021-04-21 19:47 UTC
  To: Vladimir Sementsov-Ogievskiy, qemu-block; +Cc: kwolf, qemu-devel, mreitz

On 4/21/21 10:32 AM, Vladimir Sementsov-Ogievskiy wrote:
> [...]
> @@ -615,7 +616,13 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
>      qemuio_command(blk, command);
>  
>  fail:
> +    ctx = blk_get_aio_context(blk);
> +    aio_context_acquire(ctx);
> +
>      blk_unref(local_blk);
> +
> +    aio_context_release(ctx);

I dare to mention "code smell" here... Not about your fix, but about
the API. Can't we simplify it somehow? Maybe we can't; I don't
understand it well. But it seems bug-prone, and expensive in human
brain resources (whether developing, debugging, or reviewing).
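For instance, could we have something in the spirit of
WITH_QEMU_LOCK_GUARD(), but for AioContext? A hypothetical sketch --
nothing like this exists in the tree today, and note the caveats in
the comment:

    /* Hypothetical helper; evaluates its argument twice, and a
     * 'break' out of the body would skip the release, so this is
     * only a sketch: */
    #define WITH_AIO_CONTEXT_ACQUIRED(ctx)                            \
        for (AioContext *ctx__ = (aio_context_acquire(ctx), (ctx));   \
             ctx__;                                                   \
             aio_context_release(ctx__), ctx__ = NULL)

Then the fix would read:

    WITH_AIO_CONTEXT_ACQUIRED(blk_get_aio_context(blk)) {
        blk_unref(local_blk);
    }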

>      hmp_handle_error(mon, err);
>  }
>  
> 




* Re: [PATCH] monitor: hmp_qemu_io: acquire aio context, fix crash
From: Vladimir Sementsov-Ogievskiy @ 2021-04-21 22:20 UTC
  To: Philippe Mathieu-Daudé, qemu-block; +Cc: qemu-devel, mreitz, kwolf

21.04.2021 22:47, Philippe Mathieu-Daudé wrote:
> On 4/21/21 10:32 AM, Vladimir Sementsov-Ogievskiy wrote:
>> [...]
>> +    ctx = blk_get_aio_context(blk);
>> +    aio_context_acquire(ctx);
>> +
>>       blk_unref(local_blk);
>> +
>> +    aio_context_release(ctx);
> 
> I dare to mention "code smell" here... Not about your fix, but about
> the API. Can't we simplify it somehow? Maybe we can't; I don't
> understand it well. But it seems bug-prone, and expensive in human
> brain resources (whether developing, debugging, or reviewing).

Better would be to move hmp_qemu_io to a coroutine, together with all
the functions it calls and the qemu-io commands... but that's a lot
more work.
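Roughly like this (an untested sketch, all names invented, just to
show the direction):

    typedef struct HMPQemuIoData {
        BlockBackend *blk;
        const char *command;
        bool done;
    } HMPQemuIoData;

    static void coroutine_fn hmp_qemu_io_co_entry(void *opaque)
    {
        HMPQemuIoData *data = opaque;

        /* In coroutine context, blocking points inside qemuio_command()
         * could yield instead of nesting AIO_WAIT_WHILE() polls in the
         * monitor thread. */
        qemuio_command(data->blk, data->command);

        data->done = true;
        aio_wait_kick();
    }

and in hmp_qemu_io() itself:

    HMPQemuIoData data = { .blk = blk, .command = command };
    Coroutine *co = qemu_coroutine_create(hmp_qemu_io_co_entry, &data);

    qemu_coroutine_enter(co);
    while (!data.done) {
        aio_poll(qemu_get_aio_context(), true);
    }

Making the qemu-io commands themselves coroutine-safe is the big part
of the work.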


-- 
Best regards,
Vladimir



* Re: [PATCH] monitor: hmp_qemu_io: acquire aio context, fix crash
From: Max Reitz @ 2021-04-22 13:01 UTC
  To: Vladimir Sementsov-Ogievskiy, qemu-block; +Cc: kwolf, qemu-devel

On 21.04.21 10:32, Vladimir Sementsov-Ogievskiy wrote:
> [...]
>   fail:
> +    ctx = blk_get_aio_context(blk);
> +    aio_context_acquire(ctx);
> +
>       blk_unref(local_blk);
> +
> +    aio_context_release(ctx);
> +
>       hmp_handle_error(mon, err);
>   }

Looks good.  Now I wonder about the rest of this function, though. 
qemuio_command() acquires the context on its own.  So the only thing 
left that looks a bit like it may want to have the context locked is 
blk_insert_bs().  Most of its callers seem to run in the BB’s native 
context, so they don’t have to acquire it; but blk_exp_add() has the 
context held around it, so... should this place, too?

Max




* Re: [PATCH] monitor: hmp_qemu_io: acquire aio context, fix crash
From: Vladimir Sementsov-Ogievskiy @ 2021-04-22 13:59 UTC
  To: Max Reitz, qemu-block; +Cc: qemu-devel, kwolf

22.04.2021 16:01, Max Reitz wrote:
> On 21.04.21 10:32, Vladimir Sementsov-Ogievskiy wrote:
>> [...]
> 
> Looks good.  Now I wonder about the rest of this function, though.
> qemuio_command() acquires the context on its own.  So the only thing
> left that looks a bit like it may want to have the context locked is
> blk_insert_bs().  Most of its callers seem to run in the BB’s native
> context, so they don’t have to acquire it; but blk_exp_add() has the
> context held around it, so... should this place, too?

Seems you are right. blk_insert_bs() calls bdrv_root_attach_child(),
and bdrv_root_attach_child() is documented as: "The caller must hold
the AioContext lock for @child_bs".

I'll see what can be done here. Adding one more acquire/release
section looks bad, and creating a nested aio-context acquire while
Paolo is working on removing the AioContext lock doesn't seem good
either.
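For the record, the straightforward variant would look something like
this (untested sketch):

    AioContext *ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(ctx);
    ret = blk_insert_bs(blk, bs, &err);
    aio_context_release(ctx);

i.e. exactly the kind of extra section I'd rather avoid.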

-- 
Best regards,
Vladimir


