From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: Max Reitz <mreitz@redhat.com>, qemu-block@nongnu.org
Cc: qemu-devel@nongnu.org, kwolf@redhat.com
Subject: Re: [PATCH] monitor: hmp_qemu_io: acquire aio contex, fix crash
Date: Thu, 22 Apr 2021 16:59:10 +0300 [thread overview]
Message-ID: <312139f8-c315-8099-a1c0-1e0bee3ca9ca@virtuozzo.com> (raw)
In-Reply-To: <adae2572-a168-886b-d139-6ee8bd11ecd5@redhat.com>
22.04.2021 16:01, Max Reitz wrote:
> On 21.04.21 10:32, Vladimir Sementsov-Ogievskiy wrote:
>> Max reported the following bug:
>>
>> $ ./qemu-img create -f raw src.img 1G
>> $ ./qemu-img create -f raw dst.img 1G
>>
>> $ (echo '
>> {"execute":"qmp_capabilities"}
>> {"execute":"blockdev-mirror",
>> "arguments":{"job-id":"mirror",
>> "device":"source",
>> "target":"target",
>> "sync":"full",
>> "filter-node-name":"mirror-top"}}
>> '; sleep 3; echo '
>> {"execute":"human-monitor-command",
>> "arguments":{"command-line":
>> "qemu-io mirror-top \"write 0 1G\""}}') \
>> | x86_64-softmmu/qemu-system-x86_64 \
>> -qmp stdio \
>> -blockdev file,node-name=source,filename=src.img \
>> -blockdev file,node-name=target,filename=dst.img \
>> -object iothread,id=iothr0 \
>> -device virtio-blk,drive=source,iothread=iothr0
>>
>> crashes:
>>
>> 0 raise () at /usr/lib/libc.so.6
>> 1 abort () at /usr/lib/libc.so.6
>> 2 error_exit
>> (err=<optimized out>,
>> msg=msg@entry=0x55fbb1634790 <__func__.27> "qemu_mutex_unlock_impl")
>> at ../util/qemu-thread-posix.c:37
>> 3 qemu_mutex_unlock_impl
>> (mutex=mutex@entry=0x55fbb25ab6e0,
>> file=file@entry=0x55fbb1636957 "../util/async.c",
>> line=line@entry=650)
>> at ../util/qemu-thread-posix.c:109
>> 4 aio_context_release (ctx=ctx@entry=0x55fbb25ab680) at ../util/async.c:650
>> 5 bdrv_do_drained_begin
>> (bs=bs@entry=0x55fbb3a87000, recursive=recursive@entry=false,
>> parent=parent@entry=0x0,
>> ignore_bds_parents=ignore_bds_parents@entry=false,
>> poll=poll@entry=true) at ../block/io.c:441
>> 6 bdrv_do_drained_begin
>> (poll=true, ignore_bds_parents=false, parent=0x0, recursive=false,
>> bs=0x55fbb3a87000) at ../block/io.c:448
>> 7 blk_drain (blk=0x55fbb26c5a00) at ../block/block-backend.c:1718
>> 8 blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:498
>> 9 blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:491
>> 10 hmp_qemu_io (mon=0x7fffaf3fc7d0, qdict=<optimized out>)
>> at ../block/monitor/block-hmp-cmds.c:628
>>
>> man pthread_mutex_unlock
>> ...
>> EPERM The mutex type is PTHREAD_MUTEX_ERRORCHECK or
>> PTHREAD_MUTEX_RECURSIVE, or the mutex is a robust mutex, and the
>> current thread does not own the mutex.
>>
>> So, the thread does not own the mutex. And we do have an iothread here.
>>
>> Next, note that AIO_WAIT_WHILE() documents that ctx must be acquired
>> exactly once by the caller. But where is it acquired in this call
>> stack? Seemingly nowhere.
>>
>> qemuio_command() does acquire the aio context, but we need the context
>> acquired around blk_unref() as well. Let's do it.
>>
>> Reported-by: Max Reitz <mreitz@redhat.com>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>> block/monitor/block-hmp-cmds.c | 7 +++++++
>> 1 file changed, 7 insertions(+)
>>
>> diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
>> index ebf1033f31..934100d0eb 100644
>> --- a/block/monitor/block-hmp-cmds.c
>> +++ b/block/monitor/block-hmp-cmds.c
>> @@ -559,6 +559,7 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
>> {
>> BlockBackend *blk;
>> BlockBackend *local_blk = NULL;
>> + AioContext *ctx;
>> bool qdev = qdict_get_try_bool(qdict, "qdev", false);
>> const char *device = qdict_get_str(qdict, "device");
>> const char *command = qdict_get_str(qdict, "command");
>> @@ -615,7 +616,13 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
>> qemuio_command(blk, command);
>> fail:
>> + ctx = blk_get_aio_context(blk);
>> + aio_context_acquire(ctx);
>> +
>> blk_unref(local_blk);
>> +
>> + aio_context_release(ctx);
>> +
>> hmp_handle_error(mon, err);
>> }
>
> Looks good. Now I wonder about the rest of this function, though. qemuio_command() acquires the context on its own. So the only thing left that looks a bit like it may want to have the context locked is blk_insert_bs(). Most of its callers seem to run in the BB’s native context, so they don’t have to acquire it; but blk_exp_add() has the context held around it, so... should this place, too?
Seems you are right. blk_insert_bs() calls bdrv_root_attach_child(), and bdrv_root_attach_child() is documented thus: "The caller must hold the AioContext lock @child_bs".
I'll see what can be done here. Adding one more acquire/release section looks bad, and creating a nested aio-context acquire while Paolo is working on removing the aio context lock doesn't seem good either..
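To make the open question concrete, the wrapped call could look roughly like the sketch below (a fragment against QEMU's internal API, not a compilable standalone program; variable and label names are illustrative, mirroring how blk_exp_add() holds the context around its own blk_insert_bs() call):

```c
/* Sketch only, for discussion: hold the AioContext of the node being
 * attached across blk_insert_bs(), since bdrv_root_attach_child()
 * documents that the caller must hold the AioContext lock.
 * Whether adding yet another acquire/release section is the right move
 * while the AioContext lock is being phased out is exactly the question. */
AioContext *ctx = bdrv_get_aio_context(bs);

aio_context_acquire(ctx);
ret = blk_insert_bs(blk, bs, &err);
aio_context_release(ctx);
if (ret < 0) {
    goto fail;
}
```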
--
Best regards,
Vladimir