qemu-devel.nongnu.org archive mirror
From: Eric Blake <eblake@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>, Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] nbd/server: attach client channel to the export's AioContext
Date: Mon, 16 Sep 2019 17:30:58 -0500
Message-ID: <13154181-ab02-3c81-f666-a3b995fe66b7@redhat.com>
In-Reply-To: <20190912113111.GH5383@linux.fritz.box>



On 9/12/19 6:31 AM, Kevin Wolf wrote:

>>
>> Yes, I think locking the context during the "if (exp->blk) {" block at
>> nbd/server.c:1646 should do the trick.

That line number has moved over time; which function are you referring to?

> 
> We need to be careful to avoid locking things twice, so maybe
> nbd_export_put() is already too deep inside the NBD server.
> 
> Its callers are:
> 
> * qmp_nbd_server_add(). Like all other QMP handlers in blockdev-nbd.c it
>   neglects to lock the AioContext, but it should do so. The lock is not
>   only needed for the nbd_export_put() call, but even before.
> 
> * nbd_export_close(), which in turn is called from:
>     * nbd_eject_notifier(): run in the main thread, not locked
>     * nbd_export_remove():
>         * qmp_nbd_server_remove(): see above
>     * nbd_export_close_all():
>         * bdrv_close_all()
>         * qmp_nbd_server_stop()

Even weirder: nbd_export_put() calls nbd_export_close(), and
nbd_export_close() calls nbd_export_put().  The mutual recursion is
mind-numbing, and the fact that we use get/put instead of ref/unref like
most other qemu code is not making it any easier to reason about.
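The put/close cycle can be sketched with a self-contained toy. The `Export` struct, function names, and the `frees` counter below are illustrative only, not QEMU's actual NBDExport code: `export_put()` calls `export_close()` when dropping the last reference, `export_close()` in turn drops the reference the export holds on itself via `export_put()`, and a `closed` flag is what keeps the pair from recursing forever.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy model of the put/close cycle; names and fields are illustrative,
 * not QEMU's actual NBDExport. */
typedef struct Export {
    int refcount;
    bool closed;
} Export;

static int frees;               /* counts destructions, for demonstration */

static void export_close(Export *exp);

static void export_put(Export *exp)
{
    assert(exp->refcount > 0);
    if (exp->refcount == 1) {
        export_close(exp);      /* put -> close for the last reference */
    }
    if (--exp->refcount == 0) {
        free(exp);
        frees++;
    }
}

static void export_get(Export *exp)
{
    exp->refcount++;
}

static void export_close(Export *exp)
{
    if (exp->closed) {
        return;                 /* this flag is what bounds the recursion */
    }
    exp->closed = true;
    export_get(exp);            /* keep the export alive while closing */
    /* ... detach clients, drain, etc. ... */
    export_put(exp);            /* close -> put: drop the internal ref */
}

static Export *export_new(void)
{
    Export *exp = calloc(1, sizeof(*exp));
    exp->refcount = 1;
    return exp;
}
```

Either entry point (an explicit close, or dropping the last reference) ends with exactly one destruction, which is the invariant the mutual recursion has to preserve.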

> 
> There are also calls from qemu-nbd, but these can be ignored because we
> don't have iothreads there.
> 
> I think the cleanest would be to take the lock in the outermost callers,
> i.e. all QMP handlers that deal with a specific export, in the eject
> notifier and in nbd_export_close_all().
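The outermost-caller plan above can be sketched as follows. This is a self-contained toy: `AioContext` here is just a pthread-mutex stand-in (the real `aio_context_acquire()`/`aio_context_release()` wrap a recursive mutex plus polling state), and the handler body is schematic, not actual blockdev-nbd.c code.

```c
#include <assert.h>
#include <pthread.h>

/* Stand-in for an AioContext: a plain mutex is enough to show the shape. */
typedef struct AioContext {
    pthread_mutex_t lock;
} AioContext;

static void aio_context_acquire(AioContext *ctx)
{
    pthread_mutex_lock(&ctx->lock);
}

static void aio_context_release(AioContext *ctx)
{
    pthread_mutex_unlock(&ctx->lock);
}

/* Schematic QMP handler: take the lock once, at the outermost level,
 * so nothing deeper inside the NBD server needs to retake it. */
static void qmp_nbd_server_remove_sketch(AioContext *ctx)
{
    aio_context_acquire(ctx);
    /* nbd_export_remove(exp); -- everything below runs with the lock held */
    aio_context_release(ctx);
}
```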

Okay, I'm trying that (I already tried grabbing the aio_context in
nbd_export_close(), but as you predicted, that deadlocked when a nested
call already encountered the lock taken from an outer call).

> 
>> On the other hand, I wonder if there is any situation in which calling
>> blk_unref() without locking the context could be safe. If there isn't
>> any, perhaps we should assert that the lock is held if blk->ctx != NULL
>> to catch this kind of bug earlier?
> 
> blk_unref() must be called from the main thread, and if the BlockBackend
> to be unreferenced is not in the main AioContext, the lock must be held.
> 
> I'm not sure how to assert that locks are held, though. I once looked
> for a way to do this, but it involved either looking at the internal
> state of pthreads mutexes or hacking up QemuMutex with debug state.
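The "hacking up QemuMutex with debug state" option might look like the sketch below; the `DebugMutex` wrapper and helper names are hypothetical, not existing QEMU APIs. Tracking the owner explicitly avoids peeking at pthread internals.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical QemuMutex-style wrapper with debug state: record the
 * owning thread on lock so callers can assert the lock is held. */
typedef struct DebugMutex {
    pthread_mutex_t lock;
    pthread_t owner;
    bool locked;
} DebugMutex;

static void debug_mutex_lock(DebugMutex *m)
{
    pthread_mutex_lock(&m->lock);
    m->owner = pthread_self();
    m->locked = true;
}

static void debug_mutex_unlock(DebugMutex *m)
{
    m->locked = false;
    pthread_mutex_unlock(&m->lock);
}

/* The check blk_unref() could assert when blk->ctx != NULL.  Reading
 * the debug fields is racy in general, but a false answer here only
 * matters when the current thread does NOT hold the lock, in which
 * case the fields are stable enough for a debug assertion. */
static bool debug_mutex_held_by_self(DebugMutex *m)
{
    return m->locked && pthread_equal(m->owner, pthread_self());
}
```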

Even if we can only assert that in a debug build rather than in normal
builds, could any of our CI builds set up that configuration?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



Thread overview: 13+ messages
2019-09-11 16:15 [Qemu-devel] [PATCH] nbd/server: attach client channel to the export's AioContext Sergio Lopez
2019-09-11 17:21 ` Eric Blake
2019-09-11 21:33   ` Eric Blake
2019-09-11 22:03     ` Eric Blake
2019-09-12  6:37     ` Sergio Lopez
2019-09-16 21:11       ` Eric Blake
2019-09-12  8:14     ` Kevin Wolf
2019-09-12 10:30       ` Sergio Lopez
2019-09-12 11:31         ` Kevin Wolf
2019-09-16 22:30           ` Eric Blake [this message]
2019-09-12  8:23 ` Kevin Wolf
2019-09-12 10:13   ` Sergio Lopez
2019-09-12 10:25     ` Kevin Wolf
