From: Eric Blake <eblake@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>, qemu-block@nongnu.org
Cc: mreitz@redhat.com, stefanha@redhat.com, berrange@redhat.com,
	pbonzini@redhat.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 03/12] nbd: Restrict connection_co reentrance
Date: Mon, 18 Feb 2019 14:30:03 -0600	[thread overview]
Message-ID: <cb0d71c8-65fb-768e-7e5c-758de16d5d76@redhat.com> (raw)
In-Reply-To: <20190218161822.3573-4-kwolf@redhat.com>

On 2/18/19 10:18 AM, Kevin Wolf wrote:
> nbd_client_attach_aio_context() schedules connection_co in the new
> AioContext and this way reenters it in any arbitrary place that has
> yielded. We can restrict this a bit to the function call where the
> coroutine actually sits waiting when it's idle.
> 
> This doesn't solve any bug yet, but it shows where in the code we need
> to support this random reentrance and where we don't have to care.
> 
> Add FIXME comments for the existing bugs that the rest of this series
> will fix.

Wow, that's a lot of comments. Thanks for working on this.

> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  block/nbd-client.h |  1 +
>  block/nbd-client.c | 23 +++++++++++++++++++++++
>  2 files changed, 24 insertions(+)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

> +++ b/block/nbd-client.c
> @@ -76,8 +76,24 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
>      Error *local_err = NULL;
>  
>      while (!s->quit) {
> +        /*
> +         * The NBD client can only really be considered idle when it has
> +         * yielded from qio_channel_readv_all_eof(), waiting for data. This is
> +         * the point where the additional scheduled coroutine entry happens
> +         * after nbd_client_attach_aio_context().
> +         *
> +         * Therefore we keep an additional in_flight reference all the time and
> +         * only drop it temporarily here.
> +         *
> +         * FIXME This is not safe because the QIOChannel could wake up the
> +         * coroutine for a second time; it is not prepared for coroutine
> +         * resumption from external code.
> +         */
> +        bdrv_dec_in_flight(s->bs);
>          assert(s->reply.handle == 0);
>          ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
> +        bdrv_inc_in_flight(s->bs);
> +
>          if (local_err) {
>              trace_nbd_read_reply_entry_fail(ret, error_get_pretty(local_err));
>              error_free(local_err);
> @@ -116,6 +132,8 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
>  
>      s->quit = true;
>      nbd_recv_coroutines_wake_all(s);
> +    bdrv_dec_in_flight(s->bs);
> +
>      s->connection_co = NULL;
>      aio_wait_kick();
>  }
> @@ -970,6 +988,9 @@ void nbd_client_attach_aio_context(BlockDriverState *bs,
>  {
>      NBDClientSession *client = nbd_get_client_session(bs);
>      qio_channel_attach_aio_context(QIO_CHANNEL(client->ioc), new_context);
> +
> +    /* FIXME Really need a bdrv_inc_in_flight() here, but the corresponding
> +     * bdrv_dec_in_flight() would have to be in QIOChannel code :-/ */
>      aio_co_schedule(new_context, client->connection_co);
>  }
>  
> @@ -1076,6 +1097,7 @@ static int nbd_client_connect(BlockDriverState *bs,
>       * kick the reply mechanism.  */
>      qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
>      client->connection_co = qemu_coroutine_create(nbd_connection_entry, client);
> +    bdrv_inc_in_flight(bs);
>      nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
>  
>      logout("Established connection with NBD server\n");
> @@ -1108,6 +1130,7 @@ int nbd_client_init(BlockDriverState *bs,
>  {
>      NBDClientSession *client = nbd_get_client_session(bs);
>  
> +    client->bs = bs;
>      qemu_co_mutex_init(&client->send_mutex);
>      qemu_co_queue_init(&client->free_sema);
>  
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
