From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, mreitz@redhat.com, eblake@redhat.com,
	stefanha@redhat.com, berrange@redhat.com, pbonzini@redhat.com,
	qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH 03/12] nbd: Restrict connection_co reentrance
Date: Mon, 18 Feb 2019 17:18:13 +0100
Message-ID: <20190218161822.3573-4-kwolf@redhat.com>
In-Reply-To: <20190218161822.3573-1-kwolf@redhat.com>

nbd_client_attach_aio_context() schedules connection_co in the new
AioContext and thereby reenters it at whatever arbitrary point the
coroutine last yielded. We can restrict this a bit to the function call
where the coroutine actually sits waiting when it's idle.

This doesn't solve any bug yet, but it shows which places in the code
need to support this random reentrance and which don't.

Add FIXME comments for the existing bugs that the rest of this series
will fix.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/nbd-client.h |  1 +
 block/nbd-client.c | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)
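
For orientation, the lifetime of the extra in_flight reference looks like
this (a condensed sketch stitched together from the hunks below; error
handling and the reply dispatch are elided):

    /* nbd_client_connect(): take the reference before the coroutine can
     * first be scheduled */
    client->connection_co = qemu_coroutine_create(nbd_connection_entry,
                                                  client);
    bdrv_inc_in_flight(bs);

    /* nbd_connection_entry(): drop it only around the idle wait... */
    while (!s->quit) {
        bdrv_dec_in_flight(s->bs);
        ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
        bdrv_inc_in_flight(s->bs);
        /* ...handle the reply... */
    }

    /* ...and release it for good when the coroutine terminates */
    bdrv_dec_in_flight(s->bs);
    s->connection_co = NULL;
    aio_wait_kick();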

diff --git a/block/nbd-client.h b/block/nbd-client.h
index d990207a5c..09e03013d2 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -35,6 +35,7 @@ typedef struct NBDClientSession {
 
     NBDClientRequest requests[MAX_NBD_REQUESTS];
     NBDReply reply;
+    BlockDriverState *bs;
     bool quit;
 } NBDClientSession;
 
diff --git a/block/nbd-client.c b/block/nbd-client.c
index f0ad54ce21..e776785325 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -76,8 +76,24 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
     Error *local_err = NULL;
 
     while (!s->quit) {
+        /*
+         * The NBD client can only really be considered idle when it has
+         * yielded from qio_channel_readv_all_eof(), waiting for data. This is
+         * the point where the additional scheduled coroutine entry happens
+         * after nbd_client_attach_aio_context().
+         *
+         * Therefore we keep an additional in_flight reference all the time and
+         * only drop it temporarily here.
+         *
+         * FIXME This is not safe because the QIOChannel could wake up the
+         * coroutine for a second time; it is not prepared for coroutine
+         * resumption from external code.
+         */
+        bdrv_dec_in_flight(s->bs);
         assert(s->reply.handle == 0);
         ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
+        bdrv_inc_in_flight(s->bs);
+
         if (local_err) {
             trace_nbd_read_reply_entry_fail(ret, error_get_pretty(local_err));
             error_free(local_err);
@@ -116,6 +132,8 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
 
     s->quit = true;
     nbd_recv_coroutines_wake_all(s);
+    bdrv_dec_in_flight(s->bs);
+
     s->connection_co = NULL;
     aio_wait_kick();
 }
@@ -970,6 +988,9 @@ void nbd_client_attach_aio_context(BlockDriverState *bs,
 {
     NBDClientSession *client = nbd_get_client_session(bs);
     qio_channel_attach_aio_context(QIO_CHANNEL(client->ioc), new_context);
+
+    /* FIXME Really need a bdrv_inc_in_flight() here, but the corresponding
+     * bdrv_dec_in_flight() would have to be in QIOChannel code :-/ */
     aio_co_schedule(new_context, client->connection_co);
 }
 
@@ -1076,6 +1097,7 @@ static int nbd_client_connect(BlockDriverState *bs,
      * kick the reply mechanism.  */
     qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
     client->connection_co = qemu_coroutine_create(nbd_connection_entry, client);
+    bdrv_inc_in_flight(bs);
     nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
 
     logout("Established connection with NBD server\n");
@@ -1108,6 +1130,7 @@ int nbd_client_init(BlockDriverState *bs,
 {
     NBDClientSession *client = nbd_get_client_session(bs);
 
+    client->bs = bs;
     qemu_co_mutex_init(&client->send_mutex);
     qemu_co_queue_init(&client->free_sema);
 
-- 
2.20.1
