* [Qemu-devel] [RFC PATCH 0/3] io/nbd: AioContext support
@ 2016-12-23 18:26 Paolo Bonzini
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext Paolo Bonzini
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Paolo Bonzini @ 2016-12-23 18:26 UTC (permalink / raw)
  To: qemu-devel

This is RFC because the APIs it uses (aio_co_schedule/aio_co_wake) do
not exist yet in master, but it should be enough for a first review of
the QIOChannel API concepts and to give an idea of their usage.

It makes qio_channel_yield aware of AioContexts by adding a new
API qio_channel_set_aio_context, and it lets separate coroutines
use qio_channel_yield for reading vs. writing.

The last patch rewrites the NBD client's I/O management to use the
new infrastructure (and I think the logic here was first proposed
years ago by Stefan).  The benefit is that the new version does not
block if the server writes a partial reply header, and is also a
bit smaller.
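
As a rough sketch of the intended usage (illustration only, not code
from this series; example_read_entry and iothread_ctx are made-up
names):

/* A reader coroutine that parks itself until the channel is readable.
 * The fd handler is armed on whatever AioContext the channel is bound
 * to, not necessarily the main loop. */
static void coroutine_fn example_read_entry(void *opaque)
{
    QIOChannel *ioc = opaque;
    char buf[4096];
    ssize_t len;

    for (;;) {
        len = qio_channel_read(ioc, buf, sizeof(buf), NULL);
        if (len == QIO_CHANNEL_ERR_BLOCK) {
            /* Would block: suspend until G_IO_IN fires. */
            qio_channel_yield(ioc, G_IO_IN);
            continue;
        }
        if (len <= 0) {
            break;
        }
        /* ... consume buf[0..len) ... */
    }
}

Binding the channel to an iothread is then a single call from
non-coroutine code, e.g. qio_channel_set_aio_context(ioc, iothread_ctx);
passing NULL instead moves it back to the main loop.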

Paolo

Paolo Bonzini (3):
  io: add methods to set I/O handlers on AioContext
  io: make qio_channel_yield aware of AioContexts
  nbd: do not block on partial reply header reads

 block/nbd-client.c   | 108 ++++++++++++++++++++-------------------------------
 include/io/channel.h |  41 +++++++++++++++++++
 io/channel-socket.c  |  16 +++++---
 io/channel-tls.c     |  12 ++++++
 io/channel-watch.c   |   6 +++
 io/channel.c         |  87 +++++++++++++++++++++++++++++++----------
 6 files changed, 178 insertions(+), 92 deletions(-)

-- 
2.9.3


* [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext
  2016-12-23 18:26 [Qemu-devel] [RFC PATCH 0/3] io/nbd: AioContext support Paolo Bonzini
@ 2016-12-23 18:26 ` Paolo Bonzini
  2017-01-04 16:45   ` Eric Blake
  2017-01-04 17:14   ` Daniel P. Berrange
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts Paolo Bonzini
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 3/3] nbd: do not block on partial reply header reads Paolo Bonzini
  2 siblings, 2 replies; 13+ messages in thread
From: Paolo Bonzini @ 2016-12-23 18:26 UTC (permalink / raw)
  To: qemu-devel

This is in preparation for making qio_channel_yield work on
AioContexts other than the main one.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/io/channel.h | 42 ++++++++++++++++++++++++++++++++++++++++++
 io/channel-socket.c  | 16 +++++++++++-----
 io/channel-tls.c     | 12 ++++++++++++
 io/channel-watch.c   |  6 ++++++
 io/channel.c         | 25 +++++++++++++++++++++++++
 5 files changed, 96 insertions(+), 5 deletions(-)

diff --git a/include/io/channel.h b/include/io/channel.h
index 32a9470..248bc76 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -23,6 +23,7 @@
 
 #include "qemu-common.h"
 #include "qom/object.h"
+#include "block/aio.h"
 
 #define TYPE_QIO_CHANNEL "qio-channel"
 #define QIO_CHANNEL(obj)                                    \
@@ -58,6 +59,8 @@ typedef gboolean (*QIOChannelFunc)(QIOChannel *ioc,
                                    GIOCondition condition,
                                    gpointer data);
 
+typedef struct QIOChannelRestart QIOChannelRestart;
+
 /**
  * QIOChannel:
  *
@@ -80,6 +83,9 @@ struct QIOChannel {
     Object parent;
     unsigned int features; /* bitmask of QIOChannelFeatures */
     char *name;
+    AioContext *ctx;
+    QIOChannelRestart *read_coroutine;
+    QIOChannelRestart *write_coroutine;
 #ifdef _WIN32
     HANDLE event; /* For use with GSource on Win32 */
 #endif
@@ -132,6 +138,11 @@ struct QIOChannelClass {
                      off_t offset,
                      int whence,
                      Error **errp);
+    void (*io_set_fd_handler)(QIOChannel *ioc,
+                              AioContext *ctx,
+                              IOHandler *io_read,
+                              IOHandler *io_write,
+                              void *opaque);
 };
 
 /* General I/O handling functions */
@@ -497,6 +508,18 @@ guint qio_channel_add_watch(QIOChannel *ioc,
 
 
 /**
+ * qio_channel_set_aio_context:
+ * @ioc: the channel object
+ * @ctx: the #AioContext to set the handlers on
+ *
+ * Request that qio_channel_yield() sets I/O handlers on
+ * the given #AioContext.  If @ctx is %NULL, qio_channel_yield()
+ * uses QEMU's main thread event loop.
+ */
+void qio_channel_set_aio_context(QIOChannel *ioc,
+                                 AioContext *ctx);
+
+/**
  * qio_channel_yield:
  * @ioc: the channel object
  * @condition: the I/O condition to wait for
@@ -525,4 +548,23 @@ void qio_channel_yield(QIOChannel *ioc,
 void qio_channel_wait(QIOChannel *ioc,
                       GIOCondition condition);
 
+/**
+ * qio_channel_set_fd_handler:
+ * @ioc: the channel object
+ * @ctx: the AioContext to set the handlers on
+ * @io_read: the read handler
+ * @io_write: the write handler
+ * @opaque: the opaque value passed to the handler
+ *
+ * This is used internally by qio_channel_yield().  It can
+ * be used by channel implementations to forward the handlers
+ * to another channel (e.g. from #QIOChannelTLS to the
+ * underlying socket).
+ */
+void qio_channel_set_fd_handler(QIOChannel *ioc,
+                                AioContext *ctx,
+                                IOHandler *io_read,
+                                IOHandler *io_write,
+                                void *opaque);
+
 #endif /* QIO_CHANNEL_H */
diff --git a/io/channel-socket.c b/io/channel-socket.c
index d7e03f6..4d5f180 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -661,11 +661,6 @@ qio_channel_socket_set_blocking(QIOChannel *ioc,
         qemu_set_block(sioc->fd);
     } else {
         qemu_set_nonblock(sioc->fd);
-#ifdef WIN32
-        WSAEventSelect(sioc->fd, ioc->event,
-                       FD_READ | FD_ACCEPT | FD_CLOSE |
-                       FD_CONNECT | FD_WRITE | FD_OOB);
-#endif
     }
     return 0;
 }
@@ -745,6 +740,16 @@ qio_channel_socket_shutdown(QIOChannel *ioc,
     return 0;
 }
 
+static GSource *qio_channel_socket_set_fd_handler(QIOChannel *ioc,
+                                                  AioContext *ctx,
+                                                  IOHandler *io_read,
+                                                  IOHandler *io_write,
+                                                  void *opaque)
+{
+    QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+    aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, opaque);
+}
+
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
                                                 GIOCondition condition)
 {
@@ -767,6 +772,7 @@ static void qio_channel_socket_class_init(ObjectClass *klass,
     ioc_klass->io_set_cork = qio_channel_socket_set_cork;
     ioc_klass->io_set_delay = qio_channel_socket_set_delay;
     ioc_klass->io_create_watch = qio_channel_socket_create_watch;
+    ioc_klass->io_set_fd_handler = qio_channel_socket_set_fd_handler;
 }
 
 static const TypeInfo qio_channel_socket_info = {
diff --git a/io/channel-tls.c b/io/channel-tls.c
index d24dc8c..bd2239c 100644
--- a/io/channel-tls.c
+++ b/io/channel-tls.c
@@ -349,6 +349,17 @@ static int qio_channel_tls_close(QIOChannel *ioc,
     return qio_channel_close(tioc->master, errp);
 }
 
+static void qio_channel_tls_set_fd_handler(QIOChannel *ioc,
+                                           AioContext *ctx,
+                                           IOHandler *io_read,
+                                           IOHandler *io_write,
+                                           void *opaque)
+{
+    QIOChannelTLS *tioc = QIO_CHANNEL_TLS(ioc);
+
+    qio_channel_set_fd_handler(tioc->master, ctx, io_read, io_write, opaque);
+}
+
 static GSource *qio_channel_tls_create_watch(QIOChannel *ioc,
                                              GIOCondition condition)
 {
@@ -376,6 +387,7 @@ static void qio_channel_tls_class_init(ObjectClass *klass,
     ioc_klass->io_close = qio_channel_tls_close;
     ioc_klass->io_shutdown = qio_channel_tls_shutdown;
     ioc_klass->io_create_watch = qio_channel_tls_create_watch;
+    ioc_klass->io_set_fd_handler = qio_channel_tls_set_fd_handler;
 }
 
 static const TypeInfo qio_channel_tls_info = {
diff --git a/io/channel-watch.c b/io/channel-watch.c
index cf1cdff..8640d1c 100644
--- a/io/channel-watch.c
+++ b/io/channel-watch.c
@@ -285,6 +285,12 @@ GSource *qio_channel_create_socket_watch(QIOChannel *ioc,
     GSource *source;
     QIOChannelSocketSource *ssource;
 
+#ifdef WIN32
+    WSAEventSelect(socket, ioc->event,
+                   FD_READ | FD_ACCEPT | FD_CLOSE |
+                   FD_CONNECT | FD_WRITE | FD_OOB);
+#endif
+
     source = g_source_new(&qio_channel_socket_source_funcs,
                           sizeof(QIOChannelSocketSource));
     ssource = (QIOChannelSocketSource *)source;
diff --git a/io/channel.c b/io/channel.c
index 80924c1..9ef683c 100644
--- a/io/channel.c
+++ b/io/channel.c
@@ -154,6 +154,17 @@ GSource *qio_channel_create_watch(QIOChannel *ioc,
 }
 
 
+void qio_channel_set_fd_handler(QIOChannel *ioc,
+                                AioContext *ctx,
+                                IOHandler *io_read,
+                                IOHandler *io_write,
+                                void *opaque)
+{
+    QIOChannelClass *klass = QIO_CHANNEL_GET_CLASS(ioc);
+
+    klass->io_set_fd_handler(ioc, ctx, io_read, io_write, opaque);
+}
+
 guint qio_channel_add_watch(QIOChannel *ioc,
                             GIOCondition condition,
                             QIOChannelFunc func,
@@ -244,6 +255,20 @@ static gboolean qio_channel_yield_enter(QIOChannel *ioc,
 }
 
 
+void qio_channel_set_aio_context(QIOChannel *ioc,
+                                 AioContext *ctx)
+{
+    if (ioc->ctx == ctx) {
+        return;
+    }
+
+    qio_channel_set_fd_handler(ioc,
+                               ioc->ctx ? ioc->ctx : iohandler_get_aio_context(),
+                               NULL, NULL, NULL);
+    ioc->ctx = ctx;
+    qio_channel_set_fd_handlers(ioc);
+}
+
 void coroutine_fn qio_channel_yield(QIOChannel *ioc,
                                     GIOCondition condition)
 {
-- 
2.9.3


* [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts
  2016-12-23 18:26 [Qemu-devel] [RFC PATCH 0/3] io/nbd: AioContext support Paolo Bonzini
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext Paolo Bonzini
@ 2016-12-23 18:26 ` Paolo Bonzini
  2017-01-04 17:18   ` Daniel P. Berrange
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 3/3] nbd: do not block on partial reply header reads Paolo Bonzini
  2 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2016-12-23 18:26 UTC (permalink / raw)
  To: qemu-devel

Support separate coroutines for reading and writing, and
restart the coroutine in the AioContext where it was suspended.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/io/channel.h |  7 +++---
 io/channel-socket.c  | 12 +++++-----
 io/channel.c         | 62 ++++++++++++++++++++++++++++++++++------------------
 3 files changed, 50 insertions(+), 31 deletions(-)

diff --git a/include/io/channel.h b/include/io/channel.h
index 248bc76..3030fdb 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -23,6 +23,7 @@
 
 #include "qemu-common.h"
 #include "qom/object.h"
+#include "qemu/coroutine.h"
 #include "block/aio.h"
 
 #define TYPE_QIO_CHANNEL "qio-channel"
@@ -59,8 +60,6 @@ typedef gboolean (*QIOChannelFunc)(QIOChannel *ioc,
                                    GIOCondition condition,
                                    gpointer data);
 
-typedef struct QIOChannelRestart QIOChannelRestart;
-
 /**
  * QIOChannel:
  *
@@ -84,8 +83,8 @@ struct QIOChannel {
     unsigned int features; /* bitmask of QIOChannelFeatures */
     char *name;
     AioContext *ctx;
-    QIOChannelRestart *read_coroutine;
-    QIOChannelRestart *write_coroutine;
+    Coroutine *read_coroutine;
+    Coroutine *write_coroutine;
 #ifdef _WIN32
     HANDLE event; /* For use with GSource on Win32 */
 #endif
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 4d5f180..732ba20 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -740,14 +740,14 @@ qio_channel_socket_shutdown(QIOChannel *ioc,
     return 0;
 }
 
-static GSource *qio_channel_socket_set_fd_handler(QIOChannel *ioc,
-                                                  AioContext *ctx,
-                                                  IOHandler *io_read,
-                                                  IOHandler *io_write,
-                                                  void *opaque)
+static void qio_channel_socket_set_fd_handler(QIOChannel *ioc,
+                                              AioContext *ctx,
+                                              IOHandler *io_read,
+                                              IOHandler *io_write,
+                                              void *opaque)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
-    aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, opaque);
+    aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, NULL, opaque);
 }
 
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
diff --git a/io/channel.c b/io/channel.c
index 9ef683c..d4b3a5a3 100644
--- a/io/channel.c
+++ b/io/channel.c
@@ -21,7 +21,7 @@
 #include "qemu/osdep.h"
 #include "io/channel.h"
 #include "qapi/error.h"
-#include "qemu/coroutine.h"
+#include "qemu/main-loop.h"
 
 bool qio_channel_has_feature(QIOChannel *ioc,
                              QIOChannelFeature feature)
@@ -238,22 +238,43 @@ off_t qio_channel_io_seek(QIOChannel *ioc,
 }
 
 
-typedef struct QIOChannelYieldData QIOChannelYieldData;
-struct QIOChannelYieldData {
-    QIOChannel *ioc;
-    Coroutine *co;
-};
+static void qio_channel_set_fd_handlers(QIOChannel *ioc);
+
+static void qio_channel_restart_read(void *opaque)
+{
+    QIOChannel *ioc = opaque;
+    Coroutine *co = ioc->read_coroutine;
 
+    ioc->read_coroutine = NULL;
+    qio_channel_set_fd_handlers(ioc);
+    aio_co_wake(co);
+}
 
-static gboolean qio_channel_yield_enter(QIOChannel *ioc,
-                                        GIOCondition condition,
-                                        gpointer opaque)
+static void qio_channel_restart_write(void *opaque)
 {
-    QIOChannelYieldData *data = opaque;
-    qemu_coroutine_enter(data->co);
-    return FALSE;
+    QIOChannel *ioc = opaque;
+    Coroutine *co = ioc->write_coroutine;
+
+    ioc->write_coroutine = NULL;
+    qio_channel_set_fd_handlers(ioc);
+    aio_co_wake(co);
 }
 
+static void qio_channel_set_fd_handlers(QIOChannel *ioc)
+{
+    IOHandler *rd_handler = NULL, *wr_handler = NULL;
+
+    if (ioc->read_coroutine) {
+        rd_handler = qio_channel_restart_read;
+    }
+    if (ioc->write_coroutine) {
+        wr_handler = qio_channel_restart_write;
+    }
+
+    qio_channel_set_fd_handler(ioc,
+                               ioc->ctx ? ioc->ctx : iohandler_get_aio_context(),
+                               rd_handler, wr_handler, ioc);
+}
 
 void qio_channel_set_aio_context(QIOChannel *ioc,
                                  AioContext *ctx)
@@ -272,16 +293,15 @@ void qio_channel_set_aio_context(QIOChannel *ioc,
 void coroutine_fn qio_channel_yield(QIOChannel *ioc,
                                     GIOCondition condition)
 {
-    QIOChannelYieldData data;
-
     assert(qemu_in_coroutine());
-    data.ioc = ioc;
-    data.co = qemu_coroutine_self();
-    qio_channel_add_watch(ioc,
-                          condition,
-                          qio_channel_yield_enter,
-                          &data,
-                          NULL);
+    if (condition == G_IO_IN) {
+        ioc->read_coroutine = qemu_coroutine_self();
+    } else if (condition == G_IO_OUT) {
+        ioc->write_coroutine = qemu_coroutine_self();
+    } else {
+        abort();
+    }
+    qio_channel_set_fd_handlers(ioc);
     qemu_coroutine_yield();
 }
 
-- 
2.9.3


* [Qemu-devel] [PATCH 3/3] nbd: do not block on partial reply header reads
  2016-12-23 18:26 [Qemu-devel] [RFC PATCH 0/3] io/nbd: AioContext support Paolo Bonzini
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext Paolo Bonzini
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts Paolo Bonzini
@ 2016-12-23 18:26 ` Paolo Bonzini
  2017-01-04 17:36   ` Eric Blake
  2017-01-16 13:17   ` Stefan Hajnoczi
  2 siblings, 2 replies; 13+ messages in thread
From: Paolo Bonzini @ 2016-12-23 18:26 UTC (permalink / raw)
  To: qemu-devel

Read the replies from a coroutine.  qio_channel_yield is used so that
the right coroutine is restarted automatically, eliminating the need
for send_coroutine in NBDClientSession.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/nbd-client.c | 108 +++++++++++++++++++++--------------------------------
 1 file changed, 42 insertions(+), 66 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 06f1532..a3c2c19 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -33,8 +33,9 @@
 #define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
 #define INDEX_TO_HANDLE(bs, index)  ((index)  ^ ((uint64_t)(intptr_t)bs))
 
-static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
+static void nbd_recv_coroutines_enter_all(BlockDriverState *bs)
 {
+    NBDClientSession *s = nbd_get_client_session(bs);
     int i;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
@@ -42,6 +43,7 @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
             qemu_coroutine_enter(s->recv_coroutine[i]);
         }
     }
+    BDRV_POLL_WHILE(bs, s->read_reply_co);
 }
 
 static void nbd_teardown_connection(BlockDriverState *bs)
@@ -56,7 +58,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     qio_channel_shutdown(client->ioc,
                          QIO_CHANNEL_SHUTDOWN_BOTH,
                          NULL);
-    nbd_recv_coroutines_enter_all(client);
+    nbd_recv_coroutines_enter_all(bs);
 
     nbd_client_detach_aio_context(bs);
     object_unref(OBJECT(client->sioc));
@@ -65,54 +67,34 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     client->ioc = NULL;
 }
 
-static void nbd_reply_ready(void *opaque)
+static void nbd_read_reply_entry(void *opaque)
 {
-    BlockDriverState *bs = opaque;
-    NBDClientSession *s = nbd_get_client_session(bs);
+    NBDClientSession *s = opaque;
     uint64_t i;
     int ret;
 
-    if (!s->ioc) { /* Already closed */
-        return;
-    }
-
-    if (s->reply.handle == 0) {
-        /* No reply already in flight.  Fetch a header.  It is possible
-         * that another thread has done the same thing in parallel, so
-         * the socket is not readable anymore.
-         */
+    for (;;) {
+        assert(s->reply.handle == 0);
         ret = nbd_receive_reply(s->ioc, &s->reply);
-        if (ret == -EAGAIN) {
-            return;
-        }
         if (ret < 0) {
-            s->reply.handle = 0;
-            goto fail;
+            break;
         }
-    }
-
-    /* There's no need for a mutex on the receive side, because the
-     * handler acts as a synchronization point and ensures that only
-     * one coroutine is called until the reply finishes.  */
-    i = HANDLE_TO_INDEX(s, s->reply.handle);
-    if (i >= MAX_NBD_REQUESTS) {
-        goto fail;
-    }
 
-    if (s->recv_coroutine[i]) {
-        qemu_coroutine_enter(s->recv_coroutine[i]);
-        return;
-    }
-
-fail:
-    nbd_teardown_connection(bs);
-}
+        /* There's no need for a mutex on the receive side, because the
+         * handler acts as a synchronization point and ensures that only
+         * one coroutine is called until the reply finishes.
+         */
+        i = HANDLE_TO_INDEX(s, s->reply.handle);
+        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
+            break;
+        }
 
-static void nbd_restart_write(void *opaque)
-{
-    BlockDriverState *bs = opaque;
+        aio_co_wake(s->recv_coroutine[i]);
 
-    qemu_coroutine_enter(nbd_get_client_session(bs)->send_coroutine);
+        /* We're woken up by the recv_coroutine itself.  */
+        qemu_coroutine_yield();
+    }
+    s->read_reply_co = NULL;
 }
 
 static int nbd_co_send_request(BlockDriverState *bs,
@@ -120,7 +102,6 @@ static int nbd_co_send_request(BlockDriverState *bs,
                                QEMUIOVector *qiov)
 {
     NBDClientSession *s = nbd_get_client_session(bs);
-    AioContext *aio_context;
     int rc, ret, i;
 
     qemu_co_mutex_lock(&s->send_mutex);
@@ -141,11 +122,6 @@ static int nbd_co_send_request(BlockDriverState *bs,
         return -EPIPE;
     }
 
-    s->send_coroutine = qemu_coroutine_self();
-    aio_context = bdrv_get_aio_context(bs);
-
-    aio_set_fd_handler(aio_context, s->sioc->fd, false,
-                       nbd_reply_ready, nbd_restart_write, NULL, bs);
     if (qiov) {
         qio_channel_set_cork(s->ioc, true);
         rc = nbd_send_request(s->ioc, request);
@@ -160,9 +136,6 @@ static int nbd_co_send_request(BlockDriverState *bs,
     } else {
         rc = nbd_send_request(s->ioc, request);
     }
-    aio_set_fd_handler(aio_context, s->sioc->fd, false,
-                       nbd_reply_ready, NULL, NULL, bs);
-    s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
 }
@@ -174,8 +147,7 @@ static void nbd_co_receive_reply(NBDClientSession *s,
 {
     int ret;
 
-    /* Wait until we're woken up by the read handler.  TODO: perhaps
-     * peek at the next reply and avoid yielding if it's ours?  */
+    /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
     *reply = s->reply;
     if (reply->handle != request->handle ||
@@ -209,14 +181,18 @@ static void nbd_coroutine_start(NBDClientSession *s,
     /* s->recv_coroutine[i] is set as soon as we get the send_lock.  */
 }
 
-static void nbd_coroutine_end(NBDClientSession *s,
+static void nbd_coroutine_end(BlockDriverState *bs,
                               NBDRequest *request)
 {
+    NBDClientSession *s = nbd_get_client_session(bs);
     int i = HANDLE_TO_INDEX(s, request->handle);
+
     s->recv_coroutine[i] = NULL;
-    if (s->in_flight-- == MAX_NBD_REQUESTS) {
-        qemu_co_queue_next(&s->free_sema);
-    }
+
+    /* Kick the read_reply_co to get the next reply.  */
+    aio_co_wake(s->read_reply_co);
+    s->in_flight--;
+    qemu_co_queue_next(&s->free_sema);
 }
 
 int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
@@ -241,7 +217,7 @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, qiov);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -271,7 +247,7 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -306,7 +282,7 @@ int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -331,7 +307,7 @@ int nbd_client_co_flush(BlockDriverState *bs)
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
 
@@ -357,23 +333,23 @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int count)
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(client, &request);
+    nbd_coroutine_end(bs, &request);
     return -reply.error;
 
 }
 
 void nbd_client_detach_aio_context(BlockDriverState *bs)
 {
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       nbd_get_client_session(bs)->sioc->fd,
-                       false, NULL, NULL, NULL, NULL);
+    NBDClientSession *client = nbd_get_client_session(bs);
+    qio_channel_set_aio_context(QIO_CHANNEL(client->sioc), NULL);
 }
 
 void nbd_client_attach_aio_context(BlockDriverState *bs,
                                    AioContext *new_context)
 {
-    aio_set_fd_handler(new_context, nbd_get_client_session(bs)->sioc->fd,
-                       false, nbd_reply_ready, NULL, NULL, bs);
+    NBDClientSession *client = nbd_get_client_session(bs);
+    qio_channel_set_aio_context(QIO_CHANNEL(client->sioc), new_context);
+    aio_co_schedule(new_context, client->read_reply_co);
 }
 
 void nbd_client_close(BlockDriverState *bs)
@@ -434,7 +410,7 @@ int nbd_client_init(BlockDriverState *bs,
     /* Now that we're connected, set the socket to be non-blocking and
      * kick the reply mechanism.  */
     qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
-
+    client->read_reply_co = qemu_coroutine_create(nbd_read_reply_entry, client);
     nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
 
     logout("Established connection with NBD server\n");
-- 
2.9.3


* Re: [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext Paolo Bonzini
@ 2017-01-04 16:45   ` Eric Blake
  2017-01-04 16:56     ` Paolo Bonzini
  2017-01-04 17:14   ` Daniel P. Berrange
  1 sibling, 1 reply; 13+ messages in thread
From: Eric Blake @ 2017-01-04 16:45 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel

On 12/23/2016 12:26 PM, Paolo Bonzini wrote:
> This is in preparation for making qio_channel_yield work on
> AioContexts other than the main one.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  include/io/channel.h | 42 ++++++++++++++++++++++++++++++++++++++++++
>  io/channel-socket.c  | 16 +++++++++++-----
>  io/channel-tls.c     | 12 ++++++++++++
>  io/channel-watch.c   |  6 ++++++
>  io/channel.c         | 25 +++++++++++++++++++++++++
>  5 files changed, 96 insertions(+), 5 deletions(-)
> 

> +++ b/io/channel-socket.c
> @@ -661,11 +661,6 @@ qio_channel_socket_set_blocking(QIOChannel *ioc,
>          qemu_set_block(sioc->fd);
>      } else {
>          qemu_set_nonblock(sioc->fd);
> -#ifdef WIN32
> -        WSAEventSelect(sioc->fd, ioc->event,
> -                       FD_READ | FD_ACCEPT | FD_CLOSE |
> -                       FD_CONNECT | FD_WRITE | FD_OOB);
> -#endif
>      }
>      return 0;

How does this hunk fit in?

> +++ b/io/channel-watch.c
> @@ -285,6 +285,12 @@ GSource *qio_channel_create_socket_watch(QIOChannel *ioc,
>      GSource *source;
>      QIOChannelSocketSource *ssource;
>  
> +#ifdef WIN32
> +    WSAEventSelect(socket, ioc->event,
> +                   FD_READ | FD_ACCEPT | FD_CLOSE |
> +                   FD_CONNECT | FD_WRITE | FD_OOB);
> +#endif
> +

Is it that you're moving it, now that the ability to associate handlers
lets you have more fine-grained control over when to adjust the properties?

Otherwise the patch makes sense to me.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org




* Re: [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext
  2017-01-04 16:45   ` Eric Blake
@ 2017-01-04 16:56     ` Paolo Bonzini
  0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2017-01-04 16:56 UTC (permalink / raw)
  To: Eric Blake, qemu-devel



On 04/01/2017 17:45, Eric Blake wrote:
>> -#ifdef WIN32
>> -        WSAEventSelect(sioc->fd, ioc->event,
>> -                       FD_READ | FD_ACCEPT | FD_CLOSE |
>> -                       FD_CONNECT | FD_WRITE | FD_OOB);
>> -#endif
>>      }
>>      return 0;
> How does this hunk fit in?
> 
>> +++ b/io/channel-watch.c
>> @@ -285,6 +285,12 @@ GSource *qio_channel_create_socket_watch(QIOChannel *ioc,
>>      GSource *source;
>>      QIOChannelSocketSource *ssource;
>>  
>> +#ifdef WIN32
>> +    WSAEventSelect(socket, ioc->event,
>> +                   FD_READ | FD_ACCEPT | FD_CLOSE |
>> +                   FD_CONNECT | FD_WRITE | FD_OOB);
>> +#endif
>> +
> Is it that you're moving it, now that the ability to associate handlers
> lets you have more fine-grained control over when to adjust the properties?

Yes, I'm setting the event here because aio_set_fd_handler can also call
WSAEventSelect, and only one event can be in use at a time.  So you
cannot use qio_channel_create_socket_watch and qio_channel_yield at the
same time, but you can switch from one to the other.

Paolo


* Re: [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 1/3] io: add methods to set I/O handlers on AioContext Paolo Bonzini
  2017-01-04 16:45   ` Eric Blake
@ 2017-01-04 17:14   ` Daniel P. Berrange
  1 sibling, 0 replies; 13+ messages in thread
From: Daniel P. Berrange @ 2017-01-04 17:14 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Fri, Dec 23, 2016 at 07:26:39PM +0100, Paolo Bonzini wrote:
> This is in preparation for making qio_channel_yield work on
> AioContexts other than the main one.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  include/io/channel.h | 42 ++++++++++++++++++++++++++++++++++++++++++
>  io/channel-socket.c  | 16 +++++++++++-----
>  io/channel-tls.c     | 12 ++++++++++++
>  io/channel-watch.c   |  6 ++++++
>  io/channel.c         | 25 +++++++++++++++++++++++++
>  5 files changed, 96 insertions(+), 5 deletions(-)
> 

>  /**
> + * qio_channel_set_aio_context:
> + * @ioc: the channel object
> + * @ctx: the #AioContext to set the handlers on
> + *
> + * Request that qio_channel_yield() sets I/O handlers on
> + * the given #AioContext.  If @ctx is %NULL, qio_channel_yield()
> + * uses QEMU's main thread event loop.
> + */
> +void qio_channel_set_aio_context(QIOChannel *ioc,
> +                                 AioContext *ctx);
> +
> +/**
>   * qio_channel_yield:
>   * @ioc: the channel object
>   * @condition: the I/O condition to wait for
> @@ -525,4 +548,23 @@ void qio_channel_yield(QIOChannel *ioc,
>  void qio_channel_wait(QIOChannel *ioc,
>                        GIOCondition condition);
>  
> +/**
> + * qio_channel_set_fd_handler:
> + * @ioc: the channel object
> + * @ctx: the AioContext to set the handlers on
> + * @io_read: the read handler
> + * @io_write: the write handler
> + * @opaque: the opaque value passed to the handler
> + *
> + * This is used internally by qio_channel_yield().  It can
> + * be used by channel implementations to forward the handlers
> + * to another channel (e.g. from #QIOChannelTLS to the
> + * underlying socket).
> + */
> +void qio_channel_set_fd_handler(QIOChannel *ioc,
> +                                AioContext *ctx,
> +                                IOHandler *io_read,
> +                                IOHandler *io_write,
> +                                void *opaque);

nitpick - could we call it '...set_aio_fd_handler' or '...set_aio_handler'?

> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index d7e03f6..4d5f180 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c

> diff --git a/io/channel.c b/io/channel.c
> index 80924c1..9ef683c 100644
> --- a/io/channel.c
> +++ b/io/channel.c

> @@ -244,6 +255,20 @@ static gboolean qio_channel_yield_enter(QIOChannel *ioc,
>  }
>  
>  
> +void qio_channel_set_aio_context(QIOChannel *ioc,
> +                                 AioContext *ctx)
> +{
> +    if (ioc->ctx == ctx) {
> +        return;
> +    }
> +
> +    qio_channel_set_fd_handler(ioc,
> +                               ioc->ctx ? ioc->ctx : iohandler_get_aio_context(),

Max line length exceeded.

> +                               NULL, NULL, NULL);
> +    ioc->ctx = ctx;
> +    qio_channel_set_fd_handlers(ioc);

This method does not exist in this patch - only added by the next
patch.


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts Paolo Bonzini
@ 2017-01-04 17:18   ` Daniel P. Berrange
  2017-01-04 21:26     ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel P. Berrange @ 2017-01-04 17:18 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Fri, Dec 23, 2016 at 07:26:40PM +0100, Paolo Bonzini wrote:
> Support separate coroutines for reading and writing, and
> restart the coroutine in the AioContext where it was suspended.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  include/io/channel.h |  7 +++---
>  io/channel-socket.c  | 12 +++++-----
>  io/channel.c         | 62 ++++++++++++++++++++++++++++++++++------------------
>  3 files changed, 50 insertions(+), 31 deletions(-)
> 
> diff --git a/include/io/channel.h b/include/io/channel.h
> index 248bc76..3030fdb 100644
> --- a/include/io/channel.h
> +++ b/include/io/channel.h
> @@ -23,6 +23,7 @@
>  
>  #include "qemu-common.h"
>  #include "qom/object.h"
> +#include "qemu/coroutine.h"
>  #include "block/aio.h"
>  
>  #define TYPE_QIO_CHANNEL "qio-channel"
> @@ -59,8 +60,6 @@ typedef gboolean (*QIOChannelFunc)(QIOChannel *ioc,
>                                     GIOCondition condition,
>                                     gpointer data);
>  
> -typedef struct QIOChannelRestart QIOChannelRestart;
> -
>  /**
>   * QIOChannel:
>   *
> @@ -84,8 +83,8 @@ struct QIOChannel {
>      unsigned int features; /* bitmask of QIOChannelFeatures */
>      char *name;
>      AioContext *ctx;
> -    QIOChannelRestart *read_coroutine;
> -    QIOChannelRestart *write_coroutine;
> +    Coroutine *read_coroutine;
> +    Coroutine *write_coroutine;

Seems this ought to have been squashed into the previous patch

> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index 4d5f180..732ba20 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c
> @@ -740,14 +740,14 @@ qio_channel_socket_shutdown(QIOChannel *ioc,
>      return 0;
>  }
>  
> -static GSource *qio_channel_socket_set_fd_handler(QIOChannel *ioc,
> -                                                  AioContext *ctx,
> -                                                  IOHandler *io_read,
> -                                                  IOHandler *io_write,
> -                                                  void *opaque)
> +static void qio_channel_socket_set_fd_handler(QIOChannel *ioc,
> +                                              AioContext *ctx,
> +                                              IOHandler *io_read,
> +                                              IOHandler *io_write,
> +                                              void *opaque)
>  {
>      QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> -    aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, opaque);
> +    aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, NULL, opaque);
>  }

And this ought to be in previous patch  too.

>  
> diff --git a/io/channel.c b/io/channel.c
> index 9ef683c..d4b3a5a3 100644
> --- a/io/channel.c
> +++ b/io/channel.c
> @@ -21,7 +21,7 @@
>  #include "qemu/osdep.h"
>  #include "io/channel.h"
>  #include "qapi/error.h"
> -#include "qemu/coroutine.h"
> +#include "qemu/main-loop.h"
>  
>  bool qio_channel_has_feature(QIOChannel *ioc,
>                               QIOChannelFeature feature)
> @@ -238,22 +238,43 @@ off_t qio_channel_io_seek(QIOChannel *ioc,
>  }
>  
>  
> -typedef struct QIOChannelYieldData QIOChannelYieldData;
> -struct QIOChannelYieldData {
> -    QIOChannel *ioc;
> -    Coroutine *co;
> -};
> +static void qio_channel_set_fd_handlers(QIOChannel *ioc);
> +
> +static void qio_channel_restart_read(void *opaque)
> +{
> +    QIOChannel *ioc = opaque;
> +    Coroutine *co = ioc->read_coroutine;
>  
> +    ioc->read_coroutine = NULL;
> +    qio_channel_set_fd_handlers(ioc);
> +    aio_co_wake(co);
> +}
>  
> -static gboolean qio_channel_yield_enter(QIOChannel *ioc,
> -                                        GIOCondition condition,
> -                                        gpointer opaque)
> +static void qio_channel_restart_write(void *opaque)
>  {
> -    QIOChannelYieldData *data = opaque;
> -    qemu_coroutine_enter(data->co);
> -    return FALSE;
> +    QIOChannel *ioc = opaque;
> +    Coroutine *co = ioc->write_coroutine;
> +
> +    ioc->write_coroutine = NULL;
> +    qio_channel_set_fd_handlers(ioc);
> +    aio_co_wake(co);
>  }
>  
> +static void qio_channel_set_fd_handlers(QIOChannel *ioc)
> +{
> +    IOHandler *rd_handler = NULL, *wr_handler = NULL;
> +
> +    if (ioc->read_coroutine) {
> +        rd_handler = qio_channel_restart_read;
> +    }
> +    if (ioc->write_coroutine) {
> +        wr_handler = qio_channel_restart_write;
> +    }
> +
> +    qio_channel_set_fd_handler(ioc,
> +                               ioc->ctx ? ioc->ctx : iohandler_get_aio_context(),
> +                               rd_handler, wr_handler, ioc);
> +}

ioc->read_coroutine & ioc->write_coroutine can only be non-NULL while
a caller is inside qio_channel_yield(). So it seems that calling
qio_channel_set_fd_handlers() from the qio_channel_set_aio_context()
method in the previous patch is not required, as those two callback
pointers will always be NULL.

> @@ -272,16 +293,15 @@ void qio_channel_set_aio_context(QIOChannel *ioc,
>  void coroutine_fn qio_channel_yield(QIOChannel *ioc,
>                                      GIOCondition condition)
>  {
> -    QIOChannelYieldData data;
> -
>      assert(qemu_in_coroutine());
> -    data.ioc = ioc;
> -    data.co = qemu_coroutine_self();
> -    qio_channel_add_watch(ioc,
> -                          condition,
> -                          qio_channel_yield_enter,
> -                          &data,
> -                          NULL);
> +    if (condition == G_IO_IN) {
> +        ioc->read_coroutine = qemu_coroutine_self();
> +    } else if (condition == G_IO_OUT) {
> +        ioc->write_coroutine = qemu_coroutine_self();
> +    } else {
> +        abort();
> +    }

Do we really need this to be an either/or/abort? It looks like
the qio_channel_set_fd_handlers() method is happy to have both
read_coroutine & write_coroutine set.

If it does need to be exclusive though, can you update the API
docs for this method to mention that.

> +    qio_channel_set_fd_handlers(ioc);
>      qemu_coroutine_yield();
>  }

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH 3/3] nbd: do not block on partial reply header reads
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 3/3] nbd: do not block on partial reply header reads Paolo Bonzini
@ 2017-01-04 17:36   ` Eric Blake
  2017-01-16 13:17   ` Stefan Hajnoczi
  1 sibling, 0 replies; 13+ messages in thread
From: Eric Blake @ 2017-01-04 17:36 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel

On 12/23/2016 12:26 PM, Paolo Bonzini wrote:
> Read the replies from a coroutine.  qio_channel_yield is used so that
> the right coroutine is restarted automatically, eliminating the need
> for send_coroutine in NBDClientSession.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  block/nbd-client.c | 108 +++++++++++++++++++++--------------------------------
>  1 file changed, 42 insertions(+), 66 deletions(-)
> 

Looks like a nice simplification. I'm not quite confident enough of my
understanding of coroutines to give R-b yet, but I can certainly try and
play with the patches to provide some test results (especially since
this series still depends on others being reviewed).

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org




* Re: [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts
  2017-01-04 17:18   ` Daniel P. Berrange
@ 2017-01-04 21:26     ` Paolo Bonzini
  2017-01-05 10:26       ` Daniel P. Berrange
  0 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2017-01-04 21:26 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: qemu-devel

> > @@ -84,8 +83,8 @@ struct QIOChannel {
> >      unsigned int features; /* bitmask of QIOChannelFeatures */
> >      char *name;
> >      AioContext *ctx;
> > -    QIOChannelRestart *read_coroutine;
> > -    QIOChannelRestart *write_coroutine;
> > +    Coroutine *read_coroutine;
> > +    Coroutine *write_coroutine;
> 
> Seems this ought to have been squashed into the previous patch

Yeah, sent this last thing before Christmas and it shows. :)

> > +static void qio_channel_set_fd_handlers(QIOChannel *ioc)
> > +{
> > +    IOHandler *rd_handler = NULL, *wr_handler = NULL;
> > +
> > +    if (ioc->read_coroutine) {
> > +        rd_handler = qio_channel_restart_read;
> > +    }
> > +    if (ioc->write_coroutine) {
> > +        wr_handler = qio_channel_restart_write;
> > +    }
> > +
> > +    qio_channel_set_fd_handler(ioc,
> > +                               ioc->ctx ? ioc->ctx : iohandler_get_aio_context(),
> > +                               rd_handler, wr_handler, ioc);
> > +}
> 
> ioc->read_coroutine & ioc->write_coroutine can only be non-NULL while
> a caller is inside qio_channel_yield(). So it seems that calling
> qio_channel_set_fd_handlers() from the qio_channel_set_aio_context()
> method in the previous patch is not required, as those two callback
> pointers will always be NULL.

Not necessarily.  You can have one coroutine calling qio_channel_yield(),
and then the non-coroutine code can call qio_channel_set_aio_context()
before the coroutine reenters.

This actually happens in the next patch.  When the NBD socket is quiescent
and no response is in flight, such as during a bdrv_drain_begin/end()
section, the "coroutine that receives NBD headers" has yielded.  This
is also the time when set_aio_context can be called.
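
For instance, the non-coroutine side in patch 3 boils down to this
(sketch, reusing the names from that patch):

static void attach_aio_context_sketch(BlockDriverState *bs,
                                      AioContext *new_context)
{
    NBDClientSession *client = nbd_get_client_session(bs);

    /* read_reply_co is parked in qio_channel_yield(..., G_IO_IN);
     * this re-arms its read handler on new_context... */
    qio_channel_set_aio_context(QIO_CHANNEL(client->sioc), new_context);
    /* ...and this makes the coroutine resume in the new context. */
    aio_co_schedule(new_context, client->read_reply_co);
}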

> > +    if (condition == G_IO_IN) {
> > +        ioc->read_coroutine = qemu_coroutine_self();
> > +    } else if (condition == G_IO_OUT) {
> > +        ioc->write_coroutine = qemu_coroutine_self();
> > +    } else {
> > +        abort();
> > +    }
> 
> > Do we really need this to be an either/or/abort? It looks like
> > the qio_channel_set_fd_handlers() method is happy to have both
> read_coroutine & write_coroutine set.

The idea is that this would be called by a coroutine after a
recv or send that returns EAGAIN (with G_IO_IN for recv and
G_IO_OUT for send).  If not exclusive, you'd have to check
for ioc->read_coroutine == ioc->write_coroutine in the handler.
Not a big deal, I can do it, but it adds an edge case and I
didn't see a use for it.
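
In other words, the expected pattern is roughly this (sketch;
read_some is a made-up name, the channel calls are the existing API):

static ssize_t coroutine_fn read_some(QIOChannel *ioc,
                                      const struct iovec *iov,
                                      size_t niov)
{
    ssize_t ret;

    for (;;) {
        ret = qio_channel_readv(ioc, iov, niov, NULL);
        if (ret != QIO_CHANNEL_ERR_BLOCK) {
            return ret; /* data, EOF, or a real error */
        }
        /* recv would block: park as this channel's reader. */
        qio_channel_yield(ioc, G_IO_IN);
    }
}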

> If it does need to be exclusive though, can you update the API
> docs for this method to mention that.

Sure.

Thanks for the speedy review!

Paolo


* Re: [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts
  2017-01-04 21:26     ` Paolo Bonzini
@ 2017-01-05 10:26       ` Daniel P. Berrange
  2017-01-05 10:46         ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel P. Berrange @ 2017-01-05 10:26 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Wed, Jan 04, 2017 at 04:26:24PM -0500, Paolo Bonzini wrote:
> > > @@ -84,8 +83,8 @@ struct QIOChannel {
> > > +static void qio_channel_set_fd_handlers(QIOChannel *ioc)
> > > +{
> > > +    IOHandler *rd_handler = NULL, *wr_handler = NULL;
> > > +
> > > +    if (ioc->read_coroutine) {
> > > +        rd_handler = qio_channel_restart_read;
> > > +    }
> > > +    if (ioc->write_coroutine) {
> > > +        wr_handler = qio_channel_restart_write;
> > > +    }
> > > +
> > > +    qio_channel_set_fd_handler(ioc,
> > > +                               ioc->ctx ? ioc->ctx : iohandler_get_aio_context(),
> > > +                               rd_handler, wr_handler, ioc);
> > > +}
> > 
> > ioc->read_coroutine & ioc->write_coroutine can only be non-NULL while
> > a caller is inside qio_channel_yield(). So it seems that calling
> > qio_channel_set_fd_handlers() from the qio_channel_set_aio_context()
> > method in the previous patch is not required, as those two callback
> > pointers will always be NULL.
> 
> Not necessarily.  You can have one coroutine calling qio_channel_yield(),
> and then the non-coroutine code can call qio_channel_set_aio_context()
> before the coroutine reenters.
> 
> This actually happens in the next patch.  When the NBD socket is quiescent
> and no response is in flight, such as during a bdrv_drain_begin/end()
> section, the "coroutine that receives NBD headers" has yielded.  This
> is also the time when set_aio_context can be called.

Ok, that's a little surprising :-) Can you document on qio_channel_yield
that it's permitted to yield and set an aio context while waiting.

> > > +    if (condition == G_IO_IN) {
> > > +        ioc->read_coroutine = qemu_coroutine_self();
> > > +    } else if (condition == G_IO_OUT) {
> > > +        ioc->write_coroutine = qemu_coroutine_self();
> > > +    } else {
> > > +        abort();
> > > +    }
> > 
> > > Do we really need this to be an either/or/abort? It looks like
> > > the qio_channel_set_fd_handlers() method is happy to have both
> > read_coroutine & write_coroutine set.
> 
> The idea is that this would be called by a coroutine after a
> recv or send that returns EAGAIN (with G_IO_IN for recv and
> G_IO_OUT for send).  If not exclusive, you'd have to check
> for ioc->read_coroutine == ioc->write_coroutine in the handler.
> Not a big deal, I can do it, but it adds an edge case and I
> didn't see a use for it.

Yep, it feels unlikely. That said, it looks similar to the case
where you have two coroutines using the same channel, and one
does a yield(G_IO_IN) and the other does a yield(G_IO_OUT) while
the first is still waiting, which feels like a more plausible
scenario that could actually happen. So perhaps we do need to
consider it.

> > If it does need to be exclusive though, can you update the API
> > docs for this method to mention that.
> 
> Sure.
> 
> Thanks for the speedy review!
> 
> Paolo

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH 2/3] io: make qio_channel_yield aware of AioContexts
  2017-01-05 10:26       ` Daniel P. Berrange
@ 2017-01-05 10:46         ` Paolo Bonzini
  0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2017-01-05 10:46 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: qemu-devel



On 05/01/2017 11:26, Daniel P. Berrange wrote:
>>>> +    if (condition == G_IO_IN) {
>>>> +        ioc->read_coroutine = qemu_coroutine_self();
>>>> +    } else if (condition == G_IO_OUT) {
>>>> +        ioc->write_coroutine = qemu_coroutine_self();
>>>> +    } else {
>>>> +        abort();
>>>> +    }
>>> Do we really need this to be an either/or/abort? It looks like
>>> the qio_channel_set_fd_handlers() method is happy to have both
>>> read_coroutine & write_coroutine set.
>> The idea is that this would be called by a coroutine after a
>> recv or send that returns EAGAIN (with G_IO_IN for recv and
>> G_IO_OUT for send).  If not exclusive, you'd have to check
>> for ioc->read_coroutine == ioc->write_coroutine in the handler.
>> Not a big deal, I can do it, but it adds an edge case and I
>> didn't see a use for it.
> 
> Yep, it feels unlikely. That said, it looks similar to the case
> where you have two coroutines using the same channel, and one
> does a yield(G_IO_IN) and the other does a yield(G_IO_OUT) while
> the first is still waiting, which feels like a more plausible
> scenario that could actually happen. So perhaps we do need to
> consider it.

Yes, it is possible to have both conditions active at the same time
for two different coroutines, and it does happen.  As in the
set_aio_context case, NBD has G_IO_IN pretty much always active to poll
for responses, and meanwhile a coroutine might send a request
and activate G_IO_OUT polling.  In this case, however, there would be
two qio_channel_yield calls.

The unlikely part is when a single coroutine wants to poll both conditions.

One improvement I can do here is to assert !ioc->read_coroutine if
condition is G_IO_IN and !ioc->write_coroutine if condition is G_IO_OUT.
If this is not respected, the coroutine that yielded first would never
be called back again!
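
Concretely, qio_channel_yield would become (sketch: the body from
patch 2 plus the new assertions):

void coroutine_fn qio_channel_yield(QIOChannel *ioc,
                                    GIOCondition condition)
{
    assert(qemu_in_coroutine());
    if (condition == G_IO_IN) {
        assert(!ioc->read_coroutine);
        ioc->read_coroutine = qemu_coroutine_self();
    } else if (condition == G_IO_OUT) {
        assert(!ioc->write_coroutine);
        ioc->write_coroutine = qemu_coroutine_self();
    } else {
        abort();
    }
    qio_channel_set_fd_handlers(ioc);
    qemu_coroutine_yield();
}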

In any case, I'm happy that overall the API and the implementation look
sane to you.

Paolo


* Re: [Qemu-devel] [PATCH 3/3] nbd: do not block on partial reply header reads
  2016-12-23 18:26 ` [Qemu-devel] [PATCH 3/3] nbd: do not block on partial reply header reads Paolo Bonzini
  2017-01-04 17:36   ` Eric Blake
@ 2017-01-16 13:17   ` Stefan Hajnoczi
  1 sibling, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2017-01-16 13:17 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On Fri, Dec 23, 2016 at 07:26:41PM +0100, Paolo Bonzini wrote:
> @@ -65,54 +67,34 @@ static void nbd_teardown_connection(BlockDriverState *bs)
>      client->ioc = NULL;
>  }
>  
> -static void nbd_reply_ready(void *opaque)
> +static void nbd_read_reply_entry(void *opaque)

Please use the coroutine_fn attribute on all functions that must be
called from coroutine context.
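
i.e. something like this (sketch):

static coroutine_fn void nbd_read_reply_entry(void *opaque)
{
    /* body unchanged from the patch */
}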


