* [Qemu-devel] [PATCH v3 00/11] NBD reconnect
@ 2018-06-09 15:32 Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 01/11] block/nbd-client: split channel errors from export errors Vladimir Sementsov-Ogievskiy
                   ` (11 more replies)
  0 siblings, 12 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

Hi all.

Here is NBD reconnect.
The feature is realized inside the nbd-client driver and works as follows:

There are two parameters: reconnect-attempts and reconnect-timeout.
We will try to reconnect if the initial connection fails or if an
established connection is lost. All current and new I/O operations
will wait until we have made reconnect-attempts attempts to reconnect.
After that, all requests will fail with EIO, but we will keep trying
to reconnect.

v3:
06: fix build error in function 'nbd_co_send_request':
     error: 'i' may be used uninitialized in this function

v2 notes:
Here is v2 of NBD reconnect. It is very different from v1, so forget
about v1.
The series includes my "NBD reconnect: preliminary refactoring" series,
with a change in 05: leave asserts (per Eric).

Vladimir Sementsov-Ogievskiy (11):
  block/nbd-client: split channel errors from export errors
  block/nbd: move connection code from block/nbd to block/nbd-client
  block/nbd-client: split connection from initialization
  block/nbd-client: fix nbd_reply_chunk_iter_receive
  block/nbd-client: don't check ioc
  block/nbd-client: move from quit to state
  block/nbd-client: rename read_reply_co to connection_co
  block/nbd-client: move connecting to connection_co
  block/nbd: add cmdline and qapi parameters for nbd reconnect
  block/nbd-client: nbd reconnect
  iotests: test nbd reconnect

 qapi/block-core.json          |  12 +-
 block/nbd-client.h            |  23 ++-
 block/nbd-client.c            | 429 ++++++++++++++++++++++++++++++------------
 block/nbd.c                   |  61 +++---
 tests/qemu-iotests/220        |  68 +++++++
 tests/qemu-iotests/220.out    |   7 +
 tests/qemu-iotests/group      |   1 +
 tests/qemu-iotests/iotests.py |   4 +
 8 files changed, 445 insertions(+), 160 deletions(-)
 create mode 100755 tests/qemu-iotests/220
 create mode 100644 tests/qemu-iotests/220.out

-- 
2.11.1


* [Qemu-devel] [PATCH v3 01/11] block/nbd-client: split channel errors from export errors
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-07-20 20:14   ` Eric Blake
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 02/11] block/nbd: move connection code from block/nbd to block/nbd-client Vladimir Sementsov-Ogievskiy
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

To implement NBD reconnect in further patches, we need to distinguish
error codes returned by the NBD server from channel errors, so that we
reconnect only in the latter case.
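
The resulting calling convention, condensed from the hunks below, is
that a negative return value means a channel problem, while an error
reported by the server travels separately through *request_ret; only
the former will later trigger a reconnect:

    int ret, request_ret;

    ret = nbd_co_receive_return_code(client, request->handle,
                                     &request_ret, &local_err);
    /* ret < 0:         channel error -> candidate for reconnect later */
    /* request_ret < 0: server returned an error -> just fail the I/O  */
    return ret ? ret : request_ret;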

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 83 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 47 insertions(+), 36 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 8d69eaaa32..9b9a82fef1 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -501,11 +501,11 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
  */
 static coroutine_fn int nbd_co_receive_one_chunk(
         NBDClientSession *s, uint64_t handle, bool only_structured,
-        QEMUIOVector *qiov, NBDReply *reply, void **payload, Error **errp)
+        int *request_ret, QEMUIOVector *qiov, NBDReply *reply, void **payload,
+        Error **errp)
 {
-    int request_ret;
     int ret = nbd_co_do_receive_one_chunk(s, handle, only_structured,
-                                          &request_ret, qiov, payload, errp);
+                                          request_ret, qiov, payload, errp);
 
     if (ret < 0) {
         s->quit = true;
@@ -515,7 +515,6 @@ static coroutine_fn int nbd_co_receive_one_chunk(
             *reply = s->reply;
         }
         s->reply.handle = 0;
-        ret = request_ret;
     }
 
     if (s->read_reply_co) {
@@ -527,22 +526,17 @@ static coroutine_fn int nbd_co_receive_one_chunk(
 
 typedef struct NBDReplyChunkIter {
     int ret;
-    bool fatal;
+    int request_ret;
     Error *err;
     bool done, only_structured;
 } NBDReplyChunkIter;
 
-static void nbd_iter_error(NBDReplyChunkIter *iter, bool fatal,
-                           int ret, Error **local_err)
+static void nbd_iter_channel_error(NBDReplyChunkIter *iter,
+                                   int ret, Error **local_err)
 {
     assert(ret < 0);
 
-    if ((fatal && !iter->fatal) || iter->ret == 0) {
-        if (iter->ret != 0) {
-            error_free(iter->err);
-            iter->err = NULL;
-        }
-        iter->fatal = fatal;
+    if (!iter->ret) {
         iter->ret = ret;
         error_propagate(&iter->err, *local_err);
     } else {
@@ -552,6 +546,15 @@ static void nbd_iter_error(NBDReplyChunkIter *iter, bool fatal,
     *local_err = NULL;
 }
 
+static void nbd_iter_request_error(NBDReplyChunkIter *iter, int ret)
+{
+    assert(ret < 0);
+
+    if (!iter->request_ret) {
+        iter->request_ret = ret;
+    }
+}
+
 /* NBD_FOREACH_REPLY_CHUNK
  */
 #define NBD_FOREACH_REPLY_CHUNK(s, iter, handle, structured, \
@@ -567,13 +570,13 @@ static bool nbd_reply_chunk_iter_receive(NBDClientSession *s,
                                          QEMUIOVector *qiov, NBDReply *reply,
                                          void **payload)
 {
-    int ret;
+    int ret, request_ret;
     NBDReply local_reply;
     NBDStructuredReplyChunk *chunk;
     Error *local_err = NULL;
     if (s->quit) {
         error_setg(&local_err, "Connection closed");
-        nbd_iter_error(iter, true, -EIO, &local_err);
+        nbd_iter_channel_error(iter, -EIO, &local_err);
         goto break_loop;
     }
 
@@ -587,10 +590,12 @@ static bool nbd_reply_chunk_iter_receive(NBDClientSession *s,
     }
 
     ret = nbd_co_receive_one_chunk(s, handle, iter->only_structured,
-                                   qiov, reply, payload, &local_err);
+                                   &request_ret, qiov, reply, payload,
+                                   &local_err);
     if (ret < 0) {
-        /* If it is a fatal error s->quit is set by nbd_co_receive_one_chunk */
-        nbd_iter_error(iter, s->quit, ret, &local_err);
+        nbd_iter_channel_error(iter, ret, &local_err);
+    } else if (request_ret < 0) {
+        nbd_iter_request_error(iter, request_ret);
     }
 
     /* Do not execute the body of NBD_FOREACH_REPLY_CHUNK for simple reply. */
@@ -627,7 +632,7 @@ break_loop:
 }
 
 static int nbd_co_receive_return_code(NBDClientSession *s, uint64_t handle,
-                                      Error **errp)
+                                      int *request_ret, Error **errp)
 {
     NBDReplyChunkIter iter;
 
@@ -636,12 +641,13 @@ static int nbd_co_receive_return_code(NBDClientSession *s, uint64_t handle,
     }
 
     error_propagate(errp, iter.err);
+    *request_ret = iter.request_ret;
     return iter.ret;
 }
 
 static int nbd_co_receive_cmdread_reply(NBDClientSession *s, uint64_t handle,
                                         uint64_t offset, QEMUIOVector *qiov,
-                                        Error **errp)
+                                        int *request_ret, Error **errp)
 {
     NBDReplyChunkIter iter;
     NBDReply reply;
@@ -666,7 +672,7 @@ static int nbd_co_receive_cmdread_reply(NBDClientSession *s, uint64_t handle,
                                                 offset, qiov, &local_err);
             if (ret < 0) {
                 s->quit = true;
-                nbd_iter_error(&iter, true, ret, &local_err);
+                nbd_iter_channel_error(&iter, ret, &local_err);
             }
             break;
         default:
@@ -676,7 +682,7 @@ static int nbd_co_receive_cmdread_reply(NBDClientSession *s, uint64_t handle,
                 error_setg(&local_err,
                            "Unexpected reply type: %d (%s) for CMD_READ",
                            chunk->type, nbd_reply_type_lookup(chunk->type));
-                nbd_iter_error(&iter, true, -EINVAL, &local_err);
+                nbd_iter_channel_error(&iter, -EINVAL, &local_err);
             }
         }
 
@@ -685,12 +691,14 @@ static int nbd_co_receive_cmdread_reply(NBDClientSession *s, uint64_t handle,
     }
 
     error_propagate(errp, iter.err);
+    *request_ret = iter.request_ret;
     return iter.ret;
 }
 
 static int nbd_co_receive_blockstatus_reply(NBDClientSession *s,
                                             uint64_t handle, uint64_t length,
-                                            NBDExtent *extent, Error **errp)
+                                            NBDExtent *extent,
+                                            int *request_ret, Error **errp)
 {
     NBDReplyChunkIter iter;
     NBDReply reply;
@@ -712,7 +720,7 @@ static int nbd_co_receive_blockstatus_reply(NBDClientSession *s,
             if (received) {
                 s->quit = true;
                 error_setg(&local_err, "Several BLOCK_STATUS chunks in reply");
-                nbd_iter_error(&iter, true, -EINVAL, &local_err);
+                nbd_iter_channel_error(&iter, -EINVAL, &local_err);
             }
             received = true;
 
@@ -721,7 +729,7 @@ static int nbd_co_receive_blockstatus_reply(NBDClientSession *s,
                                                 &local_err);
             if (ret < 0) {
                 s->quit = true;
-                nbd_iter_error(&iter, true, ret, &local_err);
+                nbd_iter_channel_error(&iter, ret, &local_err);
             }
             break;
         default:
@@ -731,7 +739,7 @@ static int nbd_co_receive_blockstatus_reply(NBDClientSession *s,
                            "Unexpected reply type: %d (%s) "
                            "for CMD_BLOCK_STATUS",
                            chunk->type, nbd_reply_type_lookup(chunk->type));
-                nbd_iter_error(&iter, true, -EINVAL, &local_err);
+                nbd_iter_channel_error(&iter, -EINVAL, &local_err);
             }
         }
 
@@ -746,14 +754,16 @@ static int nbd_co_receive_blockstatus_reply(NBDClientSession *s,
             iter.ret = -EIO;
         }
     }
+
     error_propagate(errp, iter.err);
+    *request_ret = iter.request_ret;
     return iter.ret;
 }
 
 static int nbd_co_request(BlockDriverState *bs, NBDRequest *request,
                           QEMUIOVector *write_qiov)
 {
-    int ret;
+    int ret, request_ret;
     Error *local_err = NULL;
     NBDClientSession *client = nbd_get_client_session(bs);
 
@@ -769,17 +779,18 @@ static int nbd_co_request(BlockDriverState *bs, NBDRequest *request,
         return ret;
     }
 
-    ret = nbd_co_receive_return_code(client, request->handle, &local_err);
+    ret = nbd_co_receive_return_code(client, request->handle,
+                                     &request_ret, &local_err);
     if (local_err) {
         error_report_err(local_err);
     }
-    return ret;
+    return ret ? ret : request_ret;
 }
 
 int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
                          uint64_t bytes, QEMUIOVector *qiov, int flags)
 {
-    int ret;
+    int ret, request_ret;
     Error *local_err = NULL;
     NBDClientSession *client = nbd_get_client_session(bs);
     NBDRequest request = {
@@ -800,11 +811,11 @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
     }
 
     ret = nbd_co_receive_cmdread_reply(client, request.handle, offset, qiov,
-                                       &local_err);
+                                       &request_ret, &local_err);
     if (local_err) {
         error_report_err(local_err);
     }
-    return ret;
+    return ret ? ret : request_ret;
 }
 
 int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
@@ -898,7 +909,7 @@ int coroutine_fn nbd_client_co_block_status(BlockDriverState *bs,
                                             int64_t *pnum, int64_t *map,
                                             BlockDriverState **file)
 {
-    int64_t ret;
+    int ret, request_ret;
     NBDExtent extent = { 0 };
     NBDClientSession *client = nbd_get_client_session(bs);
     Error *local_err = NULL;
@@ -923,12 +934,12 @@ int coroutine_fn nbd_client_co_block_status(BlockDriverState *bs,
     }
 
     ret = nbd_co_receive_blockstatus_reply(client, request.handle, bytes,
-                                           &extent, &local_err);
+                                           &extent, &request_ret, &local_err);
     if (local_err) {
         error_report_err(local_err);
     }
-    if (ret < 0) {
-        return ret;
+    if (ret < 0 || request_ret < 0) {
+        return ret ? ret : request_ret;
     }
 
     assert(extent.length);
-- 
2.11.1


* [Qemu-devel] [PATCH v3 02/11] block/nbd: move connection code from block/nbd to block/nbd-client
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 01/11] block/nbd-client: split channel errors from export errors Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 03/11] block/nbd-client: split connection from initialization Vladimir Sementsov-Ogievskiy
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

Keep all connection code in one file, so that reconnect can be
implemented there in further patches.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  2 +-
 block/nbd-client.c | 37 +++++++++++++++++++++++++++++++++++--
 block/nbd.c        | 41 ++---------------------------------------
 3 files changed, 38 insertions(+), 42 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 0ece76e5af..a93f2114b9 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -41,7 +41,7 @@ typedef struct NBDClientSession {
 NBDClientSession *nbd_get_client_session(BlockDriverState *bs);
 
 int nbd_client_init(BlockDriverState *bs,
-                    QIOChannelSocket *sock,
+                    SocketAddress *saddr,
                     const char *export_name,
                     QCryptoTLSCreds *tlscreds,
                     const char *hostname,
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 9b9a82fef1..6ff505c4b8 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -976,8 +976,31 @@ void nbd_client_close(BlockDriverState *bs)
     nbd_teardown_connection(bs);
 }
 
+static QIOChannelSocket *nbd_establish_connection(SocketAddress *saddr,
+                                                  Error **errp)
+{
+    QIOChannelSocket *sioc;
+    Error *local_err = NULL;
+
+    sioc = qio_channel_socket_new();
+    qio_channel_set_name(QIO_CHANNEL(sioc), "nbd-client");
+
+    qio_channel_socket_connect_sync(sioc,
+                                    saddr,
+                                    &local_err);
+    if (local_err) {
+        object_unref(OBJECT(sioc));
+        error_propagate(errp, local_err);
+        return NULL;
+    }
+
+    qio_channel_set_delay(QIO_CHANNEL(sioc), false);
+
+    return sioc;
+}
+
 int nbd_client_init(BlockDriverState *bs,
-                    QIOChannelSocket *sioc,
+                    SocketAddress *saddr,
                     const char *export,
                     QCryptoTLSCreds *tlscreds,
                     const char *hostname,
@@ -986,6 +1009,15 @@ int nbd_client_init(BlockDriverState *bs,
     NBDClientSession *client = nbd_get_client_session(bs);
     int ret;
 
+    /* establish TCP connection, return error if it fails
+     * TODO: Configurable retry-until-timeout behaviour.
+     */
+    QIOChannelSocket *sioc = nbd_establish_connection(saddr, errp);
+
+    if (!sioc) {
+        return -ECONNREFUSED;
+    }
+
     /* NBD handshake */
     logout("session init %s\n", export);
     qio_channel_set_blocking(QIO_CHANNEL(sioc), true, NULL);
@@ -998,12 +1030,14 @@ int nbd_client_init(BlockDriverState *bs,
                                 &client->ioc, &client->info, errp);
     if (ret < 0) {
         logout("Failed to negotiate with the NBD server\n");
+        object_unref(OBJECT(sioc));
         return ret;
     }
     if (client->info.flags & NBD_FLAG_READ_ONLY &&
         !bdrv_is_read_only(bs)) {
         error_setg(errp,
                    "request for write access conflicts with read-only export");
+        object_unref(OBJECT(sioc));
         return -EACCES;
     }
     if (client->info.flags & NBD_FLAG_SEND_FUA) {
@@ -1017,7 +1051,6 @@ int nbd_client_init(BlockDriverState *bs,
     qemu_co_mutex_init(&client->send_mutex);
     qemu_co_queue_init(&client->free_sema);
     client->sioc = sioc;
-    object_ref(OBJECT(client->sioc));
 
     if (!client->ioc) {
         client->ioc = QIO_CHANNEL(sioc);
diff --git a/block/nbd.c b/block/nbd.c
index ff8333e3c1..a851b8cd68 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -305,30 +305,6 @@ NBDClientSession *nbd_get_client_session(BlockDriverState *bs)
     return &s->client;
 }
 
-static QIOChannelSocket *nbd_establish_connection(SocketAddress *saddr,
-                                                  Error **errp)
-{
-    QIOChannelSocket *sioc;
-    Error *local_err = NULL;
-
-    sioc = qio_channel_socket_new();
-    qio_channel_set_name(QIO_CHANNEL(sioc), "nbd-client");
-
-    qio_channel_socket_connect_sync(sioc,
-                                    saddr,
-                                    &local_err);
-    if (local_err) {
-        object_unref(OBJECT(sioc));
-        error_propagate(errp, local_err);
-        return NULL;
-    }
-
-    qio_channel_set_delay(QIO_CHANNEL(sioc), false);
-
-    return sioc;
-}
-
-
 static QCryptoTLSCreds *nbd_get_tls_creds(const char *id, Error **errp)
 {
     Object *obj;
@@ -398,7 +374,6 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
     BDRVNBDState *s = bs->opaque;
     QemuOpts *opts = NULL;
     Error *local_err = NULL;
-    QIOChannelSocket *sioc = NULL;
     QCryptoTLSCreds *tlscreds = NULL;
     const char *hostname = NULL;
     int ret = -EINVAL;
@@ -438,22 +413,10 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
         hostname = s->saddr->u.inet.host;
     }
 
-    /* establish TCP connection, return error if it fails
-     * TODO: Configurable retry-until-timeout behaviour.
-     */
-    sioc = nbd_establish_connection(s->saddr, errp);
-    if (!sioc) {
-        ret = -ECONNREFUSED;
-        goto error;
-    }
-
     /* NBD handshake */
-    ret = nbd_client_init(bs, sioc, s->export,
-                          tlscreds, hostname, errp);
+    ret = nbd_client_init(bs, s->saddr, s->export, tlscreds, hostname, errp);
+
  error:
-    if (sioc) {
-        object_unref(OBJECT(sioc));
-    }
     if (tlscreds) {
         object_unref(OBJECT(tlscreds));
     }
-- 
2.11.1


* [Qemu-devel] [PATCH v3 03/11] block/nbd-client: split connection from initialization
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 01/11] block/nbd-client: split channel errors from export errors Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 02/11] block/nbd: move connection code from block/nbd to block/nbd-client Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 04/11] block/nbd-client: fix nbd_reply_chunk_iter_receive Vladimir Sementsov-Ogievskiy
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

Split the connection code from initialization, so that it can be
reused for reconnect.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 6ff505c4b8..14b42f31df 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -999,12 +999,12 @@ static QIOChannelSocket *nbd_establish_connection(SocketAddress *saddr,
     return sioc;
 }
 
-int nbd_client_init(BlockDriverState *bs,
-                    SocketAddress *saddr,
-                    const char *export,
-                    QCryptoTLSCreds *tlscreds,
-                    const char *hostname,
-                    Error **errp)
+static int nbd_client_connect(BlockDriverState *bs,
+                              SocketAddress *saddr,
+                              const char *export,
+                              QCryptoTLSCreds *tlscreds,
+                              const char *hostname,
+                              Error **errp)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
     int ret;
@@ -1048,8 +1048,6 @@ int nbd_client_init(BlockDriverState *bs,
         bs->supported_zero_flags |= BDRV_REQ_MAY_UNMAP;
     }
 
-    qemu_co_mutex_init(&client->send_mutex);
-    qemu_co_queue_init(&client->free_sema);
     client->sioc = sioc;
 
     if (!client->ioc) {
@@ -1066,3 +1064,18 @@ int nbd_client_init(BlockDriverState *bs,
     logout("Established connection with NBD server\n");
     return 0;
 }
+
+int nbd_client_init(BlockDriverState *bs,
+                    SocketAddress *saddr,
+                    const char *export,
+                    QCryptoTLSCreds *tlscreds,
+                    const char *hostname,
+                    Error **errp)
+{
+    NBDClientSession *client = nbd_get_client_session(bs);
+
+    qemu_co_mutex_init(&client->send_mutex);
+    qemu_co_queue_init(&client->free_sema);
+
+    return nbd_client_connect(bs, saddr, export, tlscreds, hostname, errp);
+}
-- 
2.11.1


* [Qemu-devel] [PATCH v3 04/11] block/nbd-client: fix nbd_reply_chunk_iter_receive
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 03/11] block/nbd-client: split connection from initialization Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 05/11] block/nbd-client: don't check ioc Vladimir Sementsov-Ogievskiy
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

Use the exported reply, not the s->reply variable that is going to be
reused (should not really matter in practice).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 14b42f31df..dd712c59b3 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -599,7 +599,7 @@ static bool nbd_reply_chunk_iter_receive(NBDClientSession *s,
     }
 
     /* Do not execute the body of NBD_FOREACH_REPLY_CHUNK for simple reply. */
-    if (nbd_reply_is_simple(&s->reply) || s->quit) {
+    if (nbd_reply_is_simple(reply) || s->quit) {
         goto break_loop;
     }
 
-- 
2.11.1


* [Qemu-devel] [PATCH v3 05/11] block/nbd-client: don't check ioc
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 04/11] block/nbd-client: fix nbd_reply_chunk_iter_receive Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 06/11] block/nbd-client: move from quit to state Vladimir Sementsov-Ogievskiy
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

We have several paranoid checks for ioc != NULL. But ioc can become
NULL only on close, which must not happen while requests are being
handled. Also, we check ioc only sometimes, not after each yield,
which is inconsistent. Let's drop these checks; for safety, leave
asserts in their place.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index dd712c59b3..1589ceb475 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -51,9 +51,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
 
-    if (!client->ioc) { /* Already closed */
-        return;
-    }
+    assert(client->ioc);
 
     /* finish any pending coroutines */
     qio_channel_shutdown(client->ioc,
@@ -150,10 +148,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
         rc = -EIO;
         goto err;
     }
-    if (!s->ioc) {
-        rc = -EPIPE;
-        goto err;
-    }
+    assert(s->ioc);
 
     if (qiov) {
         qio_channel_set_cork(s->ioc, true);
@@ -426,10 +421,11 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
-    if (!s->ioc || s->quit) {
+    if (s->quit) {
         error_setg(errp, "Connection closed");
         return -EIO;
     }
+    assert(s->ioc);
 
     assert(s->reply.handle == handle);
 
@@ -967,9 +963,7 @@ void nbd_client_close(BlockDriverState *bs)
     NBDClientSession *client = nbd_get_client_session(bs);
     NBDRequest request = { .type = NBD_CMD_DISC };
 
-    if (client->ioc == NULL) {
-        return;
-    }
+    assert(client->ioc);
 
     nbd_send_request(client->ioc, &request);
 
-- 
2.11.1


* [Qemu-devel] [PATCH v3 06/11] block/nbd-client: move from quit to state
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (4 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 05/11] block/nbd-client: don't check ioc Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 07/11] block/nbd-client: rename read_reply_co to connection_co Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

To implement reconnect we need several states for the client:
CONNECTED, QUIT and several CONNECTING states. The CONNECTING states
will be realized in the following patches. This patch implements
CONNECTED and QUIT.

QUIT means that we should close the connection and fail all current
and further requests (like the old quit = true).

CONNECTED means that the connection is OK and we can send requests
(like the old quit = false).

For the receiving loop we compare the current state with QUIT, because
reconnect will live in the same loop, so it should keep looping until
the very end.

Conversely, for requests we compare the current state with CONNECTED,
as we don't want to send requests in the CONNECTING states (which are
unreachable now, but will become reachable after the following
commits).
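
Condensed from the hunks below, the two comparisons look like this:

    /* receive loop: keep looping until the client really quits */
    while (s->state != NBD_CLIENT_QUIT) {
        ...
    }

    /* request path: only send while the connection is known to be good */
    if (s->state != NBD_CLIENT_CONNECTED) {
        rc = -EIO;
        goto err;
    }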

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h | 10 +++++++++-
 block/nbd-client.c | 55 ++++++++++++++++++++++++++++++++----------------------
 2 files changed, 42 insertions(+), 23 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index a93f2114b9..014015cd7e 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -23,6 +23,14 @@ typedef struct {
     bool receiving;         /* waiting for read_reply_co? */
 } NBDClientRequest;
 
+typedef enum NBDClientState {
+    NBD_CLIENT_CONNECTING_INIT,
+    NBD_CLIENT_CONNECTING_WAIT,
+    NBD_CLIENT_CONNECTING_NOWAIT,
+    NBD_CLIENT_CONNECTED,
+    NBD_CLIENT_QUIT
+} NBDClientState;
+
 typedef struct NBDClientSession {
     QIOChannelSocket *sioc; /* The master data channel */
     QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
@@ -32,10 +40,10 @@ typedef struct NBDClientSession {
     CoQueue free_sema;
     Coroutine *read_reply_co;
     int in_flight;
+    NBDClientState state;
 
     NBDClientRequest requests[MAX_NBD_REQUESTS];
     NBDReply reply;
-    bool quit;
 } NBDClientSession;
 
 NBDClientSession *nbd_get_client_session(BlockDriverState *bs);
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 1589ceb475..7a644e482f 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -34,6 +34,12 @@
 #define HANDLE_TO_INDEX(bs, handle) ((handle) ^ (uint64_t)(intptr_t)(bs))
 #define INDEX_TO_HANDLE(bs, index)  ((index)  ^ (uint64_t)(intptr_t)(bs))
 
+/* @ret would be used for reconnect in future */
+static void nbd_channel_error(NBDClientSession *s, int ret)
+{
+    s->state = NBD_CLIENT_QUIT;
+}
+
 static void nbd_recv_coroutines_wake_all(NBDClientSession *s)
 {
     int i;
@@ -73,14 +79,15 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     int ret = 0;
     Error *local_err = NULL;
 
-    while (!s->quit) {
+    while (s->state != NBD_CLIENT_QUIT) {
         assert(s->reply.handle == 0);
         ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
         if (local_err) {
             error_report_err(local_err);
         }
         if (ret <= 0) {
-            break;
+            nbd_channel_error(s, ret ? ret : -EIO);
+            continue;
         }
 
         /* There's no need for a mutex on the receive side, because the
@@ -93,7 +100,8 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             !s->requests[i].receiving ||
             (nbd_reply_is_structured(&s->reply) && !s->info.structured_reply))
         {
-            break;
+            nbd_channel_error(s, -EINVAL);
+            continue;
         }
 
         /* We're woken up again by the request itself.  Note that there
@@ -111,7 +119,6 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
         qemu_coroutine_yield();
     }
 
-    s->quit = true;
     nbd_recv_coroutines_wake_all(s);
     s->read_reply_co = NULL;
 }
@@ -121,12 +128,18 @@ static int nbd_co_send_request(BlockDriverState *bs,
                                QEMUIOVector *qiov)
 {
     NBDClientSession *s = nbd_get_client_session(bs);
-    int rc, i;
+    int rc, i = -1;
 
     qemu_co_mutex_lock(&s->send_mutex);
     while (s->in_flight == MAX_NBD_REQUESTS) {
         qemu_co_queue_wait(&s->free_sema, &s->send_mutex);
     }
+
+    if (s->state != NBD_CLIENT_CONNECTED) {
+        rc = -EIO;
+        goto err;
+    }
+
     s->in_flight++;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
@@ -144,16 +157,12 @@ static int nbd_co_send_request(BlockDriverState *bs,
 
     request->handle = INDEX_TO_HANDLE(s, i);
 
-    if (s->quit) {
-        rc = -EIO;
-        goto err;
-    }
     assert(s->ioc);
 
     if (qiov) {
         qio_channel_set_cork(s->ioc, true);
         rc = nbd_send_request(s->ioc, request);
-        if (rc >= 0 && !s->quit) {
+        if (rc >= 0 && s->state == NBD_CLIENT_CONNECTED) {
             if (qio_channel_writev_all(s->ioc, qiov->iov, qiov->niov,
                                        NULL) < 0) {
                 rc = -EIO;
@@ -168,9 +177,11 @@ static int nbd_co_send_request(BlockDriverState *bs,
 
 err:
     if (rc < 0) {
-        s->quit = true;
-        s->requests[i].coroutine = NULL;
-        s->in_flight--;
+        nbd_channel_error(s, rc);
+        if (i != -1) {
+            s->requests[i].coroutine = NULL;
+            s->in_flight--;
+        }
         qemu_co_queue_next(&s->free_sema);
     }
     qemu_co_mutex_unlock(&s->send_mutex);
@@ -421,7 +432,7 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
-    if (s->quit) {
+    if (s->state != NBD_CLIENT_CONNECTED) {
         error_setg(errp, "Connection closed");
         return -EIO;
     }
@@ -504,7 +515,7 @@ static coroutine_fn int nbd_co_receive_one_chunk(
                                           request_ret, qiov, payload, errp);
 
     if (ret < 0) {
-        s->quit = true;
+        nbd_channel_error(s, ret);
     } else {
         /* For assert at loop start in nbd_read_reply_entry */
         if (reply) {
@@ -570,7 +581,7 @@ static bool nbd_reply_chunk_iter_receive(NBDClientSession *s,
     NBDReply local_reply;
     NBDStructuredReplyChunk *chunk;
     Error *local_err = NULL;
-    if (s->quit) {
+    if (s->state != NBD_CLIENT_CONNECTED) {
         error_setg(&local_err, "Connection closed");
         nbd_iter_channel_error(iter, -EIO, &local_err);
         goto break_loop;
@@ -595,7 +606,7 @@ static bool nbd_reply_chunk_iter_receive(NBDClientSession *s,
     }
 
     /* Do not execute the body of NBD_FOREACH_REPLY_CHUNK for simple reply. */
-    if (nbd_reply_is_simple(reply) || s->quit) {
+    if (nbd_reply_is_simple(reply) || s->state != NBD_CLIENT_CONNECTED) {
         goto break_loop;
     }
 
@@ -667,14 +678,14 @@ static int nbd_co_receive_cmdread_reply(NBDClientSession *s, uint64_t handle,
             ret = nbd_parse_offset_hole_payload(&reply.structured, payload,
                                                 offset, qiov, &local_err);
             if (ret < 0) {
-                s->quit = true;
+                nbd_channel_error(s, ret);
                 nbd_iter_channel_error(&iter, ret, &local_err);
             }
             break;
         default:
             if (!nbd_reply_type_is_error(chunk->type)) {
                 /* not allowed reply type */
-                s->quit = true;
+                nbd_channel_error(s, -EINVAL);
                 error_setg(&local_err,
                            "Unexpected reply type: %d (%s) for CMD_READ",
                            chunk->type, nbd_reply_type_lookup(chunk->type));
@@ -714,7 +725,7 @@ static int nbd_co_receive_blockstatus_reply(NBDClientSession *s,
         switch (chunk->type) {
         case NBD_REPLY_TYPE_BLOCK_STATUS:
             if (received) {
-                s->quit = true;
+                nbd_channel_error(s, -EINVAL);
                 error_setg(&local_err, "Several BLOCK_STATUS chunks in reply");
                 nbd_iter_channel_error(&iter, -EINVAL, &local_err);
             }
@@ -724,13 +735,13 @@ static int nbd_co_receive_blockstatus_reply(NBDClientSession *s,
                                                 payload, length, extent,
                                                 &local_err);
             if (ret < 0) {
-                s->quit = true;
+                nbd_channel_error(s, ret);
                 nbd_iter_channel_error(&iter, ret, &local_err);
             }
             break;
         default:
             if (!nbd_reply_type_is_error(chunk->type)) {
-                s->quit = true;
+                nbd_channel_error(s, -EINVAL);
                 error_setg(&local_err,
                            "Unexpected reply type: %d (%s) "
                            "for CMD_BLOCK_STATUS",
-- 
2.11.1


* [Qemu-devel] [PATCH v3 07/11] block/nbd-client: rename read_reply_co to connection_co
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (5 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 06/11] block/nbd-client: move from quit to state Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 08/11] block/nbd-client: move connecting " Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

This coroutine will also serve NBD reconnects, so rename it to
something more generic.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  4 ++--
 block/nbd-client.c | 24 ++++++++++++------------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 014015cd7e..7cb31e72e6 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -20,7 +20,7 @@
 typedef struct {
     Coroutine *coroutine;
     uint64_t offset;        /* original offset of the request */
-    bool receiving;         /* waiting for read_reply_co? */
+    bool receiving;         /* waiting for connection_co? */
 } NBDClientRequest;
 
 typedef enum NBDClientState {
@@ -38,7 +38,7 @@ typedef struct NBDClientSession {
 
     CoMutex send_mutex;
     CoQueue free_sema;
-    Coroutine *read_reply_co;
+    Coroutine *connection_co;
     int in_flight;
     NBDClientState state;
 
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 7a644e482f..32c6f531de 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -63,7 +63,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     qio_channel_shutdown(client->ioc,
                          QIO_CHANNEL_SHUTDOWN_BOTH,
                          NULL);
-    BDRV_POLL_WHILE(bs, client->read_reply_co);
+    BDRV_POLL_WHILE(bs, client->connection_co);
 
     nbd_client_detach_aio_context(bs);
     object_unref(OBJECT(client->sioc));
@@ -72,7 +72,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     client->ioc = NULL;
 }
 
-static coroutine_fn void nbd_read_reply_entry(void *opaque)
+static coroutine_fn void nbd_connection_entry(void *opaque)
 {
     NBDClientSession *s = opaque;
     uint64_t i;
@@ -105,14 +105,14 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
         }
 
         /* We're woken up again by the request itself.  Note that there
-         * is no race between yielding and reentering read_reply_co.  This
+         * is no race between yielding and reentering connection_co.  This
          * is because:
          *
          * - if the request runs on the same AioContext, it is only
          *   entered after we yield
          *
          * - if the request runs on a different AioContext, reentering
-         *   read_reply_co happens through a bottom half, which can only
+         *   connection_co happens through a bottom half, which can only
          *   run after we yield.
          */
         aio_co_wake(s->requests[i].coroutine);
@@ -120,7 +120,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     }
 
     nbd_recv_coroutines_wake_all(s);
-    s->read_reply_co = NULL;
+    s->connection_co = NULL;
 }
 
 static int nbd_co_send_request(BlockDriverState *bs,
@@ -428,7 +428,7 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     }
     *request_ret = 0;
 
-    /* Wait until we're woken up by nbd_read_reply_entry.  */
+    /* Wait until we're woken up by nbd_connection_entry.  */
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
@@ -503,7 +503,7 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
 }
 
 /* nbd_co_receive_one_chunk
- * Read reply, wake up read_reply_co and set s->quit if needed.
+ * Read reply, wake up connection_co and set s->quit if needed.
  * Return value is a fatal error code or normal nbd reply error code
  */
 static coroutine_fn int nbd_co_receive_one_chunk(
@@ -517,15 +517,15 @@ static coroutine_fn int nbd_co_receive_one_chunk(
     if (ret < 0) {
         nbd_channel_error(s, ret);
     } else {
-        /* For assert at loop start in nbd_read_reply_entry */
+        /* For assert at loop start in nbd_connection_entry */
         if (reply) {
             *reply = s->reply;
         }
         s->reply.handle = 0;
     }
 
-    if (s->read_reply_co) {
-        aio_co_wake(s->read_reply_co);
+    if (s->connection_co) {
+        aio_co_wake(s->connection_co);
     }
 
     return ret;
@@ -966,7 +966,7 @@ void nbd_client_attach_aio_context(BlockDriverState *bs,
 {
     NBDClientSession *client = nbd_get_client_session(bs);
     qio_channel_attach_aio_context(QIO_CHANNEL(client->ioc), new_context);
-    aio_co_schedule(new_context, client->read_reply_co);
+    aio_co_schedule(new_context, client->connection_co);
 }
 
 void nbd_client_close(BlockDriverState *bs)
@@ -1063,7 +1063,7 @@ static int nbd_client_connect(BlockDriverState *bs,
     /* Now that we're connected, set the socket to be non-blocking and
      * kick the reply mechanism.  */
     qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
-    client->read_reply_co = qemu_coroutine_create(nbd_read_reply_entry, client);
+    client->connection_co = qemu_coroutine_create(nbd_connection_entry, client);
     nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
 
     logout("Established connection with NBD server\n");
-- 
2.11.1


* [Qemu-devel] [PATCH v3 08/11] block/nbd-client: move connecting to connection_co
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (6 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 07/11] block/nbd-client: rename read_reply_co to connection_co Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 09/11] block/nbd: add cmdline and qapi parameters for nbd reconnect Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

As a first step towards NBD reconnect, move the connection itself into
the connection_co coroutine.

The key point of this patch is the nbd_client_attach_aio_context()
change: we schedule connection_co only if it is waiting for a read
from the channel. We should not schedule it if it yielded for some
other reason, or if it is currently executing (we call
nbd_client_attach_aio_context from the connection code).
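
In code, this boils down to guarding the reschedule with the new
receiving flag (a condensed sketch of the hunk below):

    void nbd_client_attach_aio_context(BlockDriverState *bs,
                                       AioContext *new_context)
    {
        NBDClientSession *client = nbd_get_client_session(bs);

        qio_channel_attach_aio_context(QIO_CHANNEL(client->ioc), new_context);

        /* Re-enter connection_co only if it is parked in nbd_receive_reply();
         * other yields, and the currently-running case, must not be woken. */
        if (client->receiving) {
            aio_co_schedule(new_context, client->connection_co);
        }
    }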

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  3 +++
 block/nbd-client.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 7cb31e72e6..f6c8052573 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -41,6 +41,9 @@ typedef struct NBDClientSession {
     Coroutine *connection_co;
     int in_flight;
     NBDClientState state;
+    bool receiving;
+    int connect_status;
+    Error *connect_err;
 
     NBDClientRequest requests[MAX_NBD_REQUESTS];
     NBDReply reply;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 32c6f531de..44ac4ebc31 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -34,6 +34,13 @@
 #define HANDLE_TO_INDEX(bs, handle) ((handle) ^ (uint64_t)(intptr_t)(bs))
 #define INDEX_TO_HANDLE(bs, index)  ((index)  ^ (uint64_t)(intptr_t)(bs))
 
+static int nbd_client_connect(BlockDriverState *bs,
+                              SocketAddress *saddr,
+                              const char *export,
+                              QCryptoTLSCreds *tlscreds,
+                              const char *hostname,
+                              Error **errp);
+
 /* @ret would be used for reconnect in future */
 static void nbd_channel_error(NBDClientSession *s, int ret)
 {
@@ -63,6 +70,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     qio_channel_shutdown(client->ioc,
                          QIO_CHANNEL_SHUTDOWN_BOTH,
                          NULL);
+    client->state = NBD_CLIENT_QUIT;
     BDRV_POLL_WHILE(bs, client->connection_co);
 
     nbd_client_detach_aio_context(bs);
@@ -72,16 +80,38 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     client->ioc = NULL;
 }
 
+typedef struct NBDConnection {
+    BlockDriverState *bs;
+    SocketAddress *saddr;
+    const char *export;
+    QCryptoTLSCreds *tlscreds;
+    const char *hostname;
+} NBDConnection;
+
 static coroutine_fn void nbd_connection_entry(void *opaque)
 {
-    NBDClientSession *s = opaque;
+    NBDConnection *con = opaque;
+    NBDClientSession *s = nbd_get_client_session(con->bs);
     uint64_t i;
     int ret = 0;
     Error *local_err = NULL;
 
+    s->connect_status = nbd_client_connect(con->bs, con->saddr,
+                                           con->export, con->tlscreds,
+                                           con->hostname, &s->connect_err);
+    if (s->connect_status < 0) {
+        nbd_channel_error(s, s->connect_status);
+        return;
+    }
+
+    /* successfully connected */
+    s->state = NBD_CLIENT_CONNECTED;
+
     while (s->state != NBD_CLIENT_QUIT) {
         assert(s->reply.handle == 0);
+        s->receiving = true;
         ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
+        s->receiving = false;
         if (local_err) {
             error_report_err(local_err);
         }
@@ -966,7 +996,9 @@ void nbd_client_attach_aio_context(BlockDriverState *bs,
 {
     NBDClientSession *client = nbd_get_client_session(bs);
     qio_channel_attach_aio_context(QIO_CHANNEL(client->ioc), new_context);
-    aio_co_schedule(new_context, client->connection_co);
+    if (client->receiving) {
+        aio_co_schedule(new_context, client->connection_co);
+    }
 }
 
 void nbd_client_close(BlockDriverState *bs)
@@ -1063,7 +1095,6 @@ static int nbd_client_connect(BlockDriverState *bs,
     /* Now that we're connected, set the socket to be non-blocking and
      * kick the reply mechanism.  */
     qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
-    client->connection_co = qemu_coroutine_create(nbd_connection_entry, client);
     nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
 
     logout("Established connection with NBD server\n");
@@ -1078,9 +1109,28 @@ int nbd_client_init(BlockDriverState *bs,
                     Error **errp)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
+    NBDConnection *con = g_new(NBDConnection, 1);
+
+    con->bs = bs;
+    con->saddr = saddr;
+    con->export = export;
+    con->tlscreds = tlscreds;
+    con->hostname = hostname;
 
     qemu_co_mutex_init(&client->send_mutex);
     qemu_co_queue_init(&client->free_sema);
+    client->state = NBD_CLIENT_CONNECTING_INIT;
 
-    return nbd_client_connect(bs, saddr, export, tlscreds, hostname, errp);
+    client->connection_co = qemu_coroutine_create(nbd_connection_entry, con);
+    aio_co_schedule(bdrv_get_aio_context(bs), client->connection_co);
+    BDRV_POLL_WHILE(bs, client->state == NBD_CLIENT_CONNECTING_INIT);
+
+    if (client->state != NBD_CLIENT_CONNECTED) {
+        assert(client->connect_status < 0 && client->connect_err);
+        error_propagate(errp, client->connect_err);
+        client->connect_err = NULL;
+        return client->connect_status;
+    }
+
+    return 0;
 }
-- 
2.11.1


* [Qemu-devel] [PATCH v3 09/11] block/nbd: add cmdline and qapi parameters for nbd reconnect
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (7 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 08/11] block/nbd-client: move connecting " Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 10/11] block/nbd-client: " Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

Add two parameters:

reconnect-attempts, which defines the maximum number of reconnect
  attempts, after which:
  - open will fail
  - block operations will fail

Note: on open we actually get reconnect-attempts + 1 connection
attempts in total, since the first one is not a REconnect.

reconnect-timeout, the timeout in nanoseconds between reconnect
attempts.

These options are implemented in the following patch; an example QMP
invocation is sketched below.
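
A hypothetical QMP counterpart of the command-line example from the
cover letter, using the QAPI fields as extended below (values are only
illustrative):

    { "execute": "blockdev-add",
      "arguments": { "driver": "nbd", "node-name": "nbd0",
                     "server": { "type": "inet",
                                 "host": "127.0.0.1", "port": "10809" },
                     "export": "disk0",
                     "reconnect-attempts": 3,
                     "reconnect-timeout": 2000000000 } }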

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 qapi/block-core.json | 12 +++++++++++-
 block/nbd-client.h   |  2 ++
 block/nbd-client.c   | 13 +++++++++++++
 block/nbd.c          | 22 +++++++++++++++++++++-
 4 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index 4b1de474a9..4abba9d38e 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -3356,12 +3356,22 @@
 #
 # @tls-creds:   TLS credentials ID
 #
+# @reconnect-attempts: number of connection attempts on disconnect.
+#                      Must be >= 0. Default is 0. On nbd disk open, there would
+#                      be maximum of @reconnect-attempts + 1 total tries to
+#                      connect
+#
+# @reconnect-timeout: timeout between reconnect attempts in nanoseconds. Default
+#                     is 1000000000 (one second)
+#
 # Since: 2.9
 ##
 { 'struct': 'BlockdevOptionsNbd',
   'data': { 'server': 'SocketAddress',
             '*export': 'str',
-            '*tls-creds': 'str' } }
+            '*tls-creds': 'str',
+            '*reconnect-attempts': 'uint64',
+            '*reconnect-timeout': 'uint64' } }
 
 ##
 # @BlockdevOptionsRaw:
diff --git a/block/nbd-client.h b/block/nbd-client.h
index f6c8052573..2561e1ea42 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -56,6 +56,8 @@ int nbd_client_init(BlockDriverState *bs,
                     const char *export_name,
                     QCryptoTLSCreds *tlscreds,
                     const char *hostname,
+                    uint64_t reconnect_attempts,
+                    uint64_t reconnect_timeout,
                     Error **errp);
 void nbd_client_close(BlockDriverState *bs);
 
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 44ac4ebc31..f22ed7f404 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -86,6 +86,8 @@ typedef struct NBDConnection {
     const char *export;
     QCryptoTLSCreds *tlscreds;
     const char *hostname;
+    uint64_t reconnect_attempts;
+    uint64_t reconnect_timeout;
 } NBDConnection;
 
 static coroutine_fn void nbd_connection_entry(void *opaque)
@@ -96,6 +98,13 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
     int ret = 0;
     Error *local_err = NULL;
 
+    if (con->reconnect_attempts != 0) {
+        error_setg(&s->connect_err, "Reconnect is not supported yet");
+        s->connect_status = -EINVAL;
+        nbd_channel_error(s, s->connect_status);
+        return;
+    }
+
     s->connect_status = nbd_client_connect(con->bs, con->saddr,
                                            con->export, con->tlscreds,
                                            con->hostname, &s->connect_err);
@@ -1106,6 +1115,8 @@ int nbd_client_init(BlockDriverState *bs,
                     const char *export,
                     QCryptoTLSCreds *tlscreds,
                     const char *hostname,
+                    uint64_t reconnect_attempts,
+                    uint64_t reconnect_timeout,
                     Error **errp)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
@@ -1116,6 +1127,8 @@ int nbd_client_init(BlockDriverState *bs,
     con->export = export;
     con->tlscreds = tlscreds;
     con->hostname = hostname;
+    con->reconnect_attempts = reconnect_attempts;
+    con->reconnect_timeout = reconnect_timeout;
 
     qemu_co_mutex_init(&client->send_mutex);
     qemu_co_queue_init(&client->free_sema);
diff --git a/block/nbd.c b/block/nbd.c
index a851b8cd68..8c81d4a151 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -364,6 +364,20 @@ static QemuOptsList nbd_runtime_opts = {
             .type = QEMU_OPT_STRING,
             .help = "ID of the TLS credentials to use",
         },
+        {
+            .name = "reconnect-attempts",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Number of connection attempts on disconnect. "
+                    "Must be >= 0. Default is 0. On nbd disk open, there would "
+                    "be maximum of @reconnect-attempts + 1 total tries to "
+                    "connect",
+        },
+        {
+            .name = "reconnect-timeout",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Timeout between reconnect attempts in nanoseconds. "
+                    "Default is 1000000000 (one second)",
+        },
         { /* end of list */ }
     },
 };
@@ -376,6 +390,8 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
     Error *local_err = NULL;
     QCryptoTLSCreds *tlscreds = NULL;
     const char *hostname = NULL;
+    uint64_t reconnect_attempts;
+    uint64_t reconnect_timeout;
     int ret = -EINVAL;
 
     opts = qemu_opts_create(&nbd_runtime_opts, NULL, 0, &error_abort);
@@ -413,8 +429,12 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
         hostname = s->saddr->u.inet.host;
     }
 
+    reconnect_attempts = qemu_opt_get_number(opts, "reconnect-attempts", 0);
+    reconnect_timeout = qemu_opt_get_number(opts, "reconnect-timeout",
+                                            1000000000L);
     /* NBD handshake */
-    ret = nbd_client_init(bs, s->saddr, s->export, tlscreds, hostname, errp);
+    ret = nbd_client_init(bs, s->saddr, s->export, tlscreds, hostname,
+                          reconnect_attempts, reconnect_timeout, errp);
 
  error:
     if (tlscreds) {
-- 
2.11.1


* [Qemu-devel] [PATCH v3 10/11] block/nbd-client: nbd reconnect
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (8 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 09/11] block/nbd: add cmdline and qapi parameters for nbd reconnect Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-06-12 12:47   ` Vladimir Sementsov-Ogievskiy
  2018-06-12 14:51   ` Vladimir Sementsov-Ogievskiy
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 11/11] iotests: test " Vladimir Sementsov-Ogievskiy
  2018-07-03 13:46 ` [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
  11 siblings, 2 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

Implement reconnect. To achieve this:

1. Move from quit bool variable to state. 4 states are introduced:
   connecting-wait: means, that reconnecting is in progress, and there
     were small number of reconnect attempts, so all requests are
     waiting for the connection.
   connecting-nowait: reconnecting is in progress, there were a lot of
     attempts of reconnect, all requests will return errors.
   connected: normal state
   quit: exiting after fatal error or on close

Possible transitions are:

   * -> quit
   connecting-* -> connected
   connecting-wait -> connecting-nowait
   connected -> connecting-wait

2. Implement reconnect in connection_co. In connecting-* mode,
    connection_co tries to reconnect every NBD_RECONNECT_NS.
    Making this parameter configurable (as well as NBD_RECONNECT_ATTEMPTS,
    which bounds the transition from connecting-wait to
    connecting-nowait) may be done as a follow-up patch.

3. Retry nbd queries on a channel error if we are in the connecting-wait
    state.

4. In init, wait for the connection until the transition to
    connecting-nowait. So NBD_RECONNECT_ATTEMPTS bounds failures of the
    initial connection too.
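
For readability, here is a compact sketch of the resulting state machine.
The constant names match those used in the diff below; the enum spelling
itself is illustrative, since the real definition comes from the earlier
"block/nbd-client: move from quit to state" patch. The code below also
uses a connecting-init state, the variant of connecting-wait covering the
initial connection from point 4:

    /* Illustrative only; see block/nbd-client.h in this series for the
     * real definition. */
    typedef enum NBDClientState {
        NBD_CLIENT_CONNECTING_WAIT,   /* reconnecting; requests wait       */
        NBD_CLIENT_CONNECTING_NOWAIT, /* reconnecting; requests return EIO */
        NBD_CLIENT_CONNECTING_INIT,   /* initial connection in progress    */
        NBD_CLIENT_CONNECTED,         /* normal operation                  */
        NBD_CLIENT_QUIT,              /* fatal error or close              */
    } NBDClientState;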

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |   2 +
 block/nbd-client.c | 170 ++++++++++++++++++++++++++++++++++++++---------------
 2 files changed, 123 insertions(+), 49 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 2561e1ea42..1249f2eb52 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -44,6 +44,8 @@ typedef struct NBDClientSession {
     bool receiving;
     int connect_status;
     Error *connect_err;
+    int connect_attempts;
+    bool wait_in_flight;
 
     NBDClientRequest requests[MAX_NBD_REQUESTS];
     NBDReply reply;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index f22ed7f404..49b1f67047 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -41,10 +41,16 @@ static int nbd_client_connect(BlockDriverState *bs,
                               const char *hostname,
                               Error **errp);
 
-/* @ret would be used for reconnect in future */
 static void nbd_channel_error(NBDClientSession *s, int ret)
 {
-    s->state = NBD_CLIENT_QUIT;
+    if (ret == -EIO) {
+        if (s->state == NBD_CLIENT_CONNECTED) {
+            s->state = NBD_CLIENT_CONNECTING_WAIT;
+            s->connect_attempts = 0;
+        }
+    } else {
+        s->state = NBD_CLIENT_QUIT;
+    }
 }
 
 static void nbd_recv_coroutines_wake_all(NBDClientSession *s)
@@ -90,6 +96,19 @@ typedef struct NBDConnection {
     uint64_t reconnect_timeout;
 } NBDConnection;
 
+static bool nbd_client_connecting(NBDClientSession *client)
+{
+    return client->state == NBD_CLIENT_CONNECTING_WAIT ||
+           client->state == NBD_CLIENT_CONNECTING_NOWAIT ||
+           client->state == NBD_CLIENT_CONNECTING_INIT;
+}
+
+static bool nbd_client_connecting_wait(NBDClientSession *client)
+{
+    return client->state == NBD_CLIENT_CONNECTING_WAIT ||
+           client->state == NBD_CLIENT_CONNECTING_INIT;
+}
+
 static coroutine_fn void nbd_connection_entry(void *opaque)
 {
     NBDConnection *con = opaque;
@@ -98,26 +117,55 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
     int ret = 0;
     Error *local_err = NULL;
 
-    if (con->reconnect_attempts != 0) {
-        error_setg(&s->connect_err, "Reconnect is not supported yet");
-        s->connect_status = -EINVAL;
-        nbd_channel_error(s, s->connect_status);
-        return;
-    }
+    while (s->state != NBD_CLIENT_QUIT) {
+        assert(s->reply.handle == 0);
 
-    s->connect_status = nbd_client_connect(con->bs, con->saddr,
-                                           con->export, con->tlscreds,
-                                           con->hostname, &s->connect_err);
-    if (s->connect_status < 0) {
-        nbd_channel_error(s, s->connect_status);
-        return;
-    }
+        if (nbd_client_connecting(s)) {
+            if (s->connect_attempts == con->reconnect_attempts) {
+                s->state = NBD_CLIENT_CONNECTING_NOWAIT;
+                qemu_co_queue_restart_all(&s->free_sema);
+            }
 
-    /* successfully connected */
-    s->state = NBD_CLIENT_CONNECTED;
+            qemu_co_mutex_lock(&s->send_mutex);
+
+            while (s->in_flight > 0) {
+                qemu_co_mutex_unlock(&s->send_mutex);
+                nbd_recv_coroutines_wake_all(s);
+                s->wait_in_flight = true;
+                qemu_coroutine_yield();
+                s->wait_in_flight = false;
+                qemu_co_mutex_lock(&s->send_mutex);
+            }
+
+            qemu_co_mutex_unlock(&s->send_mutex);
+
+            /* Now we are sure that nobody is accessing the channel and nobody
+             * will try to access it until we set the state to CONNECTED
+             */
+
+            s->connect_status = nbd_client_connect(con->bs, con->saddr,
+                                                   con->export, con->tlscreds,
+                                                   con->hostname, &local_err);
+            s->connect_attempts++;
+            error_free(s->connect_err);
+            s->connect_err = NULL;
+            error_propagate(&s->connect_err, local_err);
+            local_err = NULL;
+            if (s->connect_status == -EINVAL) {
+                /* Protocol error or something like this */
+                nbd_channel_error(s, s->connect_status);
+                continue;
+            }
+            if (s->connect_status < 0) {
+                qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, con->reconnect_timeout);
+                continue;
+            }
+
+            /* successfully connected */
+            s->state = NBD_CLIENT_CONNECTED;
+            qemu_co_queue_restart_all(&s->free_sema);
+        }
 
-    while (s->state != NBD_CLIENT_QUIT) {
-        assert(s->reply.handle == 0);
         s->receiving = true;
         ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
         s->receiving = false;
@@ -158,6 +206,7 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
         qemu_coroutine_yield();
     }
 
+    qemu_co_queue_restart_all(&s->free_sema);
     nbd_recv_coroutines_wake_all(s);
     s->connection_co = NULL;
 }
@@ -170,7 +219,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
     int rc, i = -1;
 
     qemu_co_mutex_lock(&s->send_mutex);
-    while (s->in_flight == MAX_NBD_REQUESTS) {
+    while (s->in_flight == MAX_NBD_REQUESTS || nbd_client_connecting_wait(s)) {
         qemu_co_queue_wait(&s->free_sema, &s->send_mutex);
     }
 
@@ -221,7 +270,11 @@ err:
             s->requests[i].coroutine = NULL;
             s->in_flight--;
         }
-        qemu_co_queue_next(&s->free_sema);
+        if (s->in_flight == 0 && s->wait_in_flight) {
+            aio_co_wake(s->connection_co);
+        } else {
+            qemu_co_queue_next(&s->free_sema);
+        }
     }
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@@ -671,7 +724,11 @@ break_loop:
 
     qemu_co_mutex_lock(&s->send_mutex);
     s->in_flight--;
-    qemu_co_queue_next(&s->free_sema);
+    if (s->in_flight == 0 && s->wait_in_flight) {
+        aio_co_wake(s->connection_co);
+    } else {
+        qemu_co_queue_next(&s->free_sema);
+    }
     qemu_co_mutex_unlock(&s->send_mutex);
 
     return false;
@@ -820,16 +877,21 @@ static int nbd_co_request(BlockDriverState *bs, NBDRequest *request,
     } else {
         assert(request->type != NBD_CMD_WRITE);
     }
-    ret = nbd_co_send_request(bs, request, write_qiov);
-    if (ret < 0) {
-        return ret;
-    }
 
-    ret = nbd_co_receive_return_code(client, request->handle,
-                                     &request_ret, &local_err);
-    if (local_err) {
-        error_report_err(local_err);
-    }
+    do {
+        ret = nbd_co_send_request(bs, request, write_qiov);
+        if (ret < 0) {
+            continue;
+        }
+
+        ret = nbd_co_receive_return_code(client, request->handle,
+                                         &request_ret, &local_err);
+        if (local_err) {
+            error_report_err(local_err);
+            local_err = NULL;
+        }
+    } while (ret < 0 && nbd_client_connecting_wait(client));
+
     return ret ? ret : request_ret;
 }
 
@@ -851,16 +913,21 @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
     if (!bytes) {
         return 0;
     }
-    ret = nbd_co_send_request(bs, &request, NULL);
-    if (ret < 0) {
-        return ret;
-    }
 
-    ret = nbd_co_receive_cmdread_reply(client, request.handle, offset, qiov,
-                                       &request_ret, &local_err);
-    if (local_err) {
-        error_report_err(local_err);
-    }
+    do {
+        ret = nbd_co_send_request(bs, &request, NULL);
+        if (ret < 0) {
+            continue;
+        }
+
+        ret = nbd_co_receive_cmdread_reply(client, request.handle, offset, qiov,
+                                           &request_ret, &local_err);
+        if (local_err) {
+            error_report_err(local_err);
+            local_err = NULL;
+        }
+    } while (ret < 0 && nbd_client_connecting_wait(client));
+
     return ret ? ret : request_ret;
 }
 
@@ -974,16 +1041,21 @@ int coroutine_fn nbd_client_co_block_status(BlockDriverState *bs,
         return BDRV_BLOCK_DATA;
     }
 
-    ret = nbd_co_send_request(bs, &request, NULL);
-    if (ret < 0) {
-        return ret;
-    }
+    do {
+        ret = nbd_co_send_request(bs, &request, NULL);
+        if (ret < 0) {
+            continue;
+        }
+
+        ret = nbd_co_receive_blockstatus_reply(client, request.handle, bytes,
+                                               &extent, &request_ret,
+                                               &local_err);
+        if (local_err) {
+            error_report_err(local_err);
+            local_err = NULL;
+        }
+    } while (ret < 0 && nbd_client_connecting_wait(client));
 
-    ret = nbd_co_receive_blockstatus_reply(client, request.handle, bytes,
-                                           &extent, &request_ret, &local_err);
-    if (local_err) {
-        error_report_err(local_err);
-    }
     if (ret < 0 || request_ret < 0) {
         return ret ? ret : request_ret;
     }
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Qemu-devel] [PATCH v3 11/11] iotests: test nbd reconnect
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (9 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 10/11] block/nbd-client: " Vladimir Sementsov-Ogievskiy
@ 2018-06-09 15:32 ` Vladimir Sementsov-Ogievskiy
  2018-07-03 13:46 ` [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
  11 siblings, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-09 15:32 UTC (permalink / raw)
  To: qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, eblake, vsementsov, den

Add a test which starts a backup to an NBD target and restarts the NBD
server during the backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/220        | 68 +++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/220.out    |  7 +++++
 tests/qemu-iotests/group      |  1 +
 tests/qemu-iotests/iotests.py |  4 +++
 4 files changed, 80 insertions(+)
 create mode 100755 tests/qemu-iotests/220
 create mode 100644 tests/qemu-iotests/220.out

diff --git a/tests/qemu-iotests/220 b/tests/qemu-iotests/220
new file mode 100755
index 0000000000..f1c26a2796
--- /dev/null
+++ b/tests/qemu-iotests/220
@@ -0,0 +1,68 @@
+#!/usr/bin/env python
+#
+# Test nbd reconnect
+#
+# Copyright (c) 2018 Virtuozzo International GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import time
+
+import iotests
+from iotests import qemu_img_create, file_path, qemu_nbd_popen
+
+disk_a, disk_b, nbd_sock = file_path('disk_a', 'disk_b', 'nbd-sock')
+nbd_uri = 'nbd+unix:///exp?socket=' + nbd_sock
+
+qemu_img_create('-f', iotests.imgfmt, disk_a, '5M')
+qemu_img_create('-f', iotests.imgfmt, disk_b, '5M')
+srv = qemu_nbd_popen('-k', nbd_sock, '-x', 'exp', '-f', iotests.imgfmt, disk_b)
+time.sleep(1)
+
+vm = iotests.VM().add_drive(disk_a)
+vm.launch()
+vm.hmp_qemu_io('drive0', 'write 0 5M')
+
+print 'blockdev-add:', vm.qmp('blockdev-add', node_name='backup0', driver='raw',
+                              file={'driver':'nbd',
+                                    'reconnect-attempts':10,
+                                    'export': 'exp',
+                                    'server': {'type': 'unix',
+                                               'path': nbd_sock}})
+print 'blockdev-backup:', vm.qmp('blockdev-backup', device='drive0',
+                                 sync='full', target='backup0')
+
+time.sleep(1)
+print 'Kill NBD server'
+srv.kill()
+
+jobs = vm.qmp('query-block-jobs')['return']
+if jobs and jobs[0]['offset'] < jobs[0]['len']:
+    print 'Backup job is still in progress'
+
+time.sleep(1)
+
+print 'Start NBD server'
+srv = qemu_nbd_popen('-k', nbd_sock, '-x', 'exp', '-f', iotests.imgfmt, disk_b)
+
+try:
+    e = vm.event_wait('BLOCK_JOB_COMPLETED')
+    print e['event'], ':', e['data']
+except:
+    pass
+
+print 'blockdev-del:', vm.qmp('blockdev-del', node_name='backup0')
+srv.kill()
+vm.shutdown()
diff --git a/tests/qemu-iotests/220.out b/tests/qemu-iotests/220.out
new file mode 100644
index 0000000000..dae1a49d9f
--- /dev/null
+++ b/tests/qemu-iotests/220.out
@@ -0,0 +1,7 @@
+blockdev-add: {u'return': {}}
+blockdev-backup: {u'return': {}}
+Kill NBD server
+Backup job is still in progress
+Start NBD server
+BLOCK_JOB_COMPLETED : {u'device': u'drive0', u'type': u'backup', u'speed': 0, u'len': 5242880, u'offset': 5242880}
+blockdev-del: {u'return': {}}
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index 93f93d71ba..cc5a1cf2a4 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -217,3 +217,4 @@
 216 rw auto quick
 218 rw auto quick
 219 rw auto
+220 rw auto quick
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index fdbdd8b300..7071318c9e 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -175,6 +175,10 @@ def qemu_nbd(*args):
     '''Run qemu-nbd in daemon mode and return the parent's exit code'''
     return subprocess.call(qemu_nbd_args + ['--fork'] + list(args))
 
+def qemu_nbd_popen(*args):
+    '''Start a persistent qemu-nbd process and return its Popen object'''
+    return subprocess.Popen(qemu_nbd_args + ['--persistent'] + list(args))
+
 def compare_images(img1, img2, fmt1=imgfmt, fmt2=imgfmt):
     '''Return True if two image files are identical'''
     return qemu_img('compare', '-f', fmt1,
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [PATCH v3 10/11] block/nbd-client: nbd reconnect
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 10/11] block/nbd-client: " Vladimir Sementsov-Ogievskiy
@ 2018-06-12 12:47   ` Vladimir Sementsov-Ogievskiy
  2018-06-12 14:51   ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-12 12:47 UTC (permalink / raw)
  To: qemu-devel, qemu-block; +Cc: armbru, mreitz, kwolf, pbonzini, eblake, den

09.06.2018 18:32, Vladimir Sementsov-Ogievskiy wrote:
> Implement reconnect. To achieve this:
>
> 1. Move from quit bool variable to state. 4 states are introduced:
>     connecting-wait: means, that reconnecting is in progress, and there
>       were small number of reconnect attempts, so all requests are
>       waiting for the connection.
>     connecting-nowait: reconnecting is in progress, there were a lot of
>       attempts of reconnect, all requests will return errors.
>     connected: normal state
>     quit: exiting after fatal error or on close
>
> Possible transitions are:
>
>     * -> quit
>     connecting-* -> connected
>     connecting-wait -> connecting-nowait
>     connected -> connecting-wait
>
> 2. Implement reconnect in connection_co. So, in connecting-* mode,
>      connection_co, tries to reconnect every NBD_RECONNECT_NS.
>      Configuring of this parameter (as well as NBD_RECONNECT_ATTEMPTS,
>      which specifies bound of transition from connecting-wait to
>      connecting-nowait) may be done as a follow-up patch.
>
> 3. Retry nbd queries on channel error, if we are in connecting-wait
>      state.
>
> 4. In init, wait until for connection until transition to
>      connecting-nowait. So, NBD_RECONNECT_ATTEMPTS is a bound of fail
>      for initial connection too.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

squash:

@@ -616,7 +617,10 @@ static coroutine_fn int nbd_co_receive_one_chunk(
          s->reply.handle = 0;
      }

-    if (s->connection_co) {
+    if (s->connection_co && !s->wait_in_flight) {
+        /* We must check s->wait_in_flight, because we may have been
+         * entered from nbd_recv_coroutines_wake_all(); in that case we
+         * should not wake connection_co here, it will be woken by the
+         * last request. */
          aio_co_wake(s->connection_co);
      }



-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [PATCH v3 10/11] block/nbd-client: nbd reconnect
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 10/11] block/nbd-client: " Vladimir Sementsov-Ogievskiy
  2018-06-12 12:47   ` Vladimir Sementsov-Ogievskiy
@ 2018-06-12 14:51   ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-06-12 14:51 UTC (permalink / raw)
  To: qemu-devel, qemu-block; +Cc: armbru, mreitz, kwolf, pbonzini, eblake, den

09.06.2018 18:32, Vladimir Sementsov-Ogievskiy wrote:
> Implement reconnect. To achieve this:
>
> 1. Move from quit bool variable to state. 4 states are introduced:
>     connecting-wait: means, that reconnecting is in progress, and there
>       were small number of reconnect attempts, so all requests are
>       waiting for the connection.
>     connecting-nowait: reconnecting is in progress, there were a lot of
>       attempts of reconnect, all requests will return errors.
>     connected: normal state
>     quit: exiting after fatal error or on close
>
> Possible transitions are:
>
>     * -> quit
>     connecting-* -> connected
>     connecting-wait -> connecting-nowait
>     connected -> connecting-wait
>
> 2. Implement reconnect in connection_co. So, in connecting-* mode,
>      connection_co, tries to reconnect every NBD_RECONNECT_NS.
>      Configuring of this parameter (as well as NBD_RECONNECT_ATTEMPTS,
>      which specifies bound of transition from connecting-wait to
>      connecting-nowait) may be done as a follow-up patch.
>
> 3. Retry nbd queries on channel error, if we are in connecting-wait
>      state.
>
> 4. In init, wait until for connection until transition to
>      connecting-nowait. So, NBD_RECONNECT_ATTEMPTS is a bound of fail
>      for initial connection too.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>   block/nbd-client.h |   2 +
>   block/nbd-client.c | 170 ++++++++++++++++++++++++++++++++++++++---------------
>   2 files changed, 123 insertions(+), 49 deletions(-)
>
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 2561e1ea42..1249f2eb52 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -44,6 +44,8 @@ typedef struct NBDClientSession {
>       bool receiving;
>       int connect_status;
>       Error *connect_err;
> +    int connect_attempts;
> +    bool wait_in_flight;
>   
>       NBDClientRequest requests[MAX_NBD_REQUESTS];
>       NBDReply reply;
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index f22ed7f404..49b1f67047 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -41,10 +41,16 @@ static int nbd_client_connect(BlockDriverState *bs,
>                                 const char *hostname,
>                                 Error **errp);
>   
> -/* @ret would be used for reconnect in future */
>   static void nbd_channel_error(NBDClientSession *s, int ret)
>   {
> -    s->state = NBD_CLIENT_QUIT;
> +    if (ret == -EIO) {
> +        if (s->state == NBD_CLIENT_CONNECTED) {
> +            s->state = NBD_CLIENT_CONNECTING_WAIT;
> +            s->connect_attempts = 0;
> +        }
> +    } else {
> +        s->state = NBD_CLIENT_QUIT;
> +    }
>   }
>   
>   static void nbd_recv_coroutines_wake_all(NBDClientSession *s)
> @@ -90,6 +96,19 @@ typedef struct NBDConnection {
>       uint64_t reconnect_timeout;
>   } NBDConnection;
>   
> +static bool nbd_client_connecting(NBDClientSession *client)
> +{
> +    return client->state == NBD_CLIENT_CONNECTING_WAIT ||
> +           client->state == NBD_CLIENT_CONNECTING_NOWAIT ||
> +           client->state == NBD_CLIENT_CONNECTING_INIT;
> +}
> +
> +static bool nbd_client_connecting_wait(NBDClientSession *client)
> +{
> +    return client->state == NBD_CLIENT_CONNECTING_WAIT ||
> +           client->state == NBD_CLIENT_CONNECTING_INIT;
> +}
> +
>   static coroutine_fn void nbd_connection_entry(void *opaque)
>   {
>       NBDConnection *con = opaque;
> @@ -98,26 +117,55 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
>       int ret = 0;
>       Error *local_err = NULL;
>   
> -    if (con->reconnect_attempts != 0) {
> -        error_setg(&s->connect_err, "Reconnect is not supported yet");
> -        s->connect_status = -EINVAL;
> -        nbd_channel_error(s, s->connect_status);
> -        return;
> -    }
> +    while (s->state != NBD_CLIENT_QUIT) {
> +        assert(s->reply.handle == 0);
>   
> -    s->connect_status = nbd_client_connect(con->bs, con->saddr,
> -                                           con->export, con->tlscreds,
> -                                           con->hostname, &s->connect_err);
> -    if (s->connect_status < 0) {
> -        nbd_channel_error(s, s->connect_status);
> -        return;
> -    }
> +        if (nbd_client_connecting(s)) {
> +            if (s->connect_attempts == con->reconnect_attempts) {
> +                s->state = NBD_CLIENT_CONNECTING_NOWAIT;
> +                qemu_co_queue_restart_all(&s->free_sema);
> +            }
>   
> -    /* successfully connected */
> -    s->state = NBD_CLIENT_CONNECTED;
> +            qemu_co_mutex_lock(&s->send_mutex);
> +
> +            while (s->in_flight > 0) {
> +                qemu_co_mutex_unlock(&s->send_mutex);
> +                nbd_recv_coroutines_wake_all(s);
> +                s->wait_in_flight = true;
> +                qemu_coroutine_yield();
> +                s->wait_in_flight = false;
> +                qemu_co_mutex_lock(&s->send_mutex);
> +            }
> +
> +            qemu_co_mutex_unlock(&s->send_mutex);
> +
> +            /* Now we are sure, that nobody accessing the channel now and nobody
> +             * will try to access the channel, until we set state to CONNECTED
> +             */
> +
> +            s->connect_status = nbd_client_connect(con->bs, con->saddr,
> +                                                   con->export, con->tlscreds,
> +                                                   con->hostname, &local_err);

The previous s->ioc is leaked here; closing the previous connection needs
more actions.

> +            s->connect_attempts++;
> +            error_free(s->connect_err);
> +            s->connect_err = NULL;
> +            error_propagate(&s->connect_err, local_err);
> +            local_err = NULL;
> +            if (s->connect_status == -EINVAL) {
> +                /* Protocol error or something like this */
> +                nbd_channel_error(s, s->connect_status);
> +                continue;
> +            }
> +            if (s->connect_status < 0) {
> +                qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, con->reconnect_timeout);
> +                continue;
> +            }
> +
> +            /* successfully connected */
> +            s->state = NBD_CLIENT_CONNECTED;
> +            qemu_co_queue_restart_all(&s->free_sema);
> +        }
>   
> -    while (s->state != NBD_CLIENT_QUIT) {
> -        assert(s->reply.handle == 0);
>           s->receiving = true;
>           ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
>           s->receiving = false;
> @@ -158,6 +206,7 @@ stat



-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [PATCH v3 00/11] NBD reconnect
  2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
                   ` (10 preceding siblings ...)
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 11/11] iotests: test " Vladimir Sementsov-Ogievskiy
@ 2018-07-03 13:46 ` Vladimir Sementsov-Ogievskiy
  2018-07-03 16:31   ` Eric Blake
  11 siblings, 1 reply; 17+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-07-03 13:46 UTC (permalink / raw)
  To: qemu-devel, qemu-block; +Cc: armbru, mreitz, kwolf, pbonzini, eblake, den

Hi all.

Before starting on the v4 implementation, I'd like to discuss some questions.

Our proposal for v4 is the following:

1. Don't reconnect in nbd_open. On open we make only one connection
attempt, and if it fails, open fails.
2. Don't make the timeout between attempts configurable. Instead use
     exponential backoff: a 1s timeout, then 2s, then 4, 8, 16, and then
     always 16s until success (see the sketch after this list).
3. Make configurable only the time, after a disconnect, during which
requests are paused (after that time, if still not connected, they will
return EIO). Or don't make it configurable for now and default to 5
minutes.
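
A rough sketch of the backoff in point 2, in QEMU-style C (the helper
name and its placement are purely illustrative and not part of the posted
series; only qemu_co_sleep_ns()/QEMU_CLOCK_REALTIME are taken from the
patches):

    /* Needs qemu/osdep.h (for MIN) and qemu/timer.h (for
     * NANOSECONDS_PER_SECOND). */
    static uint64_t reconnect_delay_ns(unsigned int attempt)
    {
        /* 1s, 2s, 4s, 8s, 16s, then 16s for every further attempt */
        return NANOSECONDS_PER_SECOND << MIN(attempt, 4u);
    }

The reconnect loop in nbd_connection_entry() would then sleep with
qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, reconnect_delay_ns(s->connect_attempts))
instead of using a fixed, user-configured reconnect-timeout.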

Any ideas?


09.06.2018 18:32, Vladimir Sementsov-Ogievskiy wrote:
> Hi all.
>
> Here is NBD reconnect.
> The feature realized inside nbd-client driver and works as follows:
>
> There are two parameters: reconnect-attempts and reconnect-timeout.
> So, we will try to reconnect in case of initial connection failed or
> in case of connection lost. All current and new io operations will wait
> until we make reconnect-attempts tries to reconnect. After this, all
> requests will fail with EIO, but we will continue trying to reconnect.
>
> v3:
> 06: fix build error in function 'nbd_co_send_request':
>       error: 'i' may be used uninitialized in this function
>
> v2 notes:
> Here is v2 of NBD reconnect, but it is very very different from v1, so,
> forget about v1.
> The series includes my "NBD reconnect: preliminary refactoring", with
> changes in 05: leave asserts (Eric).
>
> Vladimir Sementsov-Ogievskiy (11):
>    block/nbd-client: split channel errors from export errors
>    block/nbd: move connection code from block/nbd to block/nbd-client
>    block/nbd-client: split connection from initialization
>    block/nbd-client: fix nbd_reply_chunk_iter_receive
>    block/nbd-client: don't check ioc
>    block/nbd-client: move from quit to state
>    block/nbd-client: rename read_reply_co to connection_co
>    block/nbd-client: move connecting to connection_co
>    block/nbd: add cmdline and qapi parameters for nbd reconnect
>    block/nbd-client: nbd reconnect
>    iotests: test nbd reconnect
>
>   qapi/block-core.json          |  12 +-
>   block/nbd-client.h            |  23 ++-
>   block/nbd-client.c            | 429 ++++++++++++++++++++++++++++++------------
>   block/nbd.c                   |  61 +++---
>   tests/qemu-iotests/220        |  68 +++++++
>   tests/qemu-iotests/220.out    |   7 +
>   tests/qemu-iotests/group      |   1 +
>   tests/qemu-iotests/iotests.py |   4 +
>   8 files changed, 445 insertions(+), 160 deletions(-)
>   create mode 100755 tests/qemu-iotests/220
>   create mode 100644 tests/qemu-iotests/220.out
>


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [PATCH v3 00/11] NBD reconnect
  2018-07-03 13:46 ` [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
@ 2018-07-03 16:31   ` Eric Blake
  0 siblings, 0 replies; 17+ messages in thread
From: Eric Blake @ 2018-07-03 16:31 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, den

On 07/03/2018 08:46 AM, Vladimir Sementsov-Ogievskiy wrote:
> Hi all.
> 
> before v4 realization, I'd like to discuss some questions.
> 
> Our proposal for v4 is the following:
> 
> 1. don't reconnect on nbd_open. So, on open we do only one connect 
> attempt, and if it fails, open fails.
> 2. don't configure timeout between attempts. instead do the following:
>      1s timeout, then 2s, then 4, 8, 16, and then always 16 until success
> 3. configure only time, after disconnect, during which requests are 
> paused (and after this time, if not connected, they will return EIO). Or 
> not configure it for now, make a default of 5 minutes.
> 
> Any ideas?

I apologize that I haven't had time to review this series closely.  At 
this point, I think it's missed 3.0, and will have to be 3.1 material. 
Your proposal for exponential backoff on reconnect attempts makes sense; 
beyond that, I haven't reviewed closely enough to know if I have other 
suggestions on how many knobs to expose to the user, vs. how much to 
make automatic.

> 
> 
> 09.06.2018 18:32, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all.
>>
>> Here is NBD reconnect.
>> The feature realized inside nbd-client driver and works as follows:
>>
>> There are two parameters: reconnect-attempts and reconnect-timeout.
>> So, we will try to reconnect in case of initial connection failed or
>> in case of connection lost. All current and new io operations will wait
>> until we make reconnect-attempts tries to reconnect. After this, all
>> requests will fail with EIO, but we will continue trying to reconnect.
>>
-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [PATCH v3 01/11] block/nbd-client: split channel errors from export errors
  2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 01/11] block/nbd-client: split channel errors from export errors Vladimir Sementsov-Ogievskiy
@ 2018-07-20 20:14   ` Eric Blake
  0 siblings, 0 replies; 17+ messages in thread
From: Eric Blake @ 2018-07-20 20:14 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-devel, qemu-block
  Cc: armbru, mreitz, kwolf, pbonzini, den

On 06/09/2018 10:32 AM, Vladimir Sementsov-Ogievskiy wrote:
> To implement nbd reconnect in further patches, we need to distinguish
> error codes, returned by nbd server, from channel errors, to reconnect
> only in the latter case.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>   block/nbd-client.c | 83 +++++++++++++++++++++++++++++++-----------------------
>   1 file changed, 47 insertions(+), 36 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-07-20 20:14 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-09 15:32 [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 01/11] block/nbd-client: split channel errors from export errors Vladimir Sementsov-Ogievskiy
2018-07-20 20:14   ` Eric Blake
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 02/11] block/nbd: move connection code from block/nbd to block/nbd-client Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 03/11] block/nbd-client: split connection from initialization Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 04/11] block/nbd-client: fix nbd_reply_chunk_iter_receive Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 05/11] block/nbd-client: don't check ioc Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 06/11] block/nbd-client: move from quit to state Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 07/11] block/nbd-client: rename read_reply_co to connection_co Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 08/11] block/nbd-client: move connecting " Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 09/11] block/nbd: add cmdline and qapi parameters for nbd reconnect Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 10/11] block/nbd-client: " Vladimir Sementsov-Ogievskiy
2018-06-12 12:47   ` Vladimir Sementsov-Ogievskiy
2018-06-12 14:51   ` Vladimir Sementsov-Ogievskiy
2018-06-09 15:32 ` [Qemu-devel] [PATCH v3 11/11] iotests: test " Vladimir Sementsov-Ogievskiy
2018-07-03 13:46 ` [Qemu-devel] [PATCH v3 00/11] NBD reconnect Vladimir Sementsov-Ogievskiy
2018-07-03 16:31   ` Eric Blake
