* [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing
@ 2017-08-04 15:14 Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 01/17] nbd/client: fix nbd_opt_go Vladimir Sementsov-Ogievskiy
                   ` (18 more replies)
  0 siblings, 19 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

A bit more refactoring and fixing before the BLOCK_STATUS series.
I've tried to keep the individual patches simple, so there are
a lot of them.

Vladimir Sementsov-Ogievskiy (17):
  nbd/client: fix nbd_opt_go
  nbd/client: refactor nbd_read_eof
  nbd/client: refactor nbd_receive_reply
  nbd/client: fix nbd_send_request to return int
  block/nbd-client: get rid of ssize_t
  block/nbd-client: fix nbd_read_reply_entry
  block/nbd-client: refactor request send/receive
  block/nbd-client: rename nbd_recv_coroutines_enter_all
  block/nbd-client: move nbd_co_receive_reply content into
    nbd_co_request
  block/nbd-client: move nbd_coroutine_end content into nbd_co_request
  block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on
    error
  block/nbd-client: refactor nbd_co_request
  block/nbd-client: refactor NBDClientSession.recv_coroutine
  block/nbd-client: exit reply-reading coroutine on incorrect handle
  block/nbd-client: refactor reading reply
  block/nbd-client: drop reply field from NBDClientSession
  block/nbd-client: always return EIO on and after the first io channel
    error

 block/nbd-client.h         |   9 ++-
 include/block/nbd.h        |   4 +-
 nbd/nbd-internal.h         |  34 ++++++---
 block/nbd-client.c         | 173 ++++++++++++++++++---------------------------
 nbd/client.c               |  21 +++---
 tests/qemu-iotests/083.out |   4 +-
 6 files changed, 115 insertions(+), 130 deletions(-)

-- 
2.11.1

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 01/17] nbd/client: fix nbd_opt_go
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-07 11:31   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof Vladimir Sementsov-Ogievskiy
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Do not send NBD_OPT_ABORT to a broken server. After sending
NBD_REP_ACK in response to NBD_OPT_GO, the server has most probably
entered the transmission phase, where option negotiation is finished.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 nbd/client.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/nbd/client.c b/nbd/client.c
index 0a17de80b5..f1c16b588f 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -399,12 +399,10 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
                phase, but make sure it sent flags */
             if (len) {
                 error_setg(errp, "server sent invalid NBD_REP_ACK");
-                nbd_send_opt_abort(ioc);
                 return -1;
             }
             if (!info->flags) {
                 error_setg(errp, "broken server omitted NBD_INFO_EXPORT");
-                nbd_send_opt_abort(ioc);
                 return -1;
             }
             trace_nbd_opt_go_success();
-- 
2.11.1


* [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 01/17] nbd/client: fix nbd_opt_go Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-07 11:42   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 03/17] nbd/client: refactor nbd_receive_reply Vladimir Sementsov-Ogievskiy
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Refactor nbd_read_eof to return 1 on success, 0 on EOF (when no
data was read), and < 0 in other cases, since the returned size of
the read data is not actually used.
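
The new calling convention can be sketched outside of QEMU with a stand-in for the channel (Source, read_all() and read_eof() below are illustrative names, not QEMU APIs):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

/* Stand-in for the QIOChannel: read_all() mimics nbd_rwv(), returning the
 * number of bytes actually copied (0 once the source is exhausted). */
typedef struct {
    const char *data;
    size_t len, pos;
} Source;

static ssize_t read_all(Source *s, void *buf, size_t size)
{
    size_t avail = s->len - s->pos;
    size_t n = size < avail ? size : avail;

    memcpy(buf, s->data + s->pos, n);
    s->pos += n;
    return (ssize_t)n;
}

/* The refactored convention: 1 on success, 0 on clean EOF (no bytes read),
 * -EINVAL on a short read that hit EOF in the middle of a record. */
static int read_eof(Source *s, void *buf, size_t size)
{
    ssize_t ret;

    assert(size > 0);
    ret = read_all(s, buf, size);
    if (ret <= 0) {
        return (int)ret;
    }
    if ((size_t)ret != size) {
        return -EINVAL;
    }
    return 1;
}
```

With this shape, a caller such as nbd_read only has to distinguish three cases instead of comparing a byte count against the requested size.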

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 nbd/nbd-internal.h         | 34 +++++++++++++++++++++++++---------
 nbd/client.c               |  5 -----
 tests/qemu-iotests/083.out |  4 ++--
 3 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/nbd/nbd-internal.h b/nbd/nbd-internal.h
index 396ddb4d3e..3fb0b6098a 100644
--- a/nbd/nbd-internal.h
+++ b/nbd/nbd-internal.h
@@ -77,21 +77,37 @@
 #define NBD_ESHUTDOWN  108
 
 /* nbd_read_eof
- * Tries to read @size bytes from @ioc. Returns number of bytes actually read.
- * May return a value >= 0 and < size only on EOF, i.e. when iteratively called
- * qio_channel_readv() returns 0. So, there is no need to call nbd_read_eof
- * iteratively.
+ * Tries to read @size bytes from @ioc.
+ * Returns 1 on success
+ *         0 on eof, when no data was read (errp is not set)
+ *         -EINVAL on eof, when some data < @size was read until eof
+ *         < 0 on read fail
  */
-static inline ssize_t nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
-                                   Error **errp)
+static inline int nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
+                               Error **errp)
 {
     struct iovec iov = { .iov_base = buffer, .iov_len = size };
+    ssize_t ret;
+
     /* Sockets are kept in blocking mode in the negotiation phase.  After
      * that, a non-readable socket simply means that another thread stole
      * our request/reply.  Synchronization is done with recv_coroutine, so
      * that this is coroutine-safe.
      */
-    return nbd_rwv(ioc, &iov, 1, size, true, errp);
+
+    assert(size > 0);
+
+    ret = nbd_rwv(ioc, &iov, 1, size, true, errp);
+    if (ret <= 0) {
+        return ret;
+    }
+
+    if (ret != size) {
+        error_setg(errp, "End of file");
+        return -EINVAL;
+    }
+
+    return 1;
 }
 
 /* nbd_read
@@ -100,9 +116,9 @@ static inline ssize_t nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
 static inline int nbd_read(QIOChannel *ioc, void *buffer, size_t size,
                            Error **errp)
 {
-    ssize_t ret = nbd_read_eof(ioc, buffer, size, errp);
+    int ret = nbd_read_eof(ioc, buffer, size, errp);
 
-    if (ret >= 0 && ret != size) {
+    if (ret == 0) {
         ret = -EINVAL;
         error_setg(errp, "End of file");
     }
diff --git a/nbd/client.c b/nbd/client.c
index f1c16b588f..4556056daa 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -925,11 +925,6 @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
         return ret;
     }
 
-    if (ret != sizeof(buf)) {
-        error_setg(errp, "read failed");
-        return -EINVAL;
-    }
-
     /* Reply
        [ 0 ..  3]    magic   (NBD_REPLY_MAGIC)
        [ 4 ..  7]    error   (0 == no error)
diff --git a/tests/qemu-iotests/083.out b/tests/qemu-iotests/083.out
index a24c6bfece..d3bea1b2f5 100644
--- a/tests/qemu-iotests/083.out
+++ b/tests/qemu-iotests/083.out
@@ -69,12 +69,12 @@ read failed: Input/output error
 
 === Check disconnect 4 reply ===
 
-read failed
+End of file
 read failed: Input/output error
 
 === Check disconnect 8 reply ===
 
-read failed
+End of file
 read failed: Input/output error
 
 === Check disconnect before data ===
-- 
2.11.1


* [Qemu-devel] [PATCH 03/17] nbd/client: refactor nbd_receive_reply
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 01/17] nbd/client: fix nbd_opt_go Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-25 21:16   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int Vladimir Sementsov-Ogievskiy
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Refactor nbd_receive_reply to return 1 on success, 0 on EOF (when no
data was read), and < 0 in other cases, since the returned size of
the read data is not actually used.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h |  2 +-
 nbd/client.c        | 12 +++++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 9c3d0a5868..f7450608b4 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -164,7 +164,7 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
 int nbd_init(int fd, QIOChannelSocket *sioc, NBDExportInfo *info,
              Error **errp);
 ssize_t nbd_send_request(QIOChannel *ioc, NBDRequest *request);
-ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp);
+int nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp);
 int nbd_client(int fd);
 int nbd_disconnect(int fd);
 
diff --git a/nbd/client.c b/nbd/client.c
index 4556056daa..a1758a1931 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -914,11 +914,16 @@ ssize_t nbd_send_request(QIOChannel *ioc, NBDRequest *request)
     return nbd_write(ioc, buf, sizeof(buf), NULL);
 }
 
-ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
+/* nbd_receive_reply
+ * Returns 1 on success
+ *         0 on eof, when no data was read from @ioc (errp is not set)
+ *         < 0 on fail
+ */
+int nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
 {
     uint8_t buf[NBD_REPLY_SIZE];
     uint32_t magic;
-    ssize_t ret;
+    int ret;
 
     ret = nbd_read_eof(ioc, buf, sizeof(buf), errp);
     if (ret <= 0) {
@@ -948,6 +953,7 @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
         error_setg(errp, "invalid magic (got 0x%" PRIx32 ")", magic);
         return -EINVAL;
     }
-    return sizeof(buf);
+
+    return 1;
 }
 
-- 
2.11.1


* [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 03/17] nbd/client: refactor nbd_receive_reply Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-07  8:23   ` Daniel P. Berrange
  2017-08-25 21:20   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t Vladimir Sementsov-Ogievskiy
                   ` (14 subsequent siblings)
  18 siblings, 2 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Fix nbd_send_request to return int, as it passes through the return
value of nbd_write (which is int), and the only user of nbd_send_request's
return value (nbd_co_send_request) treats it as int too.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h | 2 +-
 nbd/client.c        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index f7450608b4..040cdd2e60 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -163,7 +163,7 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
                           Error **errp);
 int nbd_init(int fd, QIOChannelSocket *sioc, NBDExportInfo *info,
              Error **errp);
-ssize_t nbd_send_request(QIOChannel *ioc, NBDRequest *request);
+int nbd_send_request(QIOChannel *ioc, NBDRequest *request);
 int nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp);
 int nbd_client(int fd);
 int nbd_disconnect(int fd);
diff --git a/nbd/client.c b/nbd/client.c
index a1758a1931..00cba45853 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -896,7 +896,7 @@ int nbd_disconnect(int fd)
 }
 #endif
 
-ssize_t nbd_send_request(QIOChannel *ioc, NBDRequest *request)
+int nbd_send_request(QIOChannel *ioc, NBDRequest *request)
 {
     uint8_t buf[NBD_REQUEST_SIZE];
 
-- 
2.11.1


* [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-04 16:11   ` Daniel P. Berrange
  2017-08-25 21:25   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
                   ` (13 subsequent siblings)
  18 siblings, 2 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Use an int variable for the nbd_co_send_request return value
(nbd_co_send_request returns int).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 25dd28406b..dc19894a7c 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -214,7 +214,7 @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
         .len = bytes,
     };
     NBDReply reply;
-    ssize_t ret;
+    int ret;
 
     assert(bytes <= NBD_MAX_BUFFER_SIZE);
     assert(!flags);
@@ -239,7 +239,7 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
         .len = bytes,
     };
     NBDReply reply;
-    ssize_t ret;
+    int ret;
 
     if (flags & BDRV_REQ_FUA) {
         assert(client->info.flags & NBD_FLAG_SEND_FUA);
@@ -261,7 +261,7 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
 int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
                                 int bytes, BdrvRequestFlags flags)
 {
-    ssize_t ret;
+    int ret;
     NBDClientSession *client = nbd_get_client_session(bs);
     NBDRequest request = {
         .type = NBD_CMD_WRITE_ZEROES,
@@ -297,7 +297,7 @@ int nbd_client_co_flush(BlockDriverState *bs)
     NBDClientSession *client = nbd_get_client_session(bs);
     NBDRequest request = { .type = NBD_CMD_FLUSH };
     NBDReply reply;
-    ssize_t ret;
+    int ret;
 
     if (!(client->info.flags & NBD_FLAG_SEND_FLUSH)) {
         return 0;
@@ -325,7 +325,7 @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int bytes)
         .len = bytes,
     };
     NBDReply reply;
-    ssize_t ret;
+    int ret;
 
     if (!(client->info.flags & NBD_FLAG_SEND_TRIM)) {
         return 0;
-- 
2.11.1


* [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (4 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-07 11:52   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 07/17] block/nbd-client: refactor request send/receive Vladimir Sementsov-Ogievskiy
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Set reply.handle to 0 on the error path to prevent nbd_co_receive_reply
from taking its normal path.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index dc19894a7c..0c88d84de6 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -107,6 +107,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
         qemu_coroutine_yield();
     }
 
+    s->reply.handle = 0;
     nbd_recv_coroutines_enter_all(s);
     s->read_reply_co = NULL;
 }
-- 
2.11.1


* [Qemu-devel] [PATCH 07/17] block/nbd-client: refactor request send/receive
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (5 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-25 18:49   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 08/17] block/nbd-client: rename nbd_recv_coroutines_enter_all Vladimir Sementsov-Ogievskiy
                   ` (11 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Move nbd_co_receive_reply and nbd_coroutine_end calls into
nbd_co_send_request and rename the latter to just nbd_co_request.

This removes code duplication in the nbd_client_co_{pwrite,pread,...}
functions, and it is also needed for further refactoring.
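
The shape of the refactoring can be sketched independently of QEMU (all names below are illustrative, not QEMU's): the common send/receive/cleanup sequence is hoisted into one helper, so each per-command wrapper becomes a one-liner.

```c
#include <assert.h>

/* Toy request type; fail_send simulates a send error. */
typedef struct {
    int type;
    int fail_send;
} Request;

static int cleanup_calls;

static int send_request(Request *req)
{
    return req->fail_send ? -1 : 0;
}

static int receive_reply(Request *req)
{
    (void)req;
    return 0;
}

static void coroutine_end(void)
{
    cleanup_calls++;
}

/* One helper owns the whole send/receive/cleanup sequence. */
static int do_request(Request *req)
{
    int rc = send_request(req);

    if (rc == 0) {
        rc = receive_reply(req);
    }
    coroutine_end(); /* runs on success and error paths alike */
    return rc;
}

/* A per-command wrapper now just builds the request and delegates. */
static int client_flush(void)
{
    Request req = { .type = 3 };
    return do_request(&req);
}
```

The duplicated error-handling blocks in each wrapper collapse into the single error path of do_request, which is what makes the later refactoring steps tractable.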

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 89 ++++++++++++++++++++----------------------------------
 1 file changed, 33 insertions(+), 56 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 0c88d84de6..c9ade9b517 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -112,12 +112,20 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     s->read_reply_co = NULL;
 }
 
-static int nbd_co_send_request(BlockDriverState *bs,
-                               NBDRequest *request,
-                               QEMUIOVector *qiov)
+static void nbd_co_receive_reply(NBDClientSession *s,
+                                 NBDRequest *request,
+                                 NBDReply *reply,
+                                 QEMUIOVector *qiov);
+static void nbd_coroutine_end(BlockDriverState *bs,
+                              NBDRequest *request);
+
+static int nbd_co_request(BlockDriverState *bs,
+                          NBDRequest *request,
+                          QEMUIOVector *qiov)
 {
     NBDClientSession *s = nbd_get_client_session(bs);
     int rc, ret, i;
+    NBDReply reply;
 
     qemu_co_mutex_lock(&s->send_mutex);
     while (s->in_flight == MAX_NBD_REQUESTS) {
@@ -141,7 +149,8 @@ static int nbd_co_send_request(BlockDriverState *bs,
         return -EPIPE;
     }
 
-    if (qiov) {
+    if (request->type == NBD_CMD_WRITE) {
+        assert(qiov != NULL);
         qio_channel_set_cork(s->ioc, true);
         rc = nbd_send_request(s->ioc, request);
         if (rc >= 0) {
@@ -156,6 +165,21 @@ static int nbd_co_send_request(BlockDriverState *bs,
         rc = nbd_send_request(s->ioc, request);
     }
     qemu_co_mutex_unlock(&s->send_mutex);
+
+    if (rc < 0) {
+        goto out;
+    }
+
+    if (request->type == NBD_CMD_READ) {
+        assert(qiov != NULL);
+    } else {
+        qiov = NULL;
+    }
+    nbd_co_receive_reply(s, request, &reply, qiov);
+    rc = -reply.error;
+
+out:
+    nbd_coroutine_end(bs, request);
     return rc;
 }
 
@@ -208,26 +232,16 @@ static void nbd_coroutine_end(BlockDriverState *bs,
 int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
                          uint64_t bytes, QEMUIOVector *qiov, int flags)
 {
-    NBDClientSession *client = nbd_get_client_session(bs);
     NBDRequest request = {
         .type = NBD_CMD_READ,
         .from = offset,
         .len = bytes,
     };
-    NBDReply reply;
-    int ret;
 
     assert(bytes <= NBD_MAX_BUFFER_SIZE);
     assert(!flags);
 
-    ret = nbd_co_send_request(bs, &request, NULL);
-    if (ret < 0) {
-        reply.error = -ret;
-    } else {
-        nbd_co_receive_reply(client, &request, &reply, qiov);
-    }
-    nbd_coroutine_end(bs, &request);
-    return -reply.error;
+    return nbd_co_request(bs, &request, qiov);
 }
 
 int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
@@ -239,8 +253,6 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
         .from = offset,
         .len = bytes,
     };
-    NBDReply reply;
-    int ret;
 
     if (flags & BDRV_REQ_FUA) {
         assert(client->info.flags & NBD_FLAG_SEND_FUA);
@@ -249,27 +261,18 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
 
     assert(bytes <= NBD_MAX_BUFFER_SIZE);
 
-    ret = nbd_co_send_request(bs, &request, qiov);
-    if (ret < 0) {
-        reply.error = -ret;
-    } else {
-        nbd_co_receive_reply(client, &request, &reply, NULL);
-    }
-    nbd_coroutine_end(bs, &request);
-    return -reply.error;
+    return nbd_co_request(bs, &request, qiov);
 }
 
 int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
                                 int bytes, BdrvRequestFlags flags)
 {
-    int ret;
     NBDClientSession *client = nbd_get_client_session(bs);
     NBDRequest request = {
         .type = NBD_CMD_WRITE_ZEROES,
         .from = offset,
         .len = bytes,
     };
-    NBDReply reply;
 
     if (!(client->info.flags & NBD_FLAG_SEND_WRITE_ZEROES)) {
         return -ENOTSUP;
@@ -283,22 +286,13 @@ int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
         request.flags |= NBD_CMD_FLAG_NO_HOLE;
     }
 
-    ret = nbd_co_send_request(bs, &request, NULL);
-    if (ret < 0) {
-        reply.error = -ret;
-    } else {
-        nbd_co_receive_reply(client, &request, &reply, NULL);
-    }
-    nbd_coroutine_end(bs, &request);
-    return -reply.error;
+    return nbd_co_request(bs, &request, NULL);
 }
 
 int nbd_client_co_flush(BlockDriverState *bs)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
     NBDRequest request = { .type = NBD_CMD_FLUSH };
-    NBDReply reply;
-    int ret;
 
     if (!(client->info.flags & NBD_FLAG_SEND_FLUSH)) {
         return 0;
@@ -307,14 +301,7 @@ int nbd_client_co_flush(BlockDriverState *bs)
     request.from = 0;
     request.len = 0;
 
-    ret = nbd_co_send_request(bs, &request, NULL);
-    if (ret < 0) {
-        reply.error = -ret;
-    } else {
-        nbd_co_receive_reply(client, &request, &reply, NULL);
-    }
-    nbd_coroutine_end(bs, &request);
-    return -reply.error;
+    return nbd_co_request(bs, &request, NULL);
 }
 
 int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int bytes)
@@ -325,22 +312,12 @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int bytes)
         .from = offset,
         .len = bytes,
     };
-    NBDReply reply;
-    int ret;
 
     if (!(client->info.flags & NBD_FLAG_SEND_TRIM)) {
         return 0;
     }
 
-    ret = nbd_co_send_request(bs, &request, NULL);
-    if (ret < 0) {
-        reply.error = -ret;
-    } else {
-        nbd_co_receive_reply(client, &request, &reply, NULL);
-    }
-    nbd_coroutine_end(bs, &request);
-    return -reply.error;
-
+    return nbd_co_request(bs, &request, NULL);
 }
 
 void nbd_client_detach_aio_context(BlockDriverState *bs)
-- 
2.11.1


* [Qemu-devel] [PATCH 08/17] block/nbd-client: rename nbd_recv_coroutines_enter_all
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (6 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 07/17] block/nbd-client: refactor request send/receive Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-25 18:43   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 09/17] block/nbd-client: move nbd_co_receive_reply content into nbd_co_request Vladimir Sementsov-Ogievskiy
                   ` (10 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Rename nbd_recv_coroutines_enter_all to nbd_recv_coroutines_wake_all,
as it most probably just adds all recv coroutines to co_queue_wakeup
rather than entering them directly.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index c9ade9b517..8ad2264a40 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -34,7 +34,7 @@
 #define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
 #define INDEX_TO_HANDLE(bs, index)  ((index)  ^ ((uint64_t)(intptr_t)bs))
 
-static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
+static void nbd_recv_coroutines_wake_all(NBDClientSession *s)
 {
     int i;
 
@@ -108,7 +108,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     }
 
     s->reply.handle = 0;
-    nbd_recv_coroutines_enter_all(s);
+    nbd_recv_coroutines_wake_all(s);
     s->read_reply_co = NULL;
 }
 
-- 
2.11.1


* [Qemu-devel] [PATCH 09/17] block/nbd-client: move nbd_co_receive_reply content into nbd_co_request
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (7 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 08/17] block/nbd-client: rename nbd_recv_coroutines_enter_all Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-25 18:52   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 10/17] block/nbd-client: move nbd_coroutine_end " Vladimir Sementsov-Ogievskiy
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Move code from nbd_co_receive_reply into nbd_co_request. This simplifies
things and makes further refactoring possible. Also, a function that
starts with qemu_coroutine_yield is weird.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 33 ++++++++++-----------------------
 1 file changed, 10 insertions(+), 23 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 8ad2264a40..db1d7025fa 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -112,10 +112,6 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     s->read_reply_co = NULL;
 }
 
-static void nbd_co_receive_reply(NBDClientSession *s,
-                                 NBDRequest *request,
-                                 NBDReply *reply,
-                                 QEMUIOVector *qiov);
 static void nbd_coroutine_end(BlockDriverState *bs,
                               NBDRequest *request);
 
@@ -175,39 +171,30 @@ static int nbd_co_request(BlockDriverState *bs,
     } else {
         qiov = NULL;
     }
-    nbd_co_receive_reply(s, request, &reply, qiov);
-    rc = -reply.error;
-
-out:
-    nbd_coroutine_end(bs, request);
-    return rc;
-}
-
-static void nbd_co_receive_reply(NBDClientSession *s,
-                                 NBDRequest *request,
-                                 NBDReply *reply,
-                                 QEMUIOVector *qiov)
-{
-    int ret;
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
-    *reply = s->reply;
-    if (reply->handle != request->handle ||
+    reply = s->reply;
+    if (reply.handle != request->handle ||
         !s->ioc) {
-        reply->error = EIO;
+        reply.error = EIO;
     } else {
-        if (qiov && reply->error == 0) {
+        if (qiov && reply.error == 0) {
             ret = nbd_rwv(s->ioc, qiov->iov, qiov->niov, request->len, true,
                           NULL);
             if (ret != request->len) {
-                reply->error = EIO;
+                reply.error = EIO;
             }
         }
 
         /* Tell the read handler to read another header.  */
         s->reply.handle = 0;
     }
+    rc = -reply.error;
+
+out:
+    nbd_coroutine_end(bs, request);
+    return rc;
 }
 
 static void nbd_coroutine_end(BlockDriverState *bs,
-- 
2.11.1


* [Qemu-devel] [PATCH 10/17] block/nbd-client: move nbd_coroutine_end content into nbd_co_request
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (8 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 09/17] block/nbd-client: move nbd_co_receive_reply content into nbd_co_request Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-25 21:57   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 11/17] block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on error Vladimir Sementsov-Ogievskiy
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Move code from nbd_coroutine_end into nbd_co_request. The function
nbd_coroutine_end is not needed as a separate function, and it is
better to have in_flight-- in nbd_co_request, since in_flight++ lives
there too.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index db1d7025fa..d6965d24db 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -112,9 +112,6 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     s->read_reply_co = NULL;
 }
 
-static void nbd_coroutine_end(BlockDriverState *bs,
-                              NBDRequest *request);
-
 static int nbd_co_request(BlockDriverState *bs,
                           NBDRequest *request,
                           QEMUIOVector *qiov)
@@ -193,16 +190,6 @@ static int nbd_co_request(BlockDriverState *bs,
     rc = -reply.error;
 
 out:
-    nbd_coroutine_end(bs, request);
-    return rc;
-}
-
-static void nbd_coroutine_end(BlockDriverState *bs,
-                              NBDRequest *request)
-{
-    NBDClientSession *s = nbd_get_client_session(bs);
-    int i = HANDLE_TO_INDEX(s, request->handle);
-
     s->recv_coroutine[i] = NULL;
 
     /* Kick the read_reply_co to get the next reply.  */
@@ -214,6 +201,7 @@ static void nbd_coroutine_end(BlockDriverState *bs,
     s->in_flight--;
     qemu_co_queue_next(&s->free_sema);
     qemu_co_mutex_unlock(&s->send_mutex);
+    return rc;
 }
 
 int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 11/17] block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on error
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (9 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 10/17] block/nbd-client: move nbd_coroutine_end " Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-07 11:55   ` Eric Blake
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 12/17] block/nbd-client: refactor nbd_co_request Vladimir Sementsov-Ogievskiy
                   ` (7 subsequent siblings)
  18 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

We set s->reply.handle to 0 on one error path but not on the other.
For consistency, and to avoid triggering the assert in
nbd_read_reply_entry, set s->reply.handle to 0 in the wrong-handle case
too.
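The effect of the fix can be modeled with a toy version of the request epilogue (hypothetical names): resetting the handle under the shared `out:` label guarantees it happens on every exit path, including the wrong-handle one:

```c
#include <assert.h>
#include <stdint.h>

static uint64_t reply_handle;   /* stands in for s->reply.handle */

static int finish_request(uint64_t got, uint64_t expected)
{
    int rc;

    if (got != expected) {      /* wrong-handle error path */
        rc = -5;                /* -EIO */
        goto out;
    }
    rc = 0;

out:
    /* Tell the (modeled) read handler to read another header: the reset
     * now runs on *every* path, so a reader-side assert(handle == 0)
     * can never trip. */
    reply_handle = 0;
    return rc;
}
```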

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index d6965d24db..b84cab4079 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -183,13 +183,13 @@ static int nbd_co_request(BlockDriverState *bs,
                 reply.error = EIO;
             }
         }
-
-        /* Tell the read handler to read another header.  */
-        s->reply.handle = 0;
     }
     rc = -reply.error;
 
 out:
+    /* Tell the read handler to read another header.  */
+    s->reply.handle = 0;
+
     s->recv_coroutine[i] = NULL;
 
     /* Kick the read_reply_co to get the next reply.  */
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 12/17] block/nbd-client: refactor nbd_co_request
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (10 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 11/17] block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on error Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 13/17] block/nbd-client: refactor NBDClientSession.recv_coroutine Vladimir Sementsov-Ogievskiy
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Reduce nesting and get rid of an extra variable.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index b84cab4079..d6145c7db0 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -118,7 +118,6 @@ static int nbd_co_request(BlockDriverState *bs,
 {
     NBDClientSession *s = nbd_get_client_session(bs);
     int rc, ret, i;
-    NBDReply reply;
 
     qemu_co_mutex_lock(&s->send_mutex);
     while (s->in_flight == MAX_NBD_REQUESTS) {
@@ -171,20 +170,20 @@ static int nbd_co_request(BlockDriverState *bs,
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
-    reply = s->reply;
-    if (reply.handle != request->handle ||
-        !s->ioc) {
-        reply.error = EIO;
-    } else {
-        if (qiov && reply.error == 0) {
-            ret = nbd_rwv(s->ioc, qiov->iov, qiov->niov, request->len, true,
-                          NULL);
-            if (ret != request->len) {
-                reply.error = EIO;
-            }
+    if (s->reply.handle != request->handle || !s->ioc) {
+        rc = -EIO;
+        goto out;
+    }
+
+    if (qiov && s->reply.error == 0) {
+        ret = nbd_rwv(s->ioc, qiov->iov, qiov->niov, request->len, true, NULL);
+        if (ret != request->len) {
+            rc = -EIO;
+            goto out;
         }
     }
-    rc = -reply.error;
+
+    rc = -s->reply.error;
 
 out:
     /* Tell the read handler to read another header.  */
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 13/17] block/nbd-client: refactor NBDClientSession.recv_coroutine
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (11 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 12/17] block/nbd-client: refactor nbd_co_request Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 14/17] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Move from recv_coroutine[i] to requests[i].co. This is needed for
further refactoring: new fields will be added to the newly created
structure.
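The idea is to replace a bare parallel array with an array of one-field structs, so later patches can grow the per-request state in one place. A minimal sketch (field names match the patch; the surrounding types are simplified, and `alloc_slot` is a hypothetical helper):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_NBD_REQUESTS 16

typedef struct Coroutine Coroutine;   /* opaque, as in QEMU */

typedef struct ClientSession {
    /* Was: Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
     * Now a struct, so 'request', 'qiov' and 'ret' can be added next to
     * 'co' in later patches without another mass rename. */
    struct {
        Coroutine *co;
    } requests[MAX_NBD_REQUESTS];
} ClientSession;

/* Find a free request slot, as nbd_co_request does. */
static int alloc_slot(ClientSession *s, Coroutine *self)
{
    for (int i = 0; i < MAX_NBD_REQUESTS; i++) {
        if (s->requests[i].co == NULL) {
            s->requests[i].co = self;
            return i;
        }
    }
    return -1;   /* caller's in_flight cap guarantees this cannot happen */
}

static int demo(void)
{
    ClientSession s = {0};
    int a = alloc_slot(&s, (Coroutine *)0x1);
    int b = alloc_slot(&s, (Coroutine *)0x2);
    return a == 0 && b == 1;
}
```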

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  4 +++-
 block/nbd-client.c | 20 ++++++++++----------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index df80771357..48e2559df6 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -27,7 +27,9 @@ typedef struct NBDClientSession {
     Coroutine *read_reply_co;
     int in_flight;
 
-    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
+    struct {
+        Coroutine *co;
+    } requests[MAX_NBD_REQUESTS];
     NBDReply reply;
 } NBDClientSession;
 
diff --git a/block/nbd-client.c b/block/nbd-client.c
index d6145c7db0..5eb126c399 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -39,8 +39,8 @@ static void nbd_recv_coroutines_wake_all(NBDClientSession *s)
     int i;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
-        if (s->recv_coroutine[i]) {
-            aio_co_wake(s->recv_coroutine[i]);
+        if (s->requests[i].co) {
+            aio_co_wake(s->requests[i].co);
         }
     }
 }
@@ -88,22 +88,22 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
          * one coroutine is called until the reply finishes.
          */
         i = HANDLE_TO_INDEX(s, s->reply.handle);
-        if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
+        if (i >= MAX_NBD_REQUESTS || !s->requests[i].co) {
             break;
         }
 
-        /* We're woken up by the recv_coroutine itself.  Note that there
+        /* We're woken up by the receiving coroutine itself.  Note that there
          * is no race between yielding and reentering read_reply_co.  This
          * is because:
          *
-         * - if recv_coroutine[i] runs on the same AioContext, it is only
+         * - if requests[i].co runs on the same AioContext, it is only
          *   entered after we yield
          *
-         * - if recv_coroutine[i] runs on a different AioContext, reentering
+         * - if requests[i].co runs on a different AioContext, reentering
          *   read_reply_co happens through a bottom half, which can only
          *   run after we yield.
          */
-        aio_co_wake(s->recv_coroutine[i]);
+        aio_co_wake(s->requests[i].co);
         qemu_coroutine_yield();
     }
 
@@ -126,8 +126,8 @@ static int nbd_co_request(BlockDriverState *bs,
     s->in_flight++;
 
     for (i = 0; i < MAX_NBD_REQUESTS; i++) {
-        if (s->recv_coroutine[i] == NULL) {
-            s->recv_coroutine[i] = qemu_coroutine_self();
+        if (s->requests[i].co == NULL) {
+            s->requests[i].co = qemu_coroutine_self();
             break;
         }
     }
@@ -189,7 +189,7 @@ out:
     /* Tell the read handler to read another header.  */
     s->reply.handle = 0;
 
-    s->recv_coroutine[i] = NULL;
+    s->requests[i].co = NULL;
 
     /* Kick the read_reply_co to get the next reply.  */
     if (s->read_reply_co) {
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 14/17] block/nbd-client: exit reply-reading coroutine on incorrect handle
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (12 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 13/17] block/nbd-client: refactor NBDClientSession.recv_coroutine Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 15/17] block/nbd-client: refactor reading reply Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Check reply.handle == request.handle in the same place where the
receiving coroutine's number is calculated from reply.handle and its
validity is checked - in nbd_read_reply_entry.

Also, finish nbd_read_reply_entry on reply.handle != request.handle in
the same way as on an out-of-range reply.handle.
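The centralized validation can be sketched as a standalone predicate (simplified types; `reply_is_valid` and the modulo index mapping are illustrative stand-ins, not the real HANDLE_TO_INDEX): an out-of-range index, a free slot, and a stale handle are now all rejected at the same spot:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_NBD_REQUESTS 16

struct slot {
    bool in_use;            /* stands in for s->requests[i].co != NULL */
    uint64_t req_handle;    /* stands in for s->requests[i].request->handle */
};

/* Modeled check from nbd_read_reply_entry: the index derived from the
 * reply handle must be in range, the slot must be occupied, and the full
 * handle must match the request parked in that slot. */
static bool reply_is_valid(const struct slot *slots, uint64_t reply_handle)
{
    uint64_t i = reply_handle % MAX_NBD_REQUESTS;  /* toy HANDLE_TO_INDEX */

    return i < MAX_NBD_REQUESTS &&
           slots[i].in_use &&
           slots[i].req_handle == reply_handle;
}

static int demo(void)
{
    struct slot slots[MAX_NBD_REQUESTS] = {0};
    slots[3].in_use = true;
    slots[3].req_handle = 3;
    return reply_is_valid(slots, 3) &&      /* matching request */
           !reply_is_valid(slots, 19) &&    /* same index, stale handle */
           !reply_is_valid(slots, 4);       /* unoccupied slot */
}
```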

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h | 1 +
 block/nbd-client.c | 9 +++++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 48e2559df6..aa36be8950 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -29,6 +29,7 @@ typedef struct NBDClientSession {
 
     struct {
         Coroutine *co;
+        NBDRequest *request;
     } requests[MAX_NBD_REQUESTS];
     NBDReply reply;
 } NBDClientSession;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 5eb126c399..0e12db4be3 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -88,7 +88,9 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
          * one coroutine is called until the reply finishes.
          */
         i = HANDLE_TO_INDEX(s, s->reply.handle);
-        if (i >= MAX_NBD_REQUESTS || !s->requests[i].co) {
+        if (i >= MAX_NBD_REQUESTS || !s->requests[i].co ||
+            s->reply.handle != s->requests[i].request->handle)
+        {
             break;
         }
 
@@ -135,6 +137,7 @@ static int nbd_co_request(BlockDriverState *bs,
     g_assert(qemu_in_coroutine());
     assert(i < MAX_NBD_REQUESTS);
     request->handle = INDEX_TO_HANDLE(s, i);
+    s->requests[i].request = request;
 
     if (!s->ioc) {
         qemu_co_mutex_unlock(&s->send_mutex);
@@ -170,11 +173,13 @@ static int nbd_co_request(BlockDriverState *bs,
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
-    if (s->reply.handle != request->handle || !s->ioc) {
+    if (!s->ioc || s->reply.handle == 0) {
         rc = -EIO;
         goto out;
     }
 
+    assert(s->reply.handle == request->handle);
+
     if (qiov && s->reply.error == 0) {
         ret = nbd_rwv(s->ioc, qiov->iov, qiov->niov, request->len, true, NULL);
         if (ret != request->len) {
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 15/17] block/nbd-client: refactor reading reply
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (13 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 14/17] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 16/17] block/nbd-client: drop reply field from NBDClientSession Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Read the whole reply in one place - in nbd_read_reply_entry.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  1 +
 block/nbd-client.c | 27 +++++++++++++--------------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index aa36be8950..0f84ccc073 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -30,6 +30,7 @@ typedef struct NBDClientSession {
     struct {
         Coroutine *co;
         NBDRequest *request;
+        QEMUIOVector *qiov;
     } requests[MAX_NBD_REQUESTS];
     NBDReply reply;
 } NBDClientSession;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 0e12db4be3..61780c5df9 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -94,6 +94,18 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             break;
         }
 
+        if (s->reply.error == 0 &&
+            s->requests[i].request->type == NBD_CMD_READ)
+        {
+            assert(s->requests[i].qiov != NULL);
+            ret = nbd_rwv(s->ioc, s->requests[i].qiov->iov,
+                          s->requests[i].qiov->niov,
+                          s->requests[i].request->len, true, NULL);
+            if (ret != s->requests[i].request->len) {
+                break;
+            }
+        }
+
         /* We're woken up by the receiving coroutine itself.  Note that there
          * is no race between yielding and reentering read_reply_co.  This
          * is because:
@@ -138,6 +150,7 @@ static int nbd_co_request(BlockDriverState *bs,
     assert(i < MAX_NBD_REQUESTS);
     request->handle = INDEX_TO_HANDLE(s, i);
     s->requests[i].request = request;
+    s->requests[i].qiov = qiov;
 
     if (!s->ioc) {
         qemu_co_mutex_unlock(&s->send_mutex);
@@ -165,12 +178,6 @@ static int nbd_co_request(BlockDriverState *bs,
         goto out;
     }
 
-    if (request->type == NBD_CMD_READ) {
-        assert(qiov != NULL);
-    } else {
-        qiov = NULL;
-    }
-
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
     if (!s->ioc || s->reply.handle == 0) {
@@ -180,14 +187,6 @@ static int nbd_co_request(BlockDriverState *bs,
 
     assert(s->reply.handle == request->handle);
 
-    if (qiov && s->reply.error == 0) {
-        ret = nbd_rwv(s->ioc, qiov->iov, qiov->niov, request->len, true, NULL);
-        if (ret != request->len) {
-            rc = -EIO;
-            goto out;
-        }
-    }
-
     rc = -s->reply.error;
 
 out:
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 16/17] block/nbd-client: drop reply field from NBDClientSession
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (14 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 15/17] block/nbd-client: refactor reading reply Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 17/17] block/nbd-client: always return EIO on and after the first io channel error Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Drop 'reply' from NBDClientSession. Its usage is not very transparent:

1. it is used to deliver an error to the receiving coroutine, which must
   save or handle it somehow and then zero it out in NBDClientSession.
2. it is used to inform the receiving coroutines that
   nbd_read_reply_entry has exited for some reason (error or disconnect).

To simplify this scheme:
- drop NBDClientSession.reply
- introduce NBDClientSession.requests[...].ret for (1)
- introduce NBDClientSession.eio_to_all for (2)
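A minimal model of the new scheme (hypothetical helper names): the reader parks each reply's status in the request slot, and a single sticky flag reports reader death to every waiter:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_NBD_REQUESTS 16

static struct {
    int ret;        /* (1) per-request status, set by the reader */
} requests[MAX_NBD_REQUESTS];

static bool eio_to_all;   /* (2) set once when the reader exits */

/* Reader side: deliver one reply's status into its slot. */
static void deliver(int i, int reply_error)
{
    requests[i].ret = -reply_error;
}

/* Requester side: what nbd_co_request returns after being woken up. */
static int collect(int i)
{
    return eio_to_all ? -5 /* -EIO */ : requests[i].ret;
}

static int demo(void)
{
    deliver(0, 0);
    deliver(1, 22);                 /* EINVAL from the server */
    int a = collect(0);             /* success */
    int b = collect(1);             /* per-request error survives */
    eio_to_all = true;              /* reader died */
    int c = collect(0);             /* -EIO regardless of parked status */
    return a == 0 && b == -22 && c == -5;
}
```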

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  3 ++-
 block/nbd-client.c | 25 ++++++++++---------------
 2 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 0f84ccc073..0b0aa67342 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -31,8 +31,9 @@ typedef struct NBDClientSession {
         Coroutine *co;
         NBDRequest *request;
         QEMUIOVector *qiov;
+        int ret;
     } requests[MAX_NBD_REQUESTS];
-    NBDReply reply;
+    bool eio_to_all;
 } NBDClientSession;
 
 NBDClientSession *nbd_get_client_session(BlockDriverState *bs);
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 61780c5df9..7c151b3dd3 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -72,10 +72,10 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     uint64_t i;
     int ret;
     Error *local_err = NULL;
+    NBDReply reply;
 
     for (;;) {
-        assert(s->reply.handle == 0);
-        ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
+        ret = nbd_receive_reply(s->ioc, &reply, &local_err);
         if (ret < 0) {
             error_report_err(local_err);
         }
@@ -87,16 +87,14 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
          * handler acts as a synchronization point and ensures that only
          * one coroutine is called until the reply finishes.
          */
-        i = HANDLE_TO_INDEX(s, s->reply.handle);
+        i = HANDLE_TO_INDEX(s, reply.handle);
         if (i >= MAX_NBD_REQUESTS || !s->requests[i].co ||
-            s->reply.handle != s->requests[i].request->handle)
+            reply.handle != s->requests[i].request->handle)
         {
             break;
         }
 
-        if (s->reply.error == 0 &&
-            s->requests[i].request->type == NBD_CMD_READ)
-        {
+        if (reply.error == 0 && s->requests[i].request->type == NBD_CMD_READ) {
             assert(s->requests[i].qiov != NULL);
             ret = nbd_rwv(s->ioc, s->requests[i].qiov->iov,
                           s->requests[i].qiov->niov,
@@ -106,6 +104,8 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             }
         }
 
+        s->requests[i].ret = -reply.error;
+
         /* We're woken up by the receiving coroutine itself.  Note that there
          * is no race between yielding and reentering read_reply_co.  This
          * is because:
@@ -121,7 +121,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
         qemu_coroutine_yield();
     }
 
-    s->reply.handle = 0;
+    s->eio_to_all = true;
     nbd_recv_coroutines_wake_all(s);
     s->read_reply_co = NULL;
 }
@@ -180,19 +180,14 @@ static int nbd_co_request(BlockDriverState *bs,
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
-    if (!s->ioc || s->reply.handle == 0) {
+    if (!s->ioc || s->eio_to_all) {
         rc = -EIO;
         goto out;
     }
 
-    assert(s->reply.handle == request->handle);
-
-    rc = -s->reply.error;
+    rc = s->requests[i].ret;
 
 out:
-    /* Tell the read handler to read another header.  */
-    s->reply.handle = 0;
-
     s->requests[i].co = NULL;
 
     /* Kick the read_reply_co to get the next reply.  */
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Qemu-devel] [PATCH 17/17] block/nbd-client: always return EIO on and after the first io channel error
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (15 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 16/17] block/nbd-client: drop reply field from NBDClientSession Vladimir Sementsov-Ogievskiy
@ 2017-08-04 15:14 ` Vladimir Sementsov-Ogievskiy
  2017-08-16 21:21 ` [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Eric Blake
  2017-08-25 22:10 ` Eric Blake
  18 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-04 15:14 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Do not communicate after the first error, to avoid communicating
through a broken channel. The only exception is that we still try to
send NBD_CMD_DISC in nbd_client_close.
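The policy can be sketched as a tiny state machine (hypothetical names, not the QEMU code): once the flag is set, every subsequent operation short-circuits to -EIO without touching the channel again:

```c
#include <assert.h>
#include <stdbool.h>

static bool eio_to_all;

/* Stand-in for one channel I/O step; a negative result marks the channel
 * broken for everyone, now and for all later requests. */
static int channel_io(int result)
{
    if (eio_to_all) {
        return -5;              /* -EIO: never touch a broken channel */
    }
    if (result < 0) {
        eio_to_all = true;      /* first failure poisons the session */
        return -5;              /* ...and is itself reported as -EIO */
    }
    return result;
}

static int demo(void)
{
    int a = channel_io(4096);   /* healthy: passes through */
    int b = channel_io(-32);    /* first error (e.g. -EPIPE) -> -EIO */
    int c = channel_io(4096);   /* would succeed, but channel is poisoned */
    return a == 4096 && b == -5 && c == -5;
}
```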

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 7c151b3dd3..6109abf8a7 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -49,6 +49,8 @@ static void nbd_teardown_connection(BlockDriverState *bs)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
 
+    client->eio_to_all = true;
+
     if (!client->ioc) { /* Already closed */
         return;
     }
@@ -75,11 +77,15 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     NBDReply reply;
 
     for (;;) {
+        if (s->eio_to_all) {
+            break;
+        }
+
         ret = nbd_receive_reply(s->ioc, &reply, &local_err);
         if (ret < 0) {
             error_report_err(local_err);
         }
-        if (ret <= 0) {
+        if (ret <= 0 || s->eio_to_all) {
             break;
         }
 
@@ -99,7 +105,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             ret = nbd_rwv(s->ioc, s->requests[i].qiov->iov,
                           s->requests[i].qiov->niov,
                           s->requests[i].request->len, true, NULL);
-            if (ret != s->requests[i].request->len) {
+            if (ret != s->requests[i].request->len || s->eio_to_all) {
                 break;
             }
         }
@@ -133,6 +139,10 @@ static int nbd_co_request(BlockDriverState *bs,
     NBDClientSession *s = nbd_get_client_session(bs);
     int rc, ret, i;
 
+    if (s->eio_to_all) {
+        return -EIO;
+    }
+
     qemu_co_mutex_lock(&s->send_mutex);
     while (s->in_flight == MAX_NBD_REQUESTS) {
         qemu_co_queue_wait(&s->free_sema, &s->send_mutex);
@@ -152,16 +162,16 @@ static int nbd_co_request(BlockDriverState *bs,
     s->requests[i].request = request;
     s->requests[i].qiov = qiov;
 
-    if (!s->ioc) {
+    if (s->eio_to_all) {
         qemu_co_mutex_unlock(&s->send_mutex);
-        return -EPIPE;
+        return -EIO;
     }
 
     if (request->type == NBD_CMD_WRITE) {
         assert(qiov != NULL);
         qio_channel_set_cork(s->ioc, true);
         rc = nbd_send_request(s->ioc, request);
-        if (rc >= 0) {
+        if (rc == 0 && !s->eio_to_all) {
             ret = nbd_rwv(s->ioc, qiov->iov, qiov->niov, request->len, false,
                           NULL);
             if (ret != request->len) {
@@ -174,13 +184,16 @@ static int nbd_co_request(BlockDriverState *bs,
     }
     qemu_co_mutex_unlock(&s->send_mutex);
 
+    if (s->eio_to_all) {
+        rc = -EIO;
+    }
     if (rc < 0) {
         goto out;
     }
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     qemu_coroutine_yield();
-    if (!s->ioc || s->eio_to_all) {
+    if (s->eio_to_all) {
         rc = -EIO;
         goto out;
     }
@@ -335,6 +348,7 @@ int nbd_client_init(BlockDriverState *bs,
     logout("session init %s\n", export);
     qio_channel_set_blocking(QIO_CHANNEL(sioc), true, NULL);
 
+    client->eio_to_all = false;
     client->info.request_sizes = true;
     ret = nbd_receive_negotiate(QIO_CHANNEL(sioc), export,
                                 tlscreds, hostname,
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t Vladimir Sementsov-Ogievskiy
@ 2017-08-04 16:11   ` Daniel P. Berrange
  2017-08-07  6:57     ` Vladimir Sementsov-Ogievskiy
  2017-08-25 21:25   ` Eric Blake
  1 sibling, 1 reply; 50+ messages in thread
From: Daniel P. Berrange @ 2017-08-04 16:11 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, kwolf, mreitz, den, pbonzini

On Fri, Aug 04, 2017 at 06:14:28PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Use int variable for nbd_co_send_request return value (as
> nbd_co_send_request returns int).

Hmm, nbd_co_send_request() propagates return value of nbd_send_request,
which returns ssize_t.  IOW, I think we need to fix nbd_co_send_request
to also return ssize_t, rather than changing callers to use int.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t
  2017-08-04 16:11   ` Daniel P. Berrange
@ 2017-08-07  6:57     ` Vladimir Sementsov-Ogievskiy
  2017-08-07  8:24       ` Daniel P. Berrange
  0 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-07  6:57 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: qemu-block, qemu-devel, kwolf, mreitz, den, pbonzini

04.08.2017 19:11, Daniel P. Berrange wrote:
> On Fri, Aug 04, 2017 at 06:14:28PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Use int variable for nbd_co_send_request return value (as
>> nbd_co_send_request returns int).
> Hmm, nbd_co_send_request() propagates return value of nbd_send_request,
> which returns ssize_t.

See patch 04: nbd_send_request is fixed there to return int. It
propagates the return value of nbd_write, which is int.

> to also return ssize_t, rather than changing callers to use int.
>
>
> Regards,
> Daniel


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int Vladimir Sementsov-Ogievskiy
@ 2017-08-07  8:23   ` Daniel P. Berrange
  2017-08-07  8:57     ` Vladimir Sementsov-Ogievskiy
  2017-08-25 21:20   ` Eric Blake
  1 sibling, 1 reply; 50+ messages in thread
From: Daniel P. Berrange @ 2017-08-07  8:23 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, kwolf, mreitz, den, pbonzini

On Fri, Aug 04, 2017 at 06:14:27PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Fix nbd_send_request to return int, as it returns a return value
> of nbd_write (which is int), and the only user of nbd_send_request's
> return value (nbd_co_send_request) consider it as int too.

The real problem here is the return value of nbd_write() - it should be
a ssize_t, not an int, since it is propagating the return value of
nbd_rwv() which is ssize_t. Basically all functions that do I/O and
return the number of bytes read/written should be ssize_t - any use of
int is a bug.
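The hazard being described can be demonstrated directly (a toy, not QEMU code; behavior assumes a typical 64-bit LP64 platform): storing a byte count larger than INT_MAX in an int silently corrupts it, which is exactly why byte-count returns want ssize_t:

```c
#include <assert.h>
#include <sys/types.h>   /* ssize_t */

/* A read-like function correctly typed: it can report any byte count up
 * to SSIZE_MAX. */
static ssize_t big_read(void)
{
    return 3000000000LL;   /* ~2.8 GiB, fits in a 64-bit ssize_t */
}

static int demo(void)
{
    ssize_t good = big_read();
    int bad = (int)big_read();   /* implementation-defined truncation;
                                  * wraps negative on typical LP64 */
    /* The correctly typed value survives; the int copy no longer equals
     * it, so length checks like 'ret != request->len' misfire. */
    return good == 3000000000LL && (ssize_t)bad != good;
}
```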

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t
  2017-08-07  6:57     ` Vladimir Sementsov-Ogievskiy
@ 2017-08-07  8:24       ` Daniel P. Berrange
  0 siblings, 0 replies; 50+ messages in thread
From: Daniel P. Berrange @ 2017-08-07  8:24 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, kwolf, mreitz, den, pbonzini

On Mon, Aug 07, 2017 at 09:57:30AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 04.08.2017 19:11, Daniel P. Berrange wrote:
> > On Fri, Aug 04, 2017 at 06:14:28PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > Use int variable for nbd_co_send_request return value (as
> > > nbd_co_send_request returns int).
> > Hmm, nbd_co_send_request() propagates return value of nbd_send_request,
> > which returns ssize_t.
> 
> See patch 04, nbd_send_request fixed in it to return int. It propagates
> return
> value of nbd_write which is int.

That's wrong too - nbd_write() should have been fixed to have ssize_t
as its return type, since its returning the value of a ssize_t variable.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int
  2017-08-07  8:23   ` Daniel P. Berrange
@ 2017-08-07  8:57     ` Vladimir Sementsov-Ogievskiy
  2017-08-07 11:49       ` Eric Blake
  2017-08-07 12:03       ` Daniel P. Berrange
  0 siblings, 2 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-07  8:57 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: qemu-block, qemu-devel, kwolf, mreitz, den, pbonzini

07.08.2017 11:23, Daniel P. Berrange wrote:
> On Fri, Aug 04, 2017 at 06:14:27PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Fix nbd_send_request to return int, as it returns a return value
>> of nbd_write (which is int), and the only user of nbd_send_request's
>> return value (nbd_co_send_request) consider it as int too.
> The real problem here is the return value of nbd_write() - it should be
> a ssize_t, not an int, since it is propagating the return value of
> nbd_rwv() which is ssize_t.

It was changed in f5d406fe86bb (sent in May).
The discussion actually started half a year ago:
https://lists.gnu.org/archive/html/qemu-devel/2017-02/msg00825.html


> Basically all functions that do I/O and
> return the number of bytes read/written should be ssize_t - any use of
> int is a bug.
>
> Regards,
> Daniel


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 01/17] nbd/client: fix nbd_opt_go
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 01/17] nbd/client: fix nbd_opt_go Vladimir Sementsov-Ogievskiy
@ 2017-08-07 11:31   ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-07 11:31 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 671 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Do not send NBD_OPT_ABORT to the broken server. After sending
> NBD_REP_ACK on NBD_OPT_GO server is most probably in transmission
> phase, when option sending is finished.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  nbd/client.c | 2 --
>  1 file changed, 2 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

But buggy servers aren't a severe enough problem to warrant this being
in 2.10.  I'll save this for 2.11.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof Vladimir Sementsov-Ogievskiy
@ 2017-08-07 11:42   ` Eric Blake
  2017-08-07 12:05     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-07 11:42 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 4529 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Refactor nbd_read_eof to return 1 on success, 0 on EOF when no data
> was read, and < 0 in other cases, because the returned size of the
> read data is not actually used.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  nbd/nbd-internal.h         | 34 +++++++++++++++++++++++++---------
>  nbd/client.c               |  5 -----
>  tests/qemu-iotests/083.out |  4 ++--
>  3 files changed, 27 insertions(+), 16 deletions(-)
> 
> diff --git a/nbd/nbd-internal.h b/nbd/nbd-internal.h
> index 396ddb4d3e..3fb0b6098a 100644
> --- a/nbd/nbd-internal.h
> +++ b/nbd/nbd-internal.h
> @@ -77,21 +77,37 @@
>  #define NBD_ESHUTDOWN  108
>  
>  /* nbd_read_eof
> - * Tries to read @size bytes from @ioc. Returns number of bytes actually read.
> - * May return a value >= 0 and < size only on EOF, i.e. when iteratively called
> - * qio_channel_readv() returns 0. So, there is no need to call nbd_read_eof
> - * iteratively.
> + * Tries to read @size bytes from @ioc.
> + * Returns 1 on success
> + *         0 on eof, when no data was read (errp is not set)
> + *         -EINVAL on eof, when some data < @size was read until eof
> + *         < 0 on read fail

In general, mixing negative errno values and a generic < 0 in the same
function is most likely ambiguous.

>   */
> -static inline ssize_t nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
> -                                   Error **errp)
> +static inline int nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
> +                               Error **errp)
>  {
>      struct iovec iov = { .iov_base = buffer, .iov_len = size };
> +    ssize_t ret;
> +
>      /* Sockets are kept in blocking mode in the negotiation phase.  After
>       * that, a non-readable socket simply means that another thread stole
>       * our request/reply.  Synchronization is done with recv_coroutine, so
>       * that this is coroutine-safe.
>       */
> -    return nbd_rwv(ioc, &iov, 1, size, true, errp);
> +
> +    assert(size > 0);

Effectively the same as assert(size != 0).

> +
> +    ret = nbd_rwv(ioc, &iov, 1, size, true, errp);
> +    if (ret <= 0) {
> +        return ret;
> +    }

So this is a negative errno (or 0 on EOF),

> +
> +    if (ret != size) {
> +        error_setg(errp, "End of file");
> +        return -EINVAL;

and so is this, which makes the function documentation not quite
accurate; you aren't actually mixing in a generic < 0.

> +    }
> +
> +    return 1;
>  }
>  
>  /* nbd_read
> @@ -100,9 +116,9 @@ static inline ssize_t nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
>  static inline int nbd_read(QIOChannel *ioc, void *buffer, size_t size,
>                             Error **errp)
>  {
> -    ssize_t ret = nbd_read_eof(ioc, buffer, size, errp);
> +    int ret = nbd_read_eof(ioc, buffer, size, errp);
>  
> -    if (ret >= 0 && ret != size) {
> +    if (ret == 0) {
>          ret = -EINVAL;
>          error_setg(errp, "End of file");

Why do we have to set errp here instead of in nbd_read_eof()? Is there
ever any case where hitting early EOF is not something that should be
treated as an error?

>      }
> diff --git a/nbd/client.c b/nbd/client.c
> index f1c16b588f..4556056daa 100644
> --- a/nbd/client.c
> +++ b/nbd/client.c
> @@ -925,11 +925,6 @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
>          return ret;
>      }
>  
> -    if (ret != sizeof(buf)) {
> -        error_setg(errp, "read failed");
> -        return -EINVAL;
> -    }
> -
>      /* Reply
>         [ 0 ..  3]    magic   (NBD_REPLY_MAGIC)
>         [ 4 ..  7]    error   (0 == no error)
> diff --git a/tests/qemu-iotests/083.out b/tests/qemu-iotests/083.out
> index a24c6bfece..d3bea1b2f5 100644
> --- a/tests/qemu-iotests/083.out
> +++ b/tests/qemu-iotests/083.out
> @@ -69,12 +69,12 @@ read failed: Input/output error
>  
>  === Check disconnect 4 reply ===
>  
> -read failed
> +End of file
>  read failed: Input/output error

At least you tracked that your changes tweak the error message.  But I'm
not yet convinced whether this change simplifies anything.  Is there a
later patch that is easier to write with the new semantics which was not
possible with the pre-patch semantics?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int
  2017-08-07  8:57     ` Vladimir Sementsov-Ogievskiy
@ 2017-08-07 11:49       ` Eric Blake
  2017-08-07 12:03       ` Daniel P. Berrange
  1 sibling, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-07 11:49 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, Daniel P. Berrange
  Cc: kwolf, qemu-block, qemu-devel, mreitz, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1828 bytes --]

On 08/07/2017 03:57 AM, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2017 11:23, Daniel P. Berrange wrote:
>> On Fri, Aug 04, 2017 at 06:14:27PM +0300, Vladimir Sementsov-Ogievskiy
>> wrote:
>>> Fix nbd_send_request to return int, as it returns the return value
>>> of nbd_write (which is int), and the only user of nbd_send_request's
>>> return value (nbd_co_send_request) considers it an int too.
>> The real problem here is the return value of nbd_write() - it should be
>> a ssize_t, not an int, since it is propagating the return value of
>> nbd_rwv() which is ssize_t.
> 
> It was changed in f5d406fe86bb (sent in May)
> The discussion actually was started half a year ago:
> https://lists.gnu.org/archive/html/qemu-devel/2017-02/msg00825.html
> 
> 
>> Basically all functions that do I/O and
>> return the number of bytes read/written should be ssize_t - any use of
>> int is a bug.

If returning the size matters, then yes, not using ssize_t is a bug.
But if all we need is to know whether an entire expected length was read
(and treat ANY partial read the same as failure), then a mere int
becomes enough to encode the results.

Where I'm less certain is whether this change in semantics simplifies
later patches, or loses information that might have been useful.  But
intuitively, ANY encounter of EOF means that NBD needs to quit trying to
talk to the other end of the socket, whether or not the EOF occurred on
a nice message boundary; and the rest of the protocol is strongly tied
to messages, where we don't act until we have read the entire expected
message; so if a later patch is indeed simpler by not returning bytes
read, this patch might be okay.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
@ 2017-08-07 11:52   ` Eric Blake
  2017-08-07 12:56     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-07 11:52 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1102 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Set reply.handle to 0 on the error path to prevent nbd_co_receive_reply
> from taking the normal path.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 1 +
>  1 file changed, 1 insertion(+)

Can you document a case where not fixing this would be an observable bug
(even if it requires using gdb and single-stepping between client and
server to make what is otherwise a racy situation easy to see)?  I'm
trying to figure out if this is 2.10 material.

> 
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index dc19894a7c..0c88d84de6 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -107,6 +107,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>          qemu_coroutine_yield();
>      }
>  
> +    s->reply.handle = 0;
>      nbd_recv_coroutines_enter_all(s);
>      s->read_reply_co = NULL;
>  }
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [PATCH 11/17] block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on error
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 11/17] block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on error Vladimir Sementsov-Ogievskiy
@ 2017-08-07 11:55   ` Eric Blake
  2017-08-07 13:17     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-07 11:55 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 846 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> We set s->reply.handle to 0 on one error path but don't set it on another.
> For consistency, and to avoid an assert in nbd_read_reply_entry, let's
> set s->reply.handle to 0 in the case of a wrong handle too.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

Can this assertion be triggered now (presumably, with a broken server)?
I'm trying to figure out if this is 2.10 material.

[Urgh. If a broken server is able to cause an assertion failure that
causes a client to abort on an assertion failure, that probably deserves
a CVE]

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int
  2017-08-07  8:57     ` Vladimir Sementsov-Ogievskiy
  2017-08-07 11:49       ` Eric Blake
@ 2017-08-07 12:03       ` Daniel P. Berrange
  1 sibling, 0 replies; 50+ messages in thread
From: Daniel P. Berrange @ 2017-08-07 12:03 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, kwolf, mreitz, den, pbonzini

On Mon, Aug 07, 2017 at 11:57:12AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2017 11:23, Daniel P. Berrange wrote:
> > On Fri, Aug 04, 2017 at 06:14:27PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > Fix nbd_send_request to return int, as it returns the return value
> > > of nbd_write (which is int), and the only user of nbd_send_request's
> > > return value (nbd_co_send_request) considers it an int too.
> > The real problem here is the return value of nbd_write() - it should be
> > a ssize_t, not an int, since it is propagating the return value of
> > nbd_rwv() which is ssize_t.
> 
> It was changed in f5d406fe86bb (sent in May)
> The discussion actually was started half a year ago:
> https://lists.gnu.org/archive/html/qemu-devel/2017-02/msg00825.html

Actually, I misread the code.  nbd_rwv() returns 0 on success, not
the number of bytes, so int is in fact OK.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

* Re: [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof
  2017-08-07 11:42   ` Eric Blake
@ 2017-08-07 12:05     ` Vladimir Sementsov-Ogievskiy
  2017-08-25 19:22       ` Eric Blake
  0 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-07 12:05 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, den

07.08.2017 14:42, Eric Blake wrote:
> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Refactor nbd_read_eof to return 1 on success, 0 on EOF when no data
>> was read, and < 0 in other cases, because the returned size of the
>> read data is not actually used.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   nbd/nbd-internal.h         | 34 +++++++++++++++++++++++++---------
>>   nbd/client.c               |  5 -----
>>   tests/qemu-iotests/083.out |  4 ++--
>>   3 files changed, 27 insertions(+), 16 deletions(-)
>>
>> diff --git a/nbd/nbd-internal.h b/nbd/nbd-internal.h
>> index 396ddb4d3e..3fb0b6098a 100644
>> --- a/nbd/nbd-internal.h
>> +++ b/nbd/nbd-internal.h
>> @@ -77,21 +77,37 @@
>>   #define NBD_ESHUTDOWN  108
>>   
>>   /* nbd_read_eof
>> - * Tries to read @size bytes from @ioc. Returns number of bytes actually read.
>> - * May return a value >= 0 and < size only on EOF, i.e. when iteratively called
>> - * qio_channel_readv() returns 0. So, there is no need to call nbd_read_eof
>> - * iteratively.
>> + * Tries to read @size bytes from @ioc.
>> + * Returns 1 on success
>> + *         0 on eof, when no data was read (errp is not set)
>> + *         -EINVAL on eof, when some data < @size was read until eof
>> + *         < 0 on read fail
> In general, mixing negative errno value and generic < 0 in the same
> function is most likely ambiguous.

Hmm, but this is exactly what we do so often:

if (...) return -EINVAL;

return some_other_func();

The last two documentation lines may be rewritten like this:
+ * < 0 on fail

>
>>    */
>> -static inline ssize_t nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
>> -                                   Error **errp)
>> +static inline int nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
>> +                               Error **errp)
>>   {
>>       struct iovec iov = { .iov_base = buffer, .iov_len = size };
>> +    ssize_t ret;
>> +
>>       /* Sockets are kept in blocking mode in the negotiation phase.  After
>>        * that, a non-readable socket simply means that another thread stole
>>        * our request/reply.  Synchronization is done with recv_coroutine, so
>>        * that this is coroutine-safe.
>>        */
>> -    return nbd_rwv(ioc, &iov, 1, size, true, errp);
>> +
>> +    assert(size > 0);
> Effectively the same as assert(size != 0).
>
>> +
>> +    ret = nbd_rwv(ioc, &iov, 1, size, true, errp);
>> +    if (ret <= 0) {
>> +        return ret;
>> +    }
> So this is a negative errno (or 0 on EOF),

If it is < 0, it can only be -EIO, set in nbd_rwv "by hand"; it is
unrelated to the read/write errnos.

>
>> +
>> +    if (ret != size) {
>> +        error_setg(errp, "End of file");
>> +        return -EINVAL;
> and so is this. Which makes the function documentation not quite
> accurate; you aren't mixing a generic < 0.

Hmm, my wording is weird sometimes, sorry for that :( And thank you
for your patience.

>
>> +    }
>> +
>> +    return 1;
>>   }
>>   
>>   /* nbd_read
>> @@ -100,9 +116,9 @@ static inline ssize_t nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
>>   static inline int nbd_read(QIOChannel *ioc, void *buffer, size_t size,
>>                              Error **errp)
>>   {
>> -    ssize_t ret = nbd_read_eof(ioc, buffer, size, errp);
>> +    int ret = nbd_read_eof(ioc, buffer, size, errp);
>>   
>> -    if (ret >= 0 && ret != size) {
>> +    if (ret == 0) {
>>           ret = -EINVAL;
>>           error_setg(errp, "End of file");
> Why do we have to set errp here instead of in nbd_read_eof()? Is there
> ever any case where hitting early EOF is not something that should be
> treated as an error?

Yes. It is the only usage of nbd_read_eof, in nbd_receive_reply. This
is used to understand that there are no more replies to read.

>
>>       }
>> diff --git a/nbd/client.c b/nbd/client.c
>> index f1c16b588f..4556056daa 100644
>> --- a/nbd/client.c
>> +++ b/nbd/client.c
>> @@ -925,11 +925,6 @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
>>           return ret;
>>       }
>>   
>> -    if (ret != sizeof(buf)) {
>> -        error_setg(errp, "read failed");
>> -        return -EINVAL;
>> -    }
>> -
>>       /* Reply
>>          [ 0 ..  3]    magic   (NBD_REPLY_MAGIC)
>>          [ 4 ..  7]    error   (0 == no error)
>> diff --git a/tests/qemu-iotests/083.out b/tests/qemu-iotests/083.out
>> index a24c6bfece..d3bea1b2f5 100644
>> --- a/tests/qemu-iotests/083.out
>> +++ b/tests/qemu-iotests/083.out
>> @@ -69,12 +69,12 @@ read failed: Input/output error
>>   
>>   === Check disconnect 4 reply ===
>>   
>> -read failed
>> +End of file
>>   read failed: Input/output error
> At least you tracked that your changes tweak the error message.  But I'm
> not yet convinced whether this change simplifies anything.  Is there a
> later patch that is easier to write with the new semantics which was not
> possible with the pre-patch semantics?

This patch just moves error handling one level down (do not propagate
unused information), and (together with the following patch) removes the
last remains of ssize_t and of returning the number of bytes in
nbd/client.c, for consistency. Let nbd_rwv be the only function
returning a number of bytes.

>

-- 
Best regards,
Vladimir

* Re: [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry
  2017-08-07 11:52   ` Eric Blake
@ 2017-08-07 12:56     ` Vladimir Sementsov-Ogievskiy
  2017-08-07 15:13       ` Eric Blake
  0 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-07 12:56 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, den

07.08.2017 14:52, Eric Blake wrote:
> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Set reply.handle to 0 on the error path to prevent nbd_co_receive_reply
>> from taking the normal path.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd-client.c | 1 +
>>   1 file changed, 1 insertion(+)
> Can you document a case where not fixing this would be an observable bug
> (even if it requires using gdb and single-stepping between client and
> server to make what is otherwise a racy situation easy to see)?  I'm
> trying to figure out if this is 2.10 material.

It is simple enough:

Run qemu-nbd in gdb, set a break on nbd_send_reply, and when it fires,
"next" up to "stl_be_p(buf, NBD_REPLY_MAGIC);" and after it do "call
stl_be_p(buf, 1000)".

Run qemu-io with some read in gdb, set a break on
br block/nbd-client.c:83

(it is the "break;" after the failed nbd_receive_reply call)

and on
br block/nbd-client.c:170

(it is in nbd_co_receive_reply after the yield).

On the first break we will be sure that nbd_receive_reply failed;
on the second we will be sure by
(gdb) p s->reply
$1 = {handle = 93825000680144, error = 0}
(gdb) p request->handle
$2 = 93825000680144

that we are on the normal receiving path.

>
>> diff --git a/block/nbd-client.c b/block/nbd-client.c
>> index dc19894a7c..0c88d84de6 100644
>> --- a/block/nbd-client.c
>> +++ b/block/nbd-client.c
>> @@ -107,6 +107,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>>           qemu_coroutine_yield();
>>       }
>>   
>> +    s->reply.handle = 0;
>>       nbd_recv_coroutines_enter_all(s);
>>       s->read_reply_co = NULL;
>>   }
>>

-- 
Best regards,
Vladimir

* Re: [Qemu-devel] [PATCH 11/17] block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on error
  2017-08-07 11:55   ` Eric Blake
@ 2017-08-07 13:17     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-07 13:17 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, den

07.08.2017 14:55, Eric Blake wrote:
> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>> We set s->reply.handle to 0 on one error path and don't set on another.
>> For consistancy and to avoid assert in nbd_read_reply_entry let's
>> set s->reply.handle to 0 in case of wrong handle too.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd-client.c | 6 +++---
>>   1 file changed, 3 insertions(+), 3 deletions(-)
> Can this assertion be triggered now (presumably, with a broken server)?
> I'm trying to figure out if this is 2.10 material.
>
> [Urgh. If a broken server is able to cause an assertion failure that
> causes a client to abort on an assertion failure, that probably deserves
> a CVE]
>
Hmm, looks like I was mistaken: if the handle is wrong, then
read_reply_co should already be finished, so it's impossible.


-- 
Best regards,
Vladimir

* Re: [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry
  2017-08-07 12:56     ` Vladimir Sementsov-Ogievskiy
@ 2017-08-07 15:13       ` Eric Blake
  2017-08-07 15:33         ` [Qemu-devel] [Qemu-block] " Eric Blake
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-07 15:13 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den, P J P

[-- Attachment #1: Type: text/plain, Size: 5355 bytes --]

On 08/07/2017 07:56 AM, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2017 14:52, Eric Blake wrote:
>> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> Set reply.handle to 0 on the error path to prevent nbd_co_receive_reply
>>> from taking the normal path.

Side note: in general, our server must allow a handle of 0 as valid (as
the handle is something chosen by the client); but our particular client
always uses non-zero handles (and therefore using 0 as a sentinel for an
error path is okay).

>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   block/nbd-client.c | 1 +
>>>   1 file changed, 1 insertion(+)
>> Can you document a case where not fixing this would be an observable bug
>> (even if it requires using gdb and single-stepping between client and
>> server to make what is otherwise a racy situation easy to see)?  I'm
>> trying to figure out if this is 2.10 material.
> 
> it is simple enough:
> 
> run qemu-nbd in gdb, set break on nbd_send_reply, and when it shoot s,
> next up to "stl_be_p(buf, NBD_REPLY_MAGIC);" and after it do "call
> stl_be_p(buf, 1000)"

Okay, so you have set up a malicious server intentionally sending bad
magic...

> 
> run qemu-io with some read in gdb, set break on
> br block/nbd-client.c:83
> 
> ( it is break; after failed nbd_receive_reply call)
> 
> and on
> br block/nbd-client.c:170
> 
> (it is in nbd_co_receive_reply after yield)
> 
> on first break we will be sure that  nbd_receive_reply failed,
> on second we will be sure by
> (gdb) p s->reply
> $1 = {handle = 93825000680144, error = 0}
> (gdb) p request->handle
> $2 = 93825000680144
> 
> that we are on normal receiving path.

...and the client is recognizing that the server sent garbage, but then
proceeds to handle the packet anyway.  The ideal reaction is that the
client should disconnect from the server, rather than handle the packet.
 But because it didn't do that, the client is now unable to receive any
future messages from the server.  Compare the traces of:

$  ./qemu-io -c 'r 0 1' -f raw nbd://localhost:10809/foo --trace nbd_\*

against a working server:

29103@1502118291.281180:nbd_opt_go_success Export is good to go
29103@1502118291.281397:nbd_send_request Sending request to server: {
.from = 0, .len = 1, .handle = 93860726319200, .flags = 0x0, .type = 0
(read) }
29103@1502118291.281705:nbd_receive_reply Got reply: { magic =
0x67446698, .error =  0, handle = 93860726319200 }
read 1/1 bytes at offset 0
1 bytes, 1 ops; 0.0003 sec (2.822 KiB/sec and 2890.1734 ops/sec)
29103@1502118291.281773:nbd_send_request Sending request to server: {
.from = 0, .len = 0, .handle = 93860726319200, .flags = 0x0, .type = 3
(flush) }
29103@1502118291.281902:nbd_receive_reply Got reply: { magic =
0x67446698, .error =  0, handle = 93860726319200 }
29103@1502118291.281939:nbd_send_request Sending request to server: {
.from = 0, .len = 0, .handle = 93860726319200, .flags = 0x0, .type = 3
(flush) }
29103@1502118291.282064:nbd_receive_reply Got reply: { magic =
0x67446698, .error =  0, handle = 93860726319200 }
29103@1502118291.282078:nbd_send_request Sending request to server: {
.from = 0, .len = 0, .handle = 0, .flags = 0x0, .type = 2 (discard) }

followed by a clean disconnect; vs. the buggy server:

29148@1502118384.385133:nbd_opt_go_success Export is good to go
29148@1502118384.385321:nbd_send_request Sending request to server: {
.from = 0, .len = 1, .handle = 94152262559840, .flags = 0x0, .type = 0
(read) }
29148@1502118396.494643:nbd_receive_reply Got reply: { magic =
0x1446698, .error =  0, handle = 94152262559840 }
invalid magic (got 0x1446698)
read 1/1 bytes at offset 0
1 bytes, 1 ops; 0:00:12.10 (0.082581 bytes/sec and 0.0826 ops/sec)
29148@1502118396.494746:nbd_send_request Sending request to server: {
.from = 0, .len = 0, .handle = 94152262559840, .flags = 0x0, .type = 3
(flush) }

where the client is now hung.  Thankfully, the client is blocked in an
idle loop (not eating CPU), so I don't know if this counts as the
ability for a malicious server to cause a denial of service against qemu
as an NBD client (in general, being unable to read further data from the
server because client internal state is now botched is not that much
different from being unable to read further data from the server because
the client hung up on the invalid server), unless there is some way to
cause qemu to die from an assertion failure rather than just get stuck.

It also looks like the client acts on at most one bad packet from the
server before it gets stuck - but the question is whether a malicious
server could, in that one bad packet reply, cause qemu to misbehave in
some other way.

I'm adding Prasad, to analyze whether this needs a CVE.

We don't have a good protocol fuzzer to simulate a bad client and/or bad
server as the partner over the socket - if you can find any more such
interactions where a bad partner can cause a hang or crash, let's get
those fixed in 2.10.

Meanwhile, I'll probably include this patch in 2.10 (after first
figuring out if it works in isolation or needs other patches from your
series).

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [Qemu-block] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry
  2017-08-07 15:13       ` Eric Blake
@ 2017-08-07 15:33         ` Eric Blake
  2017-08-07 16:09           ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-07 15:33 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: kwolf, pbonzini, den, P J P, mreitz

[-- Attachment #1: Type: text/plain, Size: 3197 bytes --]

On 08/07/2017 10:13 AM, Eric Blake wrote:
> On 08/07/2017 07:56 AM, Vladimir Sementsov-Ogievskiy wrote:
>> 07.08.2017 14:52, Eric Blake wrote:
>>> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>>>> Set reply.handle to 0 on the error path to prevent nbd_co_receive_reply
>>>> from taking the normal path.
> 

> ...and the client is recognizing that the server sent garbage, but then
> proceeds to handle the packet anyway.  The ideal reaction is that the
> client should disconnect from the server, rather than handle the packet.
>  But because it didn't do that, the client is now unable to receive any
> future messages from the server.  Compare the traces of:
> 

> followed by a clean disconnect; vs. the buggy server:
> 
> 29148@1502118384.385133:nbd_opt_go_success Export is good to go
> 29148@1502118384.385321:nbd_send_request Sending request to server: {
> .from = 0, .len = 1, .handle = 94152262559840, .flags = 0x0, .type = 0
> (read) }
> 29148@1502118396.494643:nbd_receive_reply Got reply: { magic =
> 0x1446698, .error =  0, handle = 94152262559840 }
> invalid magic (got 0x1446698)
> read 1/1 bytes at offset 0
> 1 bytes, 1 ops; 0:00:12.10 (0.082581 bytes/sec and 0.0826 ops/sec)
> 29148@1502118396.494746:nbd_send_request Sending request to server: {
> .from = 0, .len = 0, .handle = 94152262559840, .flags = 0x0, .type = 3
> (flush) }
> 
> where the client is now hung.  Thankfully, the client is blocked in an
> idle loop (not eating CPU), so I don't know if this counts as the
> ability for a malicious server to cause a denial of service against qemu
> as an NBD client (in general, being unable to read further data from the
> server because client internal state is now botched is not that much
> different from being unable to read further data from the server because
> the client hung up on the invalid server), unless there is some way to
> cause qemu to die from an assertion failure rather than just get stuck.

With just patch 6/17 applied, the client still hangs, but this time with
a different trace:

30053@1502119637.604092:nbd_opt_go_success Export is good to go
30053@1502119637.604256:nbd_send_request Sending request to server: {
.from = 0, .len = 1, .handle = 94716063746144, .flags = 0x0, .type = 0
(read) }
30053@1502119649.070156:nbd_receive_reply Got reply: { magic =
0x1446698, .error =  0, handle = 94716063746144 }
invalid magic (got 0x1446698)
read failed: Input/output error
30053@1502119649.070243:nbd_send_request Sending request to server: {
.from = 0, .len = 0, .handle = 94716063746144, .flags = 0x0, .type = 3
(flush) }

The client still tried to send a flush request to the server, when it
should REALLY quit talking to the server at all and just disconnect,
because the moment the client recognizes that the server has sent
garbage is the moment that the client can no longer assume that the
server will react correctly to any further commands.

So I don't think your patch is quite correct, even if you have
identified a shortfall in our client code.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [Qemu-block] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry
  2017-08-07 15:33         ` [Qemu-devel] [Qemu-block] " Eric Blake
@ 2017-08-07 16:09           ` Vladimir Sementsov-Ogievskiy
  2017-08-07 16:18             ` Eric Blake
  0 siblings, 1 reply; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-07 16:09 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: kwolf, pbonzini, den, P J P, mreitz

07.08.2017 18:33, Eric Blake wrote:
> On 08/07/2017 10:13 AM, Eric Blake wrote:
>> On 08/07/2017 07:56 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> 07.08.2017 14:52, Eric Blake wrote:
>>>> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>>>>> Set reply.handle to 0 on the error path to prevent nbd_co_receive_reply
>>>>> from taking the normal path.
>> ...and the client is recognizing that the server sent garbage, but then
>> proceeds to handle the packet anyway.  The ideal reaction is that the
>> client should disconnect from the server, rather than handle the packet.
>>   But because it didn't do that, the client is now unable to receive any
>> future messages from the server.  Compare the traces of:
>>
>> followed by a clean disconnect; vs. the buggy server:
>>
>> 29148@1502118384.385133:nbd_opt_go_success Export is good to go
>> 29148@1502118384.385321:nbd_send_request Sending request to server: {
>> .from = 0, .len = 1, .handle = 94152262559840, .flags = 0x0, .type = 0
>> (read) }
>> 29148@1502118396.494643:nbd_receive_reply Got reply: { magic =
>> 0x1446698, .error =  0, handle = 94152262559840 }
>> invalid magic (got 0x1446698)
>> read 1/1 bytes at offset 0
>> 1 bytes, 1 ops; 0:00:12.10 (0.082581 bytes/sec and 0.0826 ops/sec)
>> 29148@1502118396.494746:nbd_send_request Sending request to server: {
>> .from = 0, .len = 0, .handle = 94152262559840, .flags = 0x0, .type = 3
>> (flush) }
>>
>> where the client is now hung.  Thankfully, the client is blocked in an
>> idle loop (not eating CPU), so I don't know if this counts as the
>> ability for a malicious server to cause a denial of service against qemu
>> as an NBD client (in general, being unable to read further data from the
>> server because client internal state is now botched is not that much
>> different from being unable to read further data from the server because
>> the client hung up on the invalid server), unless there is some way to
>> cause qemu to die from an assertion failure rather than just get stuck.
> With just patch 6/17 applied, the client still hangs, but this time with
> a different trace:
>
> 30053@1502119637.604092:nbd_opt_go_success Export is good to go
> 30053@1502119637.604256:nbd_send_request Sending request to server: {
> .from = 0, .len = 1, .handle = 94716063746144, .flags = 0x0, .type = 0
> (read) }
> 30053@1502119649.070156:nbd_receive_reply Got reply: { magic =
> 0x1446698, .error =  0, handle = 94716063746144 }
> invalid magic (got 0x1446698)
> read failed: Input/output error
> 30053@1502119649.070243:nbd_send_request Sending request to server: {
> .from = 0, .len = 0, .handle = 94716063746144, .flags = 0x0, .type = 3
> (flush) }
>
> The client still tried to send a flush request to the server, when it
> should REALLY quit talking to the server at all and just disconnect,
> because the moment the client recognizes that the server has sent
> garbage is the moment that the client can no longer assume that the
> server will react correctly to any further commands.
>
> So I don't think your patch is quite correct, even if you have
> identified a shortfall in our client code.
>
Why do you think so? It just does what it states in the commit message.

Patch 17 should fix the case (but I doubt that it can be taken separately).


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [Qemu-block] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry
  2017-08-07 16:09           ` Vladimir Sementsov-Ogievskiy
@ 2017-08-07 16:18             ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-07 16:18 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: kwolf, pbonzini, den, P J P, mreitz

[-- Attachment #1: Type: text/plain, Size: 1850 bytes --]

On 08/07/2017 11:09 AM, Vladimir Sementsov-Ogievskiy wrote:

>>>>> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> Set reply.handle to 0 on error path to prevent normal path of
>>>>>> nbd_co_receive_reply.

>>
>> The client still tried to send a flush request to the server, when it
>> should REALLY quit talking to the server at all and just disconnect,
>> because the moment the client recognizes that the server has sent
>> garbage is the moment that the client can no longer assume that the
>> server will react correctly to any further commands.
>>
>> So I don't think your patch is quite correct, even if you have
>> identified a shortfall in our client code.
>>
> Why do you think so? It just does what it states in the commit message.

The commit message doesn't state much in the way of WHY.  It is the
subsequent discussion that says the reason WHY we need to fix
something is to make the client robustly hang up when it encounters a
broken server - but once you couch it in those terms, this patch is NOT
making the client hang up gracefully (it makes the client skip ONE
invalid reply, but then immediately lets the client send another request
and get stuck waiting for a reply to that second request).  A full fix
for the issue would make sure the client hangs up as soon as it detects
a bogus server.
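
Patch 17 in the series ("always return EIO on and after the first io
channel error") is the shape of the full fix being asked for here.  A
minimal sketch of that sticky-error pattern, using hypothetical names
and a trimmed-down session struct rather than the actual QEMU code:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define NBD_REPLY_MAGIC 0x67446698u  /* simple reply magic from the NBD protocol */

/* Hypothetical, trimmed-down client session state. */
struct session {
    bool quit;  /* set once the server is known to be broken */
};

/* Validate one reply header; poison the session on garbage. */
static int receive_reply(struct session *s, uint32_t magic)
{
    if (s->quit) {
        return -EIO;
    }
    if (magic != NBD_REPLY_MAGIC) {
        s->quit = true;   /* hang up: never trust this server again */
        return -EINVAL;
    }
    return 0;
}

/* Any request after the first error short-circuits to -EIO. */
static int send_request(struct session *s)
{
    if (s->quit) {
        return -EIO;      /* do not send a flush to a broken server */
    }
    return 0;
}
```

Once `quit` is set, the flush request from the trace above would fail
immediately with -EIO instead of being written to a server that has
already sent garbage.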

> 
> Patch 17 should fix the case (but I doubt that it can be taken separately).

It's okay if it takes more than one patch to fix an issue, if the
earlier patches document that more work is still needed.  Or maybe we
can squash the best parts of 17 and this one.  I'm still playing with
the best way to minimally address the issue for 2.10.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (16 preceding siblings ...)
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 17/17] block/nbd-client: always return EIO on and after the first io channel error Vladimir Sementsov-Ogievskiy
@ 2017-08-16 21:21 ` Eric Blake
  2017-08-17  7:37   ` Vladimir Sementsov-Ogievskiy
  2017-08-25 22:10 ` Eric Blake
  18 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-16 21:21 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 2704 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> A bit more refactoring and fixing before BLOCK_STATUS series.
> I've tried to make individual patches simple enough, so there are
> a lot of them.

Is your BLOCK_STATUS series something that is in good enough shape to
post a preliminary version of it (the version you posted back in
February is now horribly out-of-date, with all the good cleanups you
have been doing in the meantime)?  I want to get a running start at
reviewing what I can to make sure we get improved NBD functionality into
2.11.

Also, please feel free to offer your Reviewed-by on other patches
(whether NBD-related or not).  Speaking as the NBD maintainer, I welcome
any help I can get.  And from personal experience, reviews tend to be
one of the largest bottlenecks in open source software - if you are
writing patches but not offering reviews, then you are adding to the
bottleneck so reviewers tend to set your patches aside for when they
have more time; while if you are actively offering reviews, then it is
obvious that you care about the project and your patch contributions
tend to have an easier time getting in.  My personal rule of thumb is to
try and review at least 2 other patches for every one that I send,
although that is a rather ambitious goal and there's nothing wrong if
you can't commit to the same level of effort.

> 
> Vladimir Sementsov-Ogievskiy (17):
>   nbd/client: fix nbd_opt_go
>   nbd/client: refactor nbd_read_eof
>   nbd/client: refactor nbd_receive_reply
>   nbd/client: fix nbd_send_request to return int
>   block/nbd-client: get rid of ssize_t
>   block/nbd-client: fix nbd_read_reply_entry
>   block/nbd-client: refactor request send/receive
>   block/nbd-client: rename nbd_recv_coroutines_enter_all
>   block/nbd-client: move nbd_co_receive_reply content into
>     nbd_co_request
>   block/nbd-client: move nbd_coroutine_end content into nbd_co_request
>   block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on
>     error
>   block/nbd-client: refactor nbd_co_request
>   block/nbd-client: refactor NBDClientSession.recv_coroutine
>   block/nbd-client: exit reply-reading coroutine on incorrect handle
>   block/nbd-client: refactor reading reply
>   block/nbd-client: drop reply field from NBDClientSession
>   block/nbd-client: always return EIO on and after the first io channel
>     error

Of course, parts of this will need rebasing based on what finally landed
in 2.10, but I can start reviewing what I can for this round.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing
  2017-08-16 21:21 ` [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Eric Blake
@ 2017-08-17  7:37   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 50+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-08-17  7:37 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, den

17.08.2017 00:21, Eric Blake wrote:
> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>> A bit more refactoring and fixing before BLOCK_STATUS series.
>> I've tried to make individual patches simple enough, so there are
>> a lot of them.
> Is your BLOCK_STATUS series something that is in good enough shape to
> post a preliminary version of it (the version you posted back in
> February is now horribly out-of-date, with all the good cleanups you
> have been doing in the meantime)?  I want to get a running start at
> reviewing what I can to make sure we get improved NBD functionality into
> 2.11.

Every time I want to produce a new version of BLOCK_STATUS, I stumble on
something, and a new 10-20 patch refactoring series appears. :)

>
> Also, please feel free to offer your Reviewed-by on other patches
> (whether NBD-related or not).  Speaking as the NBD maintainer, I welcome
> any help I can get.  And from personal experience, reviews tend to be
> one of the largest bottlenecks in open source software - if you are
> writing patches but not offering reviews, then you are adding to the
> bottleneck so reviewers tend to set your patches aside for when they
> have more time; while if you are actively offering reviews, then it is
> obvious that you care about the project and your patch contributions
> tend to have an easier time getting in.  My personal rule of thumb is to
> try and review at least 2 other patches for every one that I send,
> although that is a rather ambitious goal and there's nothing wrong if
> you can't commit to the same level of effort.

Thanks, I get the point, I'll try to do better.


>
>> Vladimir Sementsov-Ogievskiy (17):
>>    nbd/client: fix nbd_opt_go
>>    nbd/client: refactor nbd_read_eof
>>    nbd/client: refactor nbd_receive_reply
>>    nbd/client: fix nbd_send_request to return int
>>    block/nbd-client: get rid of ssize_t
>>    block/nbd-client: fix nbd_read_reply_entry
>>    block/nbd-client: refactor request send/receive
>>    block/nbd-client: rename nbd_recv_coroutines_enter_all
>>    block/nbd-client: move nbd_co_receive_reply content into
>>      nbd_co_request
>>    block/nbd-client: move nbd_coroutine_end content into nbd_co_request
>>    block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on
>>      error
>>    block/nbd-client: refactor nbd_co_request
>>    block/nbd-client: refactor NBDClientSession.recv_coroutine
>>    block/nbd-client: exit reply-reading coroutine on incorrect handle
>>    block/nbd-client: refactor reading reply
>>    block/nbd-client: drop reply field from NBDClientSession
>>    block/nbd-client: always return EIO on and after the first io channel
>>      error
> Of course, parts of this will need rebasing based on what finally landed
> in 2.10, but I can start reviewing what I can for this round.
>

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 08/17] block/nbd-client: rename nbd_recv_coroutines_enter_all
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 08/17] block/nbd-client: rename nbd_recv_coroutines_enter_all Vladimir Sementsov-Ogievskiy
@ 2017-08-25 18:43   ` Eric Blake
  2017-08-25 21:48     ` Eric Blake
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-25 18:43 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1340 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Rename nbd_recv_coroutines_enter_all to nbd_recv_coroutines_wake_all,
> as it most probably just add all recv coroutines into co_queue_wakeup,
> not directly enter them.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

> 
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index c9ade9b517..8ad2264a40 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -34,7 +34,7 @@
>  #define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
>  #define INDEX_TO_HANDLE(bs, index)  ((index)  ^ ((uint64_t)(intptr_t)bs))
>  
> -static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
> +static void nbd_recv_coroutines_wake_all(NBDClientSession *s)
>  {
>      int i;
>  
> @@ -108,7 +108,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>      }
>  
>      s->reply.handle = 0;
> -    nbd_recv_coroutines_enter_all(s);
> +    nbd_recv_coroutines_wake_all(s);
>      s->read_reply_co = NULL;
>  }
>  
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 07/17] block/nbd-client: refactor request send/receive
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 07/17] block/nbd-client: refactor request send/receive Vladimir Sementsov-Ogievskiy
@ 2017-08-25 18:49   ` Eric Blake
  2017-08-25 19:08     ` Eric Blake
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-25 18:49 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1742 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Move nbd_co_receive_reply and nbd_coroutine_end calls into
> nbd_co_send_request and rename the latter to just nbd_co_request.
> 
> This removes code duplications in nbd_client_co_{pwrite,pread,...}
> functions. Also this is needed for further refactoring.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 89 ++++++++++++++++++++----------------------------------
>  1 file changed, 33 insertions(+), 56 deletions(-)

The diffstat shows this is a nice improvement.
> +++ b/block/nbd-client.c
> @@ -112,12 +112,20 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>      s->read_reply_co = NULL;
>  }
>  
> -static int nbd_co_send_request(BlockDriverState *bs,
> -                               NBDRequest *request,
> -                               QEMUIOVector *qiov)
> +static void nbd_co_receive_reply(NBDClientSession *s,
> +                                 NBDRequest *request,
> +                                 NBDReply *reply,
> +                                 QEMUIOVector *qiov);
> +static void nbd_coroutine_end(BlockDriverState *bs,
> +                              NBDRequest *request);

Is it possible to organize the functions in topological order so that we
don't need forward declarations of static functions?  (If there is
mutual recursion, you need the forward declaration; but other than that,
I like reading the building blocks first rather than skipping around)
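
As a generic C illustration (not QEMU code): a forward declaration of a
static function is only genuinely forced by mutual recursion; everything
else can be defined callee-first, which is the topological order being
asked for here.

```c
#include <assert.h>
#include <stdbool.h>

/* Mutual recursion: one forward declaration is unavoidable. */
static bool is_odd(unsigned n);

static bool is_even(unsigned n)
{
    return n == 0 ? true : is_odd(n - 1);
}

static bool is_odd(unsigned n)
{
    return n == 0 ? false : is_even(n - 1);
}

/* Plain helpers need no forward declaration if they are defined
 * before their callers (topological order). */
static int helper(int x)
{
    return x * 2;
}

static int caller(int x)
{
    return helper(x) + 1;
}
```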

Otherwise,
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 09/17] block/nbd-client: move nbd_co_receive_reply content into nbd_co_request
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 09/17] block/nbd-client: move nbd_co_receive_reply content into nbd_co_request Vladimir Sementsov-Ogievskiy
@ 2017-08-25 18:52   ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 18:52 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 683 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Move code from nbd_co_receive_reply into nbd_co_request. This simplify

s/simplify/simplifies/

> things, makes further refactoring possible. Also, a function starting

s/makes/and makes/

> with qemu_coroutine_yield is weird.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 33 ++++++++++-----------------------
>  1 file changed, 10 insertions(+), 23 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 07/17] block/nbd-client: refactor request send/receive
  2017-08-25 18:49   ` Eric Blake
@ 2017-08-25 19:08     ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 19:08 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: kwolf, pbonzini, den, mreitz

[-- Attachment #1: Type: text/plain, Size: 1742 bytes --]

On 08/25/2017 01:49 PM, Eric Blake wrote:
> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Move nbd_co_receive_reply and nbd_coroutine_end calls into
>> nbd_co_send_request and rename the latter to just nbd_co_request.
>>
>> This removes code duplications in nbd_client_co_{pwrite,pread,...}
>> functions. Also this is needed for further refactoring.
>>

>> -static int nbd_co_send_request(BlockDriverState *bs,
>> -                               NBDRequest *request,
>> -                               QEMUIOVector *qiov)
>> +static void nbd_co_receive_reply(NBDClientSession *s,
>> +                                 NBDRequest *request,
>> +                                 NBDReply *reply,
>> +                                 QEMUIOVector *qiov);
>> +static void nbd_coroutine_end(BlockDriverState *bs,
>> +                              NBDRequest *request);
> 
> Is it possible to organize the functions in topological order so that we
> don't need forward declarations of static functions?  (If there is
> mutual recursion, you need the forward declaration; but other than that,
> I like reading the building blocks first rather than skipping around)

Answering myself: patch 9 inlines nbd_co_receive_reply into its lone
caller, eliminating the need for a forward reference, and it is less
code churn to have a temporary forward declaration than it is to move
the function body up and then back down.  (Maybe the commit message
could hint that nbd_co_receive_reply will later be inlined)

> 
> Otherwise,
> Reviewed-by: Eric Blake <eblake@redhat.com>
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof
  2017-08-07 12:05     ` Vladimir Sementsov-Ogievskiy
@ 2017-08-25 19:22       ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 19:22 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 2674 bytes --]

On 08/07/2017 07:05 AM, Vladimir Sementsov-Ogievskiy wrote:
> 07.08.2017 14:42, Eric Blake wrote:
>> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> Refactor nbd_read_eof to return 1 on success, 0 on eof, when no
>>> data was read and <0 for other cases, because returned size of
>>> read data is not actually used.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---

>>> + * Returns 1 on success
>>> + *         0 on eof, when no data was read (errp is not set)
>>> + *         -EINVAL on eof, when some data < @size was read until eof
>>> + *         < 0 on read fail
>> In general, mixing negative errno value and generic < 0 in the same
>> function is most likely ambiguous.
> 
> Hmm, but this is entirely what we do so often:
> 
> if (,,) return -EINVAL;
> 
> return some_other_func().
> 
> last two lines may be rewritten like this:
> + * < 0 on fail

Or even better as 'negative errno on failure'.

Here's what I'm proposing to squash in, at which point you have:

Reviewed-by: Eric Blake <eblake@redhat.com>

diff --git i/nbd/nbd-internal.h w/nbd/nbd-internal.h
index 3fb0b6098a..03549e3f39 100644
--- i/nbd/nbd-internal.h
+++ w/nbd/nbd-internal.h
@@ -80,8 +80,7 @@
  * Tries to read @size bytes from @ioc.
  * Returns 1 on success
  *         0 on eof, when no data was read (errp is not set)
- *         -EINVAL on eof, when some data < @size was read until eof
- *         < 0 on read fail
+ *         negative errno on failure (errp is set)
  */
 static inline int nbd_read_eof(QIOChannel *ioc, void *buffer, size_t size,
                                Error **errp)
@@ -95,7 +94,7 @@ static inline int nbd_read_eof(QIOChannel *ioc, void
*buffer, size_t size,
      * that this is coroutine-safe.
      */

-    assert(size > 0);
+    assert(size);

     ret = nbd_rwv(ioc, &iov, 1, size, true, errp);
     if (ret <= 0) {
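
For illustration, the contract being settled here (1 / 0 / negative
errno) can be modelled against a toy in-memory channel; the hypothetical
`struct channel` and `read_eof` below are a sketch of the convention,
not QEMU's actual QIOChannel-based implementation:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Toy in-memory stand-in for the I/O channel. */
struct channel {
    const char *data;
    size_t len, pos;
};

/* Same contract as the nbd_read_eof comment under discussion:
 * Returns 1 on success
 *         0 on eof, when no data was read
 *         negative errno on failure (eof after partial data is -EINVAL)
 */
static int read_eof(struct channel *ch, void *buffer, size_t size)
{
    size_t avail = ch->len - ch->pos;

    assert(size > 0);
    if (avail == 0) {
        return 0;               /* clean EOF between replies */
    }
    if (avail < size) {
        return -EINVAL;         /* EOF in the middle of a reply */
    }
    memcpy(buffer, ch->data + ch->pos, size);
    ch->pos += size;
    return 1;
}
```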


>>> +
>>> +    if (ret != size) {
>>> +        error_setg(errp, "End of file");
>>> +        return -EINVAL;
>> and so is this. Which makes the function documentation not quite
>> accurate; you aren't mixing a generic < 0.
> 
> hmm.. my wordings are weird sometimes, sorry for that :(. and thank you
> for your patience.

Not a problem - I understand that English is not your native language,
so you are already a leg up on me (I'm nowhere near as competent in a
second language as you already are in English, even if review still
gives you grammar hints and other improvements).

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 03/17] nbd/client: refactor nbd_receive_reply
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 03/17] nbd/client: refactor nbd_receive_reply Vladimir Sementsov-Ogievskiy
@ 2017-08-25 21:16   ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 21:16 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1663 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Refactor nbd_receive_reply to return 1 on success, 0 on eof, when no
> data was read and <0 for other cases, because returned size of read
> data is not actually used.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h |  2 +-
>  nbd/client.c        | 12 +++++++++---
>  2 files changed, 10 insertions(+), 4 deletions(-)
> 

> +++ b/nbd/client.c
> @@ -914,11 +914,16 @@ ssize_t nbd_send_request(QIOChannel *ioc, NBDRequest *request)
>      return nbd_write(ioc, buf, sizeof(buf), NULL);
>  }
>  
> -ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
> +/* nbd_receive_reply
> + * Returns 1 on success
> + *         0 on eof, when no data was read from @ioc (errp is not set)
> + *         < 0 on fail

Similar to the previous patch, I'd like to squash in:

diff --git i/nbd/client.c w/nbd/client.c
index a1758a1931..f8c213bc96 100644
--- i/nbd/client.c
+++ w/nbd/client.c
@@ -916,8 +916,8 @@ ssize_t nbd_send_request(QIOChannel *ioc, NBDRequest
*request)

 /* nbd_receive_reply
  * Returns 1 on success
- *         0 on eof, when no data was read from @ioc (errp is not set)
- *         < 0 on fail
+ *         0 on eof, when no data was read (errp is not set)
+ *         negative errno on failure (errp is set)
  */
 int nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
 {

With that,
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int Vladimir Sementsov-Ogievskiy
  2017-08-07  8:23   ` Daniel P. Berrange
@ 2017-08-25 21:20   ` Eric Blake
  1 sibling, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 21:20 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 644 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Fix nbd_send_request to return int, as it returns a return value
> of nbd_write (which is int), and the only user of nbd_send_request's
> return value (nbd_co_send_request) consider it as int too.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h | 2 +-
>  nbd/client.c        | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t Vladimir Sementsov-Ogievskiy
  2017-08-04 16:11   ` Daniel P. Berrange
@ 2017-08-25 21:25   ` Eric Blake
  1 sibling, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 21:25 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 514 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Use int variable for nbd_co_send_request return value (as
> nbd_co_send_request returns int).
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 08/17] block/nbd-client: rename nbd_recv_coroutines_enter_all
  2017-08-25 18:43   ` Eric Blake
@ 2017-08-25 21:48     ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 21:48 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: kwolf, pbonzini, den, mreitz

[-- Attachment #1: Type: text/plain, Size: 681 bytes --]

On 08/25/2017 01:43 PM, Eric Blake wrote:
> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Rename nbd_recv_coroutines_enter_all to nbd_recv_coroutines_wake_all,
>> as it most probably just add all recv coroutines into co_queue_wakeup,

s/add/adds/

>> not directly enter them.

s/not/rather than/

>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>  block/nbd-client.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> Reviewed-by: Eric Blake <eblake@redhat.com>
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 10/17] block/nbd-client: move nbd_coroutine_end content into nbd_co_request
  2017-08-04 15:14 ` [Qemu-devel] [PATCH 10/17] block/nbd-client: move nbd_coroutine_end " Vladimir Sementsov-Ogievskiy
@ 2017-08-25 21:57   ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-25 21:57 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 647 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> Move code from nbd_coroutine_end into nbd_co_request. The function
> nbd_coroutine_end is not needed separately, also it is better to
> have in_flight-- in nbd_co_request as in_flight++ lives in nbd_co_request
> too.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 14 +-------------
>  1 file changed, 1 insertion(+), 13 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing
  2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (17 preceding siblings ...)
  2017-08-16 21:21 ` [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Eric Blake
@ 2017-08-25 22:10 ` Eric Blake
  2017-08-29 22:12   ` Eric Blake
  18 siblings, 1 reply; 50+ messages in thread
From: Eric Blake @ 2017-08-25 22:10 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den, Stefan Hajnoczi

[-- Attachment #1: Type: text/plain, Size: 2221 bytes --]

On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
> A bit more refactoring and fixing before BLOCK_STATUS series.
> I've tried to make individual patches simple enough, so there are
> a lot of them.
> 
> Vladimir Sementsov-Ogievskiy (17):
>   nbd/client: fix nbd_opt_go
>   nbd/client: refactor nbd_read_eof
>   nbd/client: refactor nbd_receive_reply
>   nbd/client: fix nbd_send_request to return int
>   block/nbd-client: get rid of ssize_t
>   block/nbd-client: fix nbd_read_reply_entry
>   block/nbd-client: refactor request send/receive
>   block/nbd-client: rename nbd_recv_coroutines_enter_all
>   block/nbd-client: move nbd_co_receive_reply content into
>     nbd_co_request
>   block/nbd-client: move nbd_coroutine_end content into nbd_co_request
>   block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on
>     error
>   block/nbd-client: refactor nbd_co_request
>   block/nbd-client: refactor NBDClientSession.recv_coroutine
>   block/nbd-client: exit reply-reading coroutine on incorrect handle
>   block/nbd-client: refactor reading reply
>   block/nbd-client: drop reply field from NBDClientSession
>   block/nbd-client: always return EIO on and after the first io channel
>     error

I've pushed 1-5 and 7-10 onto my NBD staging branch for 2.11:

  git://repo.or.cz/qemu/ericb.git nbd

with a couple of changes squashed in as mentioned in individual patches;
please double-check that it looks okay.  If so, then I will use that
branch as the starting point for all NBD commits destined for 2.11,
sending a pull request once the tree opens.

Patches 6 and 11 are somewhat subsumed by the work that went into 2.10,
and the remaining patches are starting to cause enough conflicts that
I'd prefer you complete the rebase of patches 12-17 and post a v2 on top
of my staging branch.

I'm also hoping that Stefan will rebase a v2 of his "nbd-client: enter
read_reply_co during init to avoid crash" on top of your preliminary
work, since Paolo had a good suggestion on improving the semantics in
the review of v1.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




* Re: [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing
  2017-08-25 22:10 ` Eric Blake
@ 2017-08-29 22:12   ` Eric Blake
  0 siblings, 0 replies; 50+ messages in thread
From: Eric Blake @ 2017-08-29 22:12 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: kwolf, pbonzini, den, Stefan Hajnoczi, mreitz


On 08/25/2017 05:10 PM, Eric Blake wrote:
> On 08/04/2017 10:14 AM, Vladimir Sementsov-Ogievskiy wrote:
>> A bit more refactoring and fixing before BLOCK_STATUS series.
>> I've tried to make individual patches simple enough, so there are
>> a lot of them.
>>
>> Vladimir Sementsov-Ogievskiy (17):
>>   nbd/client: fix nbd_opt_go
>>   nbd/client: refactor nbd_read_eof
>>   nbd/client: refactor nbd_receive_reply
>>   nbd/client: fix nbd_send_request to return int
>>   block/nbd-client: get rid of ssize_t
>>   block/nbd-client: fix nbd_read_reply_entry
>>   block/nbd-client: refactor request send/receive
>>   block/nbd-client: rename nbd_recv_coroutines_enter_all
>>   block/nbd-client: move nbd_co_receive_reply content into
>>     nbd_co_request
>>   block/nbd-client: move nbd_coroutine_end content into nbd_co_request
>>   block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on
>>     error
>>   block/nbd-client: refactor nbd_co_request
>>   block/nbd-client: refactor NBDClientSession.recv_coroutine
>>   block/nbd-client: exit reply-reading coroutine on incorrect handle
>>   block/nbd-client: refactor reading reply
>>   block/nbd-client: drop reply field from NBDClientSession
>>   block/nbd-client: always return EIO on and after the first io channel
>>     error
> 
> I've pushed 1-5 and 7-10 onto my NBD staging branch for 2.11:
> 
>   git://repo.or.cz/qemu/ericb.git nbd

Correction - I've decided to take Stefan's patches first (since his
patch 1/3 was a bit more elegant at doing the same thing as your patch
10); that caused rebase conflicts for your patch 7 (which I simplified
by creating nbd_co_request as a wrapper rather than trying to inline its
parts), and my change to your patch 7 obsoletes the need for 9 or 10.  I
also placed your patch 8 before your patch 7 (if you don't like what I
changed on patch 7, it is now last in my NBD staging area, so I can more
easily drop it from my tree if you'd prefer to respin it differently).

> with a couple of changes squashed in as mentioned in individual patches;
> please double-check that it looks okay.  If so, then I will use that
> branch as the starting point for all NBD commits destined for 2.11,
> sending a pull request once the tree opens.
> 
> Patches 6 and 11 are somewhat subsumed by the work that went into 2.10,
> and the remaining patches are starting to cause enough conflicts that
> I'd prefer you complete the rebase of patches 12-17 and post a v2 on top
> of my staging branch.

These statements still hold; you'll need to rebase the rest of your
series on top of my tree.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




end of thread, other threads:[~2017-08-29 22:12 UTC | newest]

Thread overview: 50+ messages
2017-08-04 15:14 [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
2017-08-04 15:14 ` [Qemu-devel] [PATCH 01/17] nbd/client: fix nbd_opt_go Vladimir Sementsov-Ogievskiy
2017-08-07 11:31   ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 02/17] nbd/client: refactor nbd_read_eof Vladimir Sementsov-Ogievskiy
2017-08-07 11:42   ` Eric Blake
2017-08-07 12:05     ` Vladimir Sementsov-Ogievskiy
2017-08-25 19:22       ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 03/17] nbd/client: refactor nbd_receive_reply Vladimir Sementsov-Ogievskiy
2017-08-25 21:16   ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 04/17] nbd/client: fix nbd_send_request to return int Vladimir Sementsov-Ogievskiy
2017-08-07  8:23   ` Daniel P. Berrange
2017-08-07  8:57     ` Vladimir Sementsov-Ogievskiy
2017-08-07 11:49       ` Eric Blake
2017-08-07 12:03       ` Daniel P. Berrange
2017-08-25 21:20   ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 05/17] block/nbd-client: get rid of ssize_t Vladimir Sementsov-Ogievskiy
2017-08-04 16:11   ` Daniel P. Berrange
2017-08-07  6:57     ` Vladimir Sementsov-Ogievskiy
2017-08-07  8:24       ` Daniel P. Berrange
2017-08-25 21:25   ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 06/17] block/nbd-client: fix nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
2017-08-07 11:52   ` Eric Blake
2017-08-07 12:56     ` Vladimir Sementsov-Ogievskiy
2017-08-07 15:13       ` Eric Blake
2017-08-07 15:33         ` [Qemu-devel] [Qemu-block] " Eric Blake
2017-08-07 16:09           ` Vladimir Sementsov-Ogievskiy
2017-08-07 16:18             ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 07/17] block/nbd-client: refactor request send/receive Vladimir Sementsov-Ogievskiy
2017-08-25 18:49   ` Eric Blake
2017-08-25 19:08     ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 08/17] block/nbd-client: rename nbd_recv_coroutines_enter_all Vladimir Sementsov-Ogievskiy
2017-08-25 18:43   ` Eric Blake
2017-08-25 21:48     ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 09/17] block/nbd-client: move nbd_co_receive_reply content into nbd_co_request Vladimir Sementsov-Ogievskiy
2017-08-25 18:52   ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 10/17] block/nbd-client: move nbd_coroutine_end " Vladimir Sementsov-Ogievskiy
2017-08-25 21:57   ` Eric Blake
2017-08-04 15:14 ` [Qemu-devel] [PATCH 11/17] block/nbd-client: fix nbd_co_request: set s->reply.handle to 0 on error Vladimir Sementsov-Ogievskiy
2017-08-07 11:55   ` Eric Blake
2017-08-07 13:17     ` Vladimir Sementsov-Ogievskiy
2017-08-04 15:14 ` [Qemu-devel] [PATCH 12/17] block/nbd-client: refactor nbd_co_request Vladimir Sementsov-Ogievskiy
2017-08-04 15:14 ` [Qemu-devel] [PATCH 13/17] block/nbd-client: refactor NBDClientSession.recv_coroutine Vladimir Sementsov-Ogievskiy
2017-08-04 15:14 ` [Qemu-devel] [PATCH 14/17] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
2017-08-04 15:14 ` [Qemu-devel] [PATCH 15/17] block/nbd-client: refactor reading reply Vladimir Sementsov-Ogievskiy
2017-08-04 15:14 ` [Qemu-devel] [PATCH 16/17] block/nbd-client: drop reply field from NBDClientSession Vladimir Sementsov-Ogievskiy
2017-08-04 15:14 ` [Qemu-devel] [PATCH 17/17] block/nbd-client: always return EIO on and after the first io channel error Vladimir Sementsov-Ogievskiy
2017-08-16 21:21 ` [Qemu-devel] [PATCH 00/17] nbd client refactoring and fixing Eric Blake
2017-08-17  7:37   ` Vladimir Sementsov-Ogievskiy
2017-08-25 22:10 ` Eric Blake
2017-08-29 22:12   ` Eric Blake
