All of lore.kernel.org
 help / color / mirror / Atom feed
* [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing
@ 2017-09-18 13:59 Vladimir Sementsov-Ogievskiy
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply Vladimir Sementsov-Ogievskiy
                   ` (6 more replies)
  0 siblings, 7 replies; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

v2: Actually - tail of v1 series, with changes, additions and rebased
  on recent work in upstream (which includes some patches from v1 too)

Vladimir Sementsov-Ogievskiy (7):
  block/nbd-client: refactor nbd_co_receive_reply
  block/nbd-client: exit reply-reading coroutine on incorrect handle
  block/nbd-client: refactor reading reply
  block/nbd-client: drop reply field from NBDClientSession
  block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set
    in parallel
  block/nbd-client: early fail nbd_read_reply_entry if s->quit is set
  block/nbd-client: do not yield from nbd_read_reply_entry

 block/nbd-client.h |   4 ++-
 block/nbd-client.c | 103 ++++++++++++++++++++++++++---------------------------
 2 files changed, 54 insertions(+), 53 deletions(-)

-- 
2.11.1

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply
  2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
@ 2017-09-18 13:59 ` Vladimir Sementsov-Ogievskiy
  2017-09-18 15:38   ` Eric Blake
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

"NBDReply *reply" parameter of nbd_co_receive_reply is used only
to pass return value for nbd_co_request (reply.error). Remove it
and use function return value instead.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index ee7f758e68..acd8e5d007 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -178,26 +178,26 @@ err:
     return rc;
 }
 
-static void nbd_co_receive_reply(NBDClientSession *s,
-                                 NBDRequest *request,
-                                 NBDReply *reply,
-                                 QEMUIOVector *qiov)
+static int nbd_co_receive_reply(NBDClientSession *s,
+                                NBDRequest *request,
+                                QEMUIOVector *qiov)
 {
+    int ret;
     int i = HANDLE_TO_INDEX(s, request->handle);
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
-    *reply = s->reply;
-    if (reply->handle != request->handle || !s->ioc || s->quit) {
-        reply->error = EIO;
+    if (s->reply.handle != request->handle || !s->ioc || s->quit) {
+        ret = -EIO;
     } else {
-        if (qiov && reply->error == 0) {
+        ret = -s->reply.error;
+        if (qiov && s->reply.error == 0) {
             assert(request->len == iov_size(qiov->iov, qiov->niov));
             if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
                                       NULL) < 0) {
-                reply->error = EIO;
+                ret = -EIO;
                 s->quit = true;
             }
         }
@@ -217,6 +217,8 @@ static void nbd_co_receive_reply(NBDClientSession *s,
     s->in_flight--;
     qemu_co_queue_next(&s->free_sema);
     qemu_co_mutex_unlock(&s->send_mutex);
+
+    return ret;
 }
 
 static int nbd_co_request(BlockDriverState *bs,
@@ -224,7 +226,6 @@ static int nbd_co_request(BlockDriverState *bs,
                           QEMUIOVector *qiov)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
-    NBDReply reply;
     int ret;
 
     assert(!qiov || request->type == NBD_CMD_WRITE ||
@@ -232,12 +233,11 @@ static int nbd_co_request(BlockDriverState *bs,
     ret = nbd_co_send_request(bs, request,
                               request->type == NBD_CMD_WRITE ? qiov : NULL);
     if (ret < 0) {
-        reply.error = -ret;
-    } else {
-        nbd_co_receive_reply(client, request, &reply,
-                             request->type == NBD_CMD_READ ? qiov : NULL);
+        return ret;
     }
-    return -reply.error;
+
+    return nbd_co_receive_reply(client, request,
+                                request->type == NBD_CMD_READ ? qiov : NULL);
 }
 
 int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle
  2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply Vladimir Sementsov-Ogievskiy
@ 2017-09-18 13:59 ` Vladimir Sementsov-Ogievskiy
  2017-09-18 15:45   ` Eric Blake
  2017-09-18 19:50   ` Eric Blake
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Check reply-handle == request-handle in the same place, where
recv coroutine number is calculated from reply->handle and it's
correctness checked - in nbd_read_reply_entry.

Also finish nbd_read_reply_entry in case of reply-handle !=
request-handle in the same way as in case of incorrect reply-handle.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h | 1 +
 block/nbd-client.c | 8 ++++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index b435754b82..b1900e6a6d 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -20,6 +20,7 @@
 typedef struct {
     Coroutine *coroutine;
     bool receiving;         /* waiting for read_reply_co? */
+    NBDRequest *request;
 } NBDClientRequest;
 
 typedef struct NBDClientSession {
diff --git a/block/nbd-client.c b/block/nbd-client.c
index acd8e5d007..5f241ecc22 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -92,7 +92,9 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
         i = HANDLE_TO_INDEX(s, s->reply.handle);
         if (i >= MAX_NBD_REQUESTS ||
             !s->requests[i].coroutine ||
-            !s->requests[i].receiving) {
+            !s->requests[i].receiving ||
+            s->reply.handle != s->requests[i].request->handle)
+        {
             break;
         }
 
@@ -142,6 +144,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
     s->requests[i].receiving = false;
 
     request->handle = INDEX_TO_HANDLE(s, i);
+    s->requests[i].request = request;
 
     if (s->quit) {
         rc = -EIO;
@@ -189,9 +192,10 @@ static int nbd_co_receive_reply(NBDClientSession *s,
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
-    if (s->reply.handle != request->handle || !s->ioc || s->quit) {
+    if (!s->ioc || s->quit) {
         ret = -EIO;
     } else {
+        assert(s->reply.handle == request->handle);
         ret = -s->reply.error;
         if (qiov && s->reply.error == 0) {
             assert(request->len == iov_size(qiov->iov, qiov->niov));
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply
  2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply Vladimir Sementsov-Ogievskiy
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
@ 2017-09-18 13:59 ` Vladimir Sementsov-Ogievskiy
  2017-09-18 15:43   ` Paolo Bonzini
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 4/7] block/nbd-client: drop reply field from NBDClientSession Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Read the whole reply in one place - in nbd_read_reply_entry.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  1 +
 block/nbd-client.c | 42 ++++++++++++++++++++++++------------------
 2 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index b1900e6a6d..3f97d31013 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -21,6 +21,7 @@ typedef struct {
     Coroutine *coroutine;
     bool receiving;         /* waiting for read_reply_co? */
     NBDRequest *request;
+    QEMUIOVector *qiov;
 } NBDClientRequest;
 
 typedef struct NBDClientSession {
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 5f241ecc22..f7b642f079 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -98,6 +98,21 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             break;
         }
 
+        if (s->reply.error == 0 &&
+            s->requests[i].request->type == NBD_CMD_READ)
+        {
+            QEMUIOVector *qiov = s->requests[i].qiov;
+            assert(qiov != NULL);
+            assert(s->requests[i].request->len ==
+                   iov_size(qiov->iov, qiov->niov));
+            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
+                                      NULL) < 0)
+            {
+                s->reply.error = EIO;
+                break;
+            }
+        }
+
         /* We're woken up again by the request itself.  Note that there
          * is no race between yielding and reentering read_reply_co.  This
          * is because:
@@ -118,6 +133,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     s->read_reply_co = NULL;
 }
 
+/* qiov should be provided for READ and WRITE */
 static int nbd_co_send_request(BlockDriverState *bs,
                                NBDRequest *request,
                                QEMUIOVector *qiov)
@@ -145,6 +161,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
 
     request->handle = INDEX_TO_HANDLE(s, i);
     s->requests[i].request = request;
+    s->requests[i].qiov = qiov;
 
     if (s->quit) {
         rc = -EIO;
@@ -155,7 +172,8 @@ static int nbd_co_send_request(BlockDriverState *bs,
         goto err;
     }
 
-    if (qiov) {
+    if (s->requests[i].request->type == NBD_CMD_WRITE) {
+        assert(qiov);
         qio_channel_set_cork(s->ioc, true);
         rc = nbd_send_request(s->ioc, request);
         if (rc >= 0 && !s->quit) {
@@ -181,9 +199,7 @@ err:
     return rc;
 }
 
-static int nbd_co_receive_reply(NBDClientSession *s,
-                                NBDRequest *request,
-                                QEMUIOVector *qiov)
+static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
 {
     int ret;
     int i = HANDLE_TO_INDEX(s, request->handle);
@@ -197,14 +213,6 @@ static int nbd_co_receive_reply(NBDClientSession *s,
     } else {
         assert(s->reply.handle == request->handle);
         ret = -s->reply.error;
-        if (qiov && s->reply.error == 0) {
-            assert(request->len == iov_size(qiov->iov, qiov->niov));
-            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
-                                      NULL) < 0) {
-                ret = -EIO;
-                s->quit = true;
-            }
-        }
 
         /* Tell the read handler to read another header.  */
         s->reply.handle = 0;
@@ -232,16 +240,14 @@ static int nbd_co_request(BlockDriverState *bs,
     NBDClientSession *client = nbd_get_client_session(bs);
     int ret;
 
-    assert(!qiov || request->type == NBD_CMD_WRITE ||
-           request->type == NBD_CMD_READ);
-    ret = nbd_co_send_request(bs, request,
-                              request->type == NBD_CMD_WRITE ? qiov : NULL);
+    assert((qiov == NULL) !=
+           (request->type == NBD_CMD_WRITE || request->type == NBD_CMD_READ));
+    ret = nbd_co_send_request(bs, request, qiov);
     if (ret < 0) {
         return ret;
     }
 
-    return nbd_co_receive_reply(client, request,
-                                request->type == NBD_CMD_READ ? qiov : NULL);
+    return nbd_co_receive_reply(client, request);
 }
 
 int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 4/7] block/nbd-client: drop reply field from NBDClientSession
  2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply Vladimir Sementsov-Ogievskiy
@ 2017-09-18 13:59 ` Vladimir Sementsov-Ogievskiy
  2017-09-18 22:00   ` Eric Blake
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Drop 'reply' from NBDClientSession. It's used to only deliver request
return code to receiving coroutine. Instead introduce per-request ret
variable to simplify the scheme and make further refactoring possible.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.h |  2 +-
 block/nbd-client.c | 22 +++++++++-------------
 2 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/block/nbd-client.h b/block/nbd-client.h
index 3f97d31013..4bc8fe3993 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -22,6 +22,7 @@ typedef struct {
     bool receiving;         /* waiting for read_reply_co? */
     NBDRequest *request;
     QEMUIOVector *qiov;
+    int ret;
 } NBDClientRequest;
 
 typedef struct NBDClientSession {
@@ -35,7 +36,6 @@ typedef struct NBDClientSession {
     int in_flight;
 
     NBDClientRequest requests[MAX_NBD_REQUESTS];
-    NBDReply reply;
     bool quit;
 } NBDClientSession;
 
diff --git a/block/nbd-client.c b/block/nbd-client.c
index f7b642f079..f331f08a9a 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -74,10 +74,10 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     uint64_t i;
     int ret = 0;
     Error *local_err = NULL;
+    NBDReply reply;
 
     while (!s->quit) {
-        assert(s->reply.handle == 0);
-        ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
+        ret = nbd_receive_reply(s->ioc, &reply, &local_err);
         if (ret < 0) {
             error_report_err(local_err);
         }
@@ -89,18 +89,18 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
          * handler acts as a synchronization point and ensures that only
          * one coroutine is called until the reply finishes.
          */
-        i = HANDLE_TO_INDEX(s, s->reply.handle);
+        i = HANDLE_TO_INDEX(s, reply.handle);
         if (i >= MAX_NBD_REQUESTS ||
             !s->requests[i].coroutine ||
             !s->requests[i].receiving ||
-            s->reply.handle != s->requests[i].request->handle)
+            reply.handle != s->requests[i].request->handle)
         {
             break;
         }
 
-        if (s->reply.error == 0 &&
-            s->requests[i].request->type == NBD_CMD_READ)
-        {
+        s->requests[i].ret = -reply.error;
+
+        if (reply.error == 0 && s->requests[i].request->type == NBD_CMD_READ) {
             QEMUIOVector *qiov = s->requests[i].qiov;
             assert(qiov != NULL);
             assert(s->requests[i].request->len ==
@@ -108,7 +108,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
                                       NULL) < 0)
             {
-                s->reply.error = EIO;
+                s->requests[i].ret = -EIO;
                 break;
             }
         }
@@ -211,11 +211,7 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
     if (!s->ioc || s->quit) {
         ret = -EIO;
     } else {
-        assert(s->reply.handle == request->handle);
-        ret = -s->reply.error;
-
-        /* Tell the read handler to read another header.  */
-        s->reply.handle = 0;
+        ret = s->requests[i].ret;
     }
 
     s->requests[i].coroutine = NULL;
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel
  2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 4/7] block/nbd-client: drop reply field from NBDClientSession Vladimir Sementsov-Ogievskiy
@ 2017-09-18 13:59 ` Vladimir Sementsov-Ogievskiy
  2017-09-18 16:01   ` Eric Blake
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set Vladimir Sementsov-Ogievskiy
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
  6 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

It's incorrect to return success rc >= 0 if we skip qio_channel_writev_all()
call due to s->quit.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index f331f08a9a..280147e6a7 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -189,6 +189,9 @@ static int nbd_co_send_request(BlockDriverState *bs,
     }
 
 err:
+    if (rc >= 0 && s->quit) {
+        rc = -EIO;
+    }
     if (rc < 0) {
         s->quit = true;
         s->requests[i].coroutine = NULL;
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set
  2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (4 preceding siblings ...)
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel Vladimir Sementsov-Ogievskiy
@ 2017-09-18 13:59 ` Vladimir Sementsov-Ogievskiy
  2017-09-18 22:27   ` Eric Blake
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
  6 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Do not continue any operation if s->quit is set in parallel.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index 280147e6a7..f80a4c5564 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -81,7 +81,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
         if (ret < 0) {
             error_report_err(local_err);
         }
-        if (ret <= 0) {
+        if (ret <= 0 || s->quit) {
             break;
         }
 
@@ -105,9 +105,8 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             assert(qiov != NULL);
             assert(s->requests[i].request->len ==
                    iov_size(qiov->iov, qiov->niov));
-            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
-                                      NULL) < 0)
-            {
+            ret = qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov, NULL);
+            if (ret < 0 || s->quit) {
                 s->requests[i].ret = -EIO;
                 break;
             }
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry
  2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
                   ` (5 preceding siblings ...)
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set Vladimir Sementsov-Ogievskiy
@ 2017-09-18 13:59 ` Vladimir Sementsov-Ogievskiy
  2017-09-18 22:36   ` Eric Blake
  6 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-18 13:59 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, den, vsementsov

Refactor nbd client to not yield from nbd_read_reply_entry. It's
possible now as all reading is done in nbd_read_reply_entry and
all request-related data is stored in per-request s->requests[i].

We need here some additional work with s->requests[i].ret and
s->quit to not fail requests on normal disconnet and to not report
reading errors on normal disconnect.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index f80a4c5564..bf20d5d5e6 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -79,7 +79,11 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
     while (!s->quit) {
         ret = nbd_receive_reply(s->ioc, &reply, &local_err);
         if (ret < 0) {
-            error_report_err(local_err);
+            if (s->quit) {
+                error_free(local_err);
+            } else {
+                error_report_err(local_err);
+            }
         }
         if (ret <= 0 || s->quit) {
             break;
@@ -112,19 +116,8 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
             }
         }
 
-        /* We're woken up again by the request itself.  Note that there
-         * is no race between yielding and reentering read_reply_co.  This
-         * is because:
-         *
-         * - if the request runs on the same AioContext, it is only
-         *   entered after we yield
-         *
-         * - if the request runs on a different AioContext, reentering
-         *   read_reply_co happens through a bottom half, which can only
-         *   run after we yield.
-         */
+        s->requests[i].receiving = false;
         aio_co_wake(s->requests[i].coroutine);
-        qemu_coroutine_yield();
     }
 
     s->quit = true;
@@ -157,6 +150,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
 
     s->requests[i].coroutine = qemu_coroutine_self();
     s->requests[i].receiving = false;
+    s->requests[i].ret = -EIO;
 
     request->handle = INDEX_TO_HANDLE(s, i);
     s->requests[i].request = request;
@@ -210,7 +204,7 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
-    if (!s->ioc || s->quit) {
+    if (!s->ioc) {
         ret = -EIO;
     } else {
         ret = s->requests[i].ret;
@@ -218,11 +212,6 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
 
     s->requests[i].coroutine = NULL;
 
-    /* Kick the read_reply_co to get the next reply.  */
-    if (s->read_reply_co) {
-        aio_co_wake(s->read_reply_co);
-    }
-
     qemu_co_mutex_lock(&s->send_mutex);
     s->in_flight--;
     qemu_co_queue_next(&s->free_sema);
@@ -364,6 +353,8 @@ void nbd_client_close(BlockDriverState *bs)
 
     nbd_send_request(client->ioc, &request);
 
+    client->quit = true;
+
     nbd_teardown_connection(bs);
 }
 
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply Vladimir Sementsov-Ogievskiy
@ 2017-09-18 15:38   ` Eric Blake
  0 siblings, 0 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-18 15:38 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 620 bytes --]

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> "NBDReply *reply" parameter of nbd_co_receive_reply is used only
> to pass return value for nbd_co_request (reply.error). Remove it
> and use function return value instead.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply Vladimir Sementsov-Ogievskiy
@ 2017-09-18 15:43   ` Paolo Bonzini
  2017-09-18 15:54     ` Eric Blake
  2017-09-19  9:25     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 2 replies; 29+ messages in thread
From: Paolo Bonzini @ 2017-09-18 15:43 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, eblake, den

On 18/09/2017 15:59, Vladimir Sementsov-Ogievskiy wrote:
> Read the whole reply in one place - in nbd_read_reply_entry.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.h |  1 +
>  block/nbd-client.c | 42 ++++++++++++++++++++++++------------------
>  2 files changed, 25 insertions(+), 18 deletions(-)
> 
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index b1900e6a6d..3f97d31013 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -21,6 +21,7 @@ typedef struct {
>      Coroutine *coroutine;
>      bool receiving;         /* waiting for read_reply_co? */
>      NBDRequest *request;
> +    QEMUIOVector *qiov;
>  } NBDClientRequest;
>  
>  typedef struct NBDClientSession {
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 5f241ecc22..f7b642f079 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -98,6 +98,21 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>              break;
>          }
>  
> +        if (s->reply.error == 0 &&
> +            s->requests[i].request->type == NBD_CMD_READ)
> +        {
> +            QEMUIOVector *qiov = s->requests[i].qiov;
> +            assert(qiov != NULL);
> +            assert(s->requests[i].request->len ==
> +                   iov_size(qiov->iov, qiov->niov));
> +            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
> +                                      NULL) < 0)
> +            {
> +                s->reply.error = EIO;
> +                break;
> +            }
> +        }

I am not sure this is an improvement.  In principle you could have
commands that read replies a bit at a time without using a QEMUIOVector.

Paolo

>          /* We're woken up again by the request itself.  Note that there
>           * is no race between yielding and reentering read_reply_co.  This
>           * is because:
> @@ -118,6 +133,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>      s->read_reply_co = NULL;
>  }
>  
> +/* qiov should be provided for READ and WRITE */
>  static int nbd_co_send_request(BlockDriverState *bs,
>                                 NBDRequest *request,
>                                 QEMUIOVector *qiov)
> @@ -145,6 +161,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
>  
>      request->handle = INDEX_TO_HANDLE(s, i);
>      s->requests[i].request = request;
> +    s->requests[i].qiov = qiov;
>  
>      if (s->quit) {
>          rc = -EIO;
> @@ -155,7 +172,8 @@ static int nbd_co_send_request(BlockDriverState *bs,
>          goto err;
>      }
>  
> -    if (qiov) {
> +    if (s->requests[i].request->type == NBD_CMD_WRITE) {
> +        assert(qiov);
>          qio_channel_set_cork(s->ioc, true);
>          rc = nbd_send_request(s->ioc, request);
>          if (rc >= 0 && !s->quit) {
> @@ -181,9 +199,7 @@ err:
>      return rc;
>  }
>  
> -static int nbd_co_receive_reply(NBDClientSession *s,
> -                                NBDRequest *request,
> -                                QEMUIOVector *qiov)
> +static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>  {
>      int ret;
>      int i = HANDLE_TO_INDEX(s, request->handle);
> @@ -197,14 +213,6 @@ static int nbd_co_receive_reply(NBDClientSession *s,
>      } else {
>          assert(s->reply.handle == request->handle);
>          ret = -s->reply.error;
> -        if (qiov && s->reply.error == 0) {
> -            assert(request->len == iov_size(qiov->iov, qiov->niov));
> -            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
> -                                      NULL) < 0) {
> -                ret = -EIO;
> -                s->quit = true;
> -            }
> -        }
>  
>          /* Tell the read handler to read another header.  */
>          s->reply.handle = 0;
> @@ -232,16 +240,14 @@ static int nbd_co_request(BlockDriverState *bs,
>      NBDClientSession *client = nbd_get_client_session(bs);
>      int ret;
>  
> -    assert(!qiov || request->type == NBD_CMD_WRITE ||
> -           request->type == NBD_CMD_READ);
> -    ret = nbd_co_send_request(bs, request,
> -                              request->type == NBD_CMD_WRITE ? qiov : NULL);
> +    assert((qiov == NULL) !=
> +           (request->type == NBD_CMD_WRITE || request->type == NBD_CMD_READ));
> +    ret = nbd_co_send_request(bs, request, qiov);
>      if (ret < 0) {
>          return ret;
>      }
>  
> -    return nbd_co_receive_reply(client, request,
> -                                request->type == NBD_CMD_READ ? qiov : NULL);
> +    return nbd_co_receive_reply(client, request);
>  }
>  
>  int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
@ 2017-09-18 15:45   ` Eric Blake
  2017-09-18 19:50   ` Eric Blake
  1 sibling, 0 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-18 15:45 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 779 bytes --]

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> Check reply-handle == request-handle in the same place, where

s/,//

> recv coroutine number is calculated from reply->handle and it's
> correctness checked - in nbd_read_reply_entry.
> 
> Also finish nbd_read_reply_entry in case of reply-handle !=
> request-handle in the same way as in case of incorrect reply-handle.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.h | 1 +
>  block/nbd-client.c | 8 ++++++--
>  2 files changed, 7 insertions(+), 2 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply
  2017-09-18 15:43   ` Paolo Bonzini
@ 2017-09-18 15:54     ` Eric Blake
  2017-09-19  9:25     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-18 15:54 UTC (permalink / raw)
  To: Paolo Bonzini, Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, den

[-- Attachment #1: Type: text/plain, Size: 1590 bytes --]

On 09/18/2017 10:43 AM, Paolo Bonzini wrote:
> On 18/09/2017 15:59, Vladimir Sementsov-Ogievskiy wrote:
>> Read the whole reply in one place - in nbd_read_reply_entry.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>  block/nbd-client.h |  1 +
>>  block/nbd-client.c | 42 ++++++++++++++++++++++++------------------
>>  2 files changed, 25 insertions(+), 18 deletions(-)
>>

> 
> I am not sure this is an improvement.  In principle you could have
> commands that read replies a bit at a time without using a QEMUIOVector.

Right now we don't, but the most likely point where this would be an
issue is the fact that we want to implement structured replies (the
server can send more than one response to a single request from the
client) in order to then implement block status queries (where the
server can send piecemeal information in response to a query, and the
client could very easily want to handle information as it comes in
rather than waiting for the entire server response, especially if the
amount of information returned by the server is not known a priori by
the client, the way the length is known in advance for NBD_CMD_READ, but
instead learned partway through the reply).  I guess the question
becomes a matter of whether we are over-constraining future additions by
making this refactoring, or whether we can still implement block status
queries using a single QEMUIOVector.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel Vladimir Sementsov-Ogievskiy
@ 2017-09-18 16:01   ` Eric Blake
  2017-09-18 19:16     ` Eric Blake
  2017-09-20 12:03     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 2 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-18 16:01 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1405 bytes --]

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> It's incorrect to return success rc >= 0 if we skip qio_channel_writev_all()
> call due to s->quit.

Does this need to cc: qemu-stable for 2.10.1 (or put another way, can we
come up with some scenario of EAGAIN or other handling that would
actually set s->quit in a parallel coroutine when a client sends out
multiple requests at once)?

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index f331f08a9a..280147e6a7 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -189,6 +189,9 @@ static int nbd_co_send_request(BlockDriverState *bs,
>      }
>  
>  err:
> +    if (rc >= 0 && s->quit) {
> +        rc = -EIO;
> +    }

I'm still not convinced this is in the right place.  This fails the
send_request regardless of whether we skipped qio_channel_writev_all();
shouldn't the rc be set only in the case that we actually skipped
writing the full command because s->quit was detected at that point in time?

>      if (rc < 0) {
>          s->quit = true;
>          s->requests[i].coroutine = NULL;
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel
  2017-09-18 16:01   ` Eric Blake
@ 2017-09-18 19:16     ` Eric Blake
  2017-09-20 12:03     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-18 19:16 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: kwolf, pbonzini, den, mreitz

[-- Attachment #1: Type: text/plain, Size: 747 bytes --]

On 09/18/2017 11:01 AM, Eric Blake wrote:
> On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
>> It's incorrect to return success rc >= 0 if we skip qio_channel_writev_all()
>> call due to s->quit.
> 
> Does this need to cc: qemu-stable for 2.10.1 (or put another way, can we
> come up with some scenario of EAGAIN or other handling that would
> actually set s->quit in a parallel coroutine when a client sends out
> multiple requests at once)?

Also, long subject line (longer than 80 columns!).  I'd suggest
something like:

nbd-client: fail with -EIO on early exit of nbd_co_send_request

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
  2017-09-18 15:45   ` Eric Blake
@ 2017-09-18 19:50   ` Eric Blake
  1 sibling, 0 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-18 19:50 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 2516 bytes --]

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> Check reply-handle == request-handle in the same place, where
> recv coroutine number is calculated from reply->handle and it's
> correctness checked - in nbd_read_reply_entry.
> 
> Also finish nbd_read_reply_entry in case of reply-handle !=
> request-handle in the same way as in case of incorrect reply-handle.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.h | 1 +
>  block/nbd-client.c | 8 ++++++--
>  2 files changed, 7 insertions(+), 2 deletions(-)

On second thought:

> +++ b/block/nbd-client.c
> @@ -92,7 +92,9 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>          i = HANDLE_TO_INDEX(s, s->reply.handle);
>          if (i >= MAX_NBD_REQUESTS ||
>              !s->requests[i].coroutine ||
> -            !s->requests[i].receiving) {
> +            !s->requests[i].receiving ||
> +            s->reply.handle != s->requests[i].request->handle)

How can this condition ever be false?  s->reply.handle only ever
contains two values: 0 when it is being reused for the next iteration of
the loop, or the (munged) address of the request pointer.  But we are
careful that we never write s->reply.handle = 0 until after
s->requests[i].receiving is false.  So we will never see s->reply.handle
equal to 0 here.  (It may have been possible prior to the introduction
of reply.receiving, in commit 40f4a218, but I'm not seeing it now)

If I'm right, then this patch can be simplified - we don't need to track
s.requests[i].request, and can merely:

> @@ -189,9 +192,10 @@ static int nbd_co_receive_reply(NBDClientSession *s,
>      s->requests[i].receiving = true;
>      qemu_coroutine_yield();
>      s->requests[i].receiving = false;
> -    if (s->reply.handle != request->handle || !s->ioc || s->quit) {

shorten this conditional to remove the now-dead condition.

> +    if (!s->ioc || s->quit) {
>          ret = -EIO;
>      } else {
> +        assert(s->reply.handle == request->handle);
>          ret = -s->reply.error;
>          if (qiov && s->reply.error == 0) {
>              assert(request->len == iov_size(qiov->iov, qiov->niov));
> 

[Oh, and I just noticed HANDLE_TO_INDEX() and INDEX_TO_HANDLE() are
improperly parenthesized, although to no ill effect with current clients]

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/7] block/nbd-client: drop reply field from NBDClientSession
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 4/7] block/nbd-client: drop reply field from NBDClientSession Vladimir Sementsov-Ogievskiy
@ 2017-09-18 22:00   ` Eric Blake
  0 siblings, 0 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-18 22:00 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1656 bytes --]

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> Drop 'reply' from NBDClientSession. It's used to only deliver request
> return code to receiving coroutine. Instead introduce per-request ret
> variable to simplify the scheme and make further refactoring possible.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.h |  2 +-
>  block/nbd-client.c | 22 +++++++++-------------
>  2 files changed, 10 insertions(+), 14 deletions(-)
> 
> diff --git a/block/nbd-client.h b/block/nbd-client.h
> index 3f97d31013..4bc8fe3993 100644
> --- a/block/nbd-client.h
> +++ b/block/nbd-client.h
> @@ -22,6 +22,7 @@ typedef struct {
>      bool receiving;         /* waiting for read_reply_co? */
>      NBDRequest *request;
>      QEMUIOVector *qiov;
> +    int ret;
>  } NBDClientRequest;

I like this idea.  However, I note that:

> @@ -211,11 +211,7 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>      if (!s->ioc || s->quit) {
>          ret = -EIO;
>      } else {
> -        assert(s->reply.handle == request->handle);
> -        ret = -s->reply.error;
> -
> -        /* Tell the read handler to read another header.  */
> -        s->reply.handle = 0;
> +        ret = s->requests[i].ret;

you are removing the assert you added in 2/7, where I questioned whether
we needed NBDClientRequest.request in the first place.  So there may be
some rebase churn, depending on how that conversation pans out.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set Vladimir Sementsov-Ogievskiy
@ 2017-09-18 22:27   ` Eric Blake
  2017-09-19  9:43     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 29+ messages in thread
From: Eric Blake @ 2017-09-18 22:27 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 3109 bytes --]

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> Do not continue any operation if s->quit is set in parallel.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index 280147e6a7..f80a4c5564 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -81,7 +81,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>          if (ret < 0) {
>              error_report_err(local_err);
>          }
> -        if (ret <= 0) {
> +        if (ret <= 0 || s->quit) {
>              break;
>          }

I'm still not convinced this helps: either nbd_receive_reply() already
returned an error, or we got a successful reply header, at which point
either the command is done (no need to abort the command early - it
succeeded) or it is a read command (and we should detect at the point
where we try to populate the qiov that we don't want to read any more).
It depends on your 3/7 patch for the fact that we even try to read the
qiov directly in this while loop rather than in the coroutine handler,
where Paolo questioned whether we need that change.

> @@ -105,9 +105,8 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>              assert(qiov != NULL);
>              assert(s->requests[i].request->len ==
>                     iov_size(qiov->iov, qiov->niov));
> -            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
> -                                      NULL) < 0)
> -            {
> +            ret = qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov, NULL);
> +            if (ret < 0 || s->quit) {
>                  s->requests[i].ret = -EIO;
>                  break;
>              }

The placement here looks odd. Either we should not attempt the read
because s->quit was already set (okay, your first addition makes sense
in that light), or we succeeded at the read (at which point, why do we
need to claim it failed?).

I'm trying to look forward to structured reads, where we will have to
deal with more than one server message in reply to a single client
request.  For read, we just piece together portions of the qiov until
the server has sent us all the pieces.  But for block query, I really do
think we'll want to defer to specialized handlers for doing further
reads (the amount of data to be read from the server is not known by the
client a priori, so it is a two-part read, one to get the length, and
another to read that remaining length); if we need some sort of callback
function rather than cramming all the logic here in the main read loop,
then the qio_channel_readv_all for read commands would belong in the
callback, and it is more a question of the main read loop checking
whether there is an early quit condition before calling into the callback.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry
  2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
@ 2017-09-18 22:36   ` Eric Blake
  2017-09-19 10:00     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 29+ messages in thread
From: Eric Blake @ 2017-09-18 22:36 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 2873 bytes --]

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> Refactor nbd client to not yield from nbd_read_reply_entry. It's
> possible now as all reading is done in nbd_read_reply_entry and
> all request-related data is stored in per-request s->requests[i].
> 
> We need here some additional work with s->requests[i].ret and
> s->quit to not fail requests on normal disconnet and to not report
> reading errors on normal disconnect.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd-client.c | 29 ++++++++++-------------------
>  1 file changed, 10 insertions(+), 19 deletions(-)

The diffstat may have merit, but I'm wondering:

> 
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index f80a4c5564..bf20d5d5e6 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -79,7 +79,11 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>      while (!s->quit) {
>          ret = nbd_receive_reply(s->ioc, &reply, &local_err);
>          if (ret < 0) {
> -            error_report_err(local_err);
> +            if (s->quit) {
> +                error_free(local_err);

This discards errors on all remaining coroutines after we detect an
early exit. Are we sure the coroutine that set s->quit will have
reported an appropriate error, and thus explaining why we can discard
all secondary errors?  A comment in the code would be helpful.

> +            } else {
> +                error_report_err(local_err);
> +            }
>          }

> @@ -210,7 +204,7 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>      s->requests[i].receiving = true;
>      qemu_coroutine_yield();
>      s->requests[i].receiving = false;
> -    if (!s->ioc || s->quit) {
> +    if (!s->ioc) {
>          ret = -EIO;

Why don't we need to check s->quit any more?  That was just added
earlier in the series; why the churn?

>      } else {
>          ret = s->requests[i].ret;
> @@ -218,11 +212,6 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>  
>      s->requests[i].coroutine = NULL;
>  
> -    /* Kick the read_reply_co to get the next reply.  */
> -    if (s->read_reply_co) {
> -        aio_co_wake(s->read_reply_co);
> -    }
> -
>      qemu_co_mutex_lock(&s->send_mutex);
>      s->in_flight--;
>      qemu_co_queue_next(&s->free_sema);
> @@ -364,6 +353,8 @@ void nbd_client_close(BlockDriverState *bs)
>  
>      nbd_send_request(client->ioc, &request);
>  
> +    client->quit = true;

Previously, client->quit was only set when detecting a broken server,
now it is also set for a clean exit.  Do we need to change any
documentation of the field?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply
  2017-09-18 15:43   ` Paolo Bonzini
  2017-09-18 15:54     ` Eric Blake
@ 2017-09-19  9:25     ` Vladimir Sementsov-Ogievskiy
  2017-09-19 10:01       ` Paolo Bonzini
  1 sibling, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-19  9:25 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-block, qemu-devel; +Cc: mreitz, kwolf, eblake, den

18.09.2017 18:43, Paolo Bonzini wrote:
> On 18/09/2017 15:59, Vladimir Sementsov-Ogievskiy wrote:
>> Read the whole reply in one place - in nbd_read_reply_entry.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd-client.h |  1 +
>>   block/nbd-client.c | 42 ++++++++++++++++++++++++------------------
>>   2 files changed, 25 insertions(+), 18 deletions(-)
>>
>> diff --git a/block/nbd-client.h b/block/nbd-client.h
>> index b1900e6a6d..3f97d31013 100644
>> --- a/block/nbd-client.h
>> +++ b/block/nbd-client.h
>> @@ -21,6 +21,7 @@ typedef struct {
>>       Coroutine *coroutine;
>>       bool receiving;         /* waiting for read_reply_co? */
>>       NBDRequest *request;
>> +    QEMUIOVector *qiov;
>>   } NBDClientRequest;
>>   
>>   typedef struct NBDClientSession {
>> diff --git a/block/nbd-client.c b/block/nbd-client.c
>> index 5f241ecc22..f7b642f079 100644
>> --- a/block/nbd-client.c
>> +++ b/block/nbd-client.c
>> @@ -98,6 +98,21 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>>               break;
>>           }
>>   
>> +        if (s->reply.error == 0 &&
>> +            s->requests[i].request->type == NBD_CMD_READ)
>> +        {
>> +            QEMUIOVector *qiov = s->requests[i].qiov;
>> +            assert(qiov != NULL);
>> +            assert(s->requests[i].request->len ==
>> +                   iov_size(qiov->iov, qiov->niov));
>> +            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
>> +                                      NULL) < 0)
>> +            {
>> +                s->reply.error = EIO;
>> +                break;
>> +            }
>> +        }
> I am not sure this is an improvement.  In principle you could have
> commands that read replies a bit at a time without using a QEMUIOVector.
>
> Paolo

Hm, what do you mean? In this patch I don't change "how do we read it", 
I only move the reading code to one coroutine, to make it simple to 
extend it in future (block status, etc). nbd_read_reply_enty has all 
information it need (s->requests[i].request) to handle the whole reply.. 
It's an improvement, because it leads to separating reply_entry 
coroutine - it don't need any synchronization (yield/wake) more with 
request coroutines.

>
>>           /* We're woken up again by the request itself.  Note that there
>>            * is no race between yielding and reentering read_reply_co.  This
>>            * is because:
>> @@ -118,6 +133,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>>       s->read_reply_co = NULL;
>>   }
>>   
>> +/* qiov should be provided for READ and WRITE */
>>   static int nbd_co_send_request(BlockDriverState *bs,
>>                                  NBDRequest *request,
>>                                  QEMUIOVector *qiov)
>> @@ -145,6 +161,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>   
>>       request->handle = INDEX_TO_HANDLE(s, i);
>>       s->requests[i].request = request;
>> +    s->requests[i].qiov = qiov;
>>   
>>       if (s->quit) {
>>           rc = -EIO;
>> @@ -155,7 +172,8 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>           goto err;
>>       }
>>   
>> -    if (qiov) {
>> +    if (s->requests[i].request->type == NBD_CMD_WRITE) {
>> +        assert(qiov);
>>           qio_channel_set_cork(s->ioc, true);
>>           rc = nbd_send_request(s->ioc, request);
>>           if (rc >= 0 && !s->quit) {
>> @@ -181,9 +199,7 @@ err:
>>       return rc;
>>   }
>>   
>> -static int nbd_co_receive_reply(NBDClientSession *s,
>> -                                NBDRequest *request,
>> -                                QEMUIOVector *qiov)
>> +static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>>   {
>>       int ret;
>>       int i = HANDLE_TO_INDEX(s, request->handle);
>> @@ -197,14 +213,6 @@ static int nbd_co_receive_reply(NBDClientSession *s,
>>       } else {
>>           assert(s->reply.handle == request->handle);
>>           ret = -s->reply.error;
>> -        if (qiov && s->reply.error == 0) {
>> -            assert(request->len == iov_size(qiov->iov, qiov->niov));
>> -            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
>> -                                      NULL) < 0) {
>> -                ret = -EIO;
>> -                s->quit = true;
>> -            }
>> -        }
>>   
>>           /* Tell the read handler to read another header.  */
>>           s->reply.handle = 0;
>> @@ -232,16 +240,14 @@ static int nbd_co_request(BlockDriverState *bs,
>>       NBDClientSession *client = nbd_get_client_session(bs);
>>       int ret;
>>   
>> -    assert(!qiov || request->type == NBD_CMD_WRITE ||
>> -           request->type == NBD_CMD_READ);
>> -    ret = nbd_co_send_request(bs, request,
>> -                              request->type == NBD_CMD_WRITE ? qiov : NULL);
>> +    assert((qiov == NULL) !=
>> +           (request->type == NBD_CMD_WRITE || request->type == NBD_CMD_READ));
>> +    ret = nbd_co_send_request(bs, request, qiov);
>>       if (ret < 0) {
>>           return ret;
>>       }
>>   
>> -    return nbd_co_receive_reply(client, request,
>> -                                request->type == NBD_CMD_READ ? qiov : NULL);
>> +    return nbd_co_receive_reply(client, request);
>>   }
>>   
>>   int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
>>

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set
  2017-09-18 22:27   ` Eric Blake
@ 2017-09-19  9:43     ` Vladimir Sementsov-Ogievskiy
  2017-09-19 10:03       ` Paolo Bonzini
  0 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-19  9:43 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, den

19.09.2017 01:27, Eric Blake wrote:
> On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Do not continue any operation if s->quit is set in parallel.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd-client.c | 7 +++----
>>   1 file changed, 3 insertions(+), 4 deletions(-)
>>
>> diff --git a/block/nbd-client.c b/block/nbd-client.c
>> index 280147e6a7..f80a4c5564 100644
>> --- a/block/nbd-client.c
>> +++ b/block/nbd-client.c
>> @@ -81,7 +81,7 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>>           if (ret < 0) {
>>               error_report_err(local_err);
>>           }
>> -        if (ret <= 0) {
>> +        if (ret <= 0 || s->quit) {
>>               break;
>>           }
> I'm still not convinced this helps: either nbd_receive_reply() already
> returned an error, or we got a successful reply header, at which point
> either the command is done (no need to abort the command early - it
> succeeded) or it is a read command (and we should detect at the point
> where we try to populate the qiov that we don't want to read any more).
> It depends on your 3/7 patch for the fact that we even try to read the
> qiov directly in this while loop rather than in the coroutine handler,
> where Paolo questioned whether we need that change.
>
>> @@ -105,9 +105,8 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>>               assert(qiov != NULL);
>>               assert(s->requests[i].request->len ==
>>                      iov_size(qiov->iov, qiov->niov));
>> -            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
>> -                                      NULL) < 0)
>> -            {
>> +            ret = qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov, NULL);
>> +            if (ret < 0 || s->quit) {
>>                   s->requests[i].ret = -EIO;
>>                   break;
>>               }
> The placement here looks odd. Either we should not attempt the read
> because s->quit was already set (okay, your first addition makes sense
> in that light), or we succeeded at the read (at which point, why do we
> need to claim it failed?).
>
> I'm trying to look forward to structured reads, where we will have to
> deal with more than one server message in reply to a single client
> request.  For read, we just piece together portions of the qiov until
> the server has sent us all the pieces.  But for block query, I really do
> think we'll want to defer to specialized handlers for doing further
> reads (the amount of data to be read from the server is not known by the
> client a priori, so it is a two-part read, one to get the length, and
> another to read that remaining length); if we need some sort of callback
> function rather than cramming all the logic here in the main read loop,

by patch 3 I do not mean in any way that I want to have all reading 
staff in one function, ofcourse it should be refactored
for block-status addition. I just want to have all reading staff in one 
coroutine. Reading from ioc is sequential, so it's
native to do it sequentially in one coroutine, without switches.

> then the qio_channel_readv_all for read commands would belong in the
> callback, and it is more a question of the main read loop checking
> whether there is an early quit condition before calling into the callback.
>

If I understand correctly, the discussion is:

me: if something fails - fail all other parallel operations - it's 
simple and clear, but we need to check s->quit after every possible yield

you: .... _not_ all other parallel operations. - may be it is better, 
but in this case we need carefully define, which parallel operation we fail

  on s->quit and which not and understand all effects of this division.

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry
  2017-09-18 22:36   ` Eric Blake
@ 2017-09-19 10:00     ` Vladimir Sementsov-Ogievskiy
  2017-09-19 13:45       ` Eric Blake
  0 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-19 10:00 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, den

19.09.2017 01:36, Eric Blake wrote:
> On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Refactor nbd client to not yield from nbd_read_reply_entry. It's
>> possible now as all reading is done in nbd_read_reply_entry and
>> all request-related data is stored in per-request s->requests[i].
>>
>> We need here some additional work with s->requests[i].ret and
>> s->quit to not fail requests on normal disconnet and to not report
>> reading errors on normal disconnect.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd-client.c | 29 ++++++++++-------------------
>>   1 file changed, 10 insertions(+), 19 deletions(-)
> The diffstat may have merit, but I'm wondering:
>
>> diff --git a/block/nbd-client.c b/block/nbd-client.c
>> index f80a4c5564..bf20d5d5e6 100644
>> --- a/block/nbd-client.c
>> +++ b/block/nbd-client.c
>> @@ -79,7 +79,11 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>>       while (!s->quit) {
>>           ret = nbd_receive_reply(s->ioc, &reply, &local_err);
>>           if (ret < 0) {
>> -            error_report_err(local_err);
>> +            if (s->quit) {
>> +                error_free(local_err);
> This discards errors on all remaining coroutines after we detect an
> early exit. Are we sure the coroutine that set s->quit will have
> reported an appropriate error, and thus explaining why we can discard
> all secondary errors?  A comment in the code would be helpful.

I'll think about it.

>
>> +            } else {
>> +                error_report_err(local_err);
>> +            }
>>           }
>> @@ -210,7 +204,7 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>>       s->requests[i].receiving = true;
>>       qemu_coroutine_yield();
>>       s->requests[i].receiving = false;
>> -    if (!s->ioc || s->quit) {
>> +    if (!s->ioc) {
>>           ret = -EIO;
> Why don't we need to check s->quit any more?  That was just added
> earlier in the series; why the churn?

it is "to not fail requests on normal disconnet". Because reply_entry 
may be already finished.
Initializing "+    s->requests[i].ret = -EIO;" is enough.

>
>>       } else {
>>           ret = s->requests[i].ret;
>> @@ -218,11 +212,6 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>>   
>>       s->requests[i].coroutine = NULL;
>>   
>> -    /* Kick the read_reply_co to get the next reply.  */
>> -    if (s->read_reply_co) {
>> -        aio_co_wake(s->read_reply_co);
>> -    }
>> -
>>       qemu_co_mutex_lock(&s->send_mutex);
>>       s->in_flight--;
>>       qemu_co_queue_next(&s->free_sema);
>> @@ -364,6 +353,8 @@ void nbd_client_close(BlockDriverState *bs)
>>   
>>       nbd_send_request(client->ioc, &request);
>>   
>> +    client->quit = true;
> Previously, client->quit was only set when detecting a broken server,
> now it is also set for a clean exit.  Do we need to change any
> documentation of the field?

It has documentation?

>

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply
  2017-09-19  9:25     ` Vladimir Sementsov-Ogievskiy
@ 2017-09-19 10:01       ` Paolo Bonzini
  2017-09-19 11:03         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2017-09-19 10:01 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, eblake, den

On 19/09/2017 11:25, Vladimir Sementsov-Ogievskiy wrote:
> 18.09.2017 18:43, Paolo Bonzini wrote:
>> On 18/09/2017 15:59, Vladimir Sementsov-Ogievskiy wrote:
>>> Read the whole reply in one place - in nbd_read_reply_entry.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   block/nbd-client.h |  1 +
>>>   block/nbd-client.c | 42 ++++++++++++++++++++++++------------------
>>>   2 files changed, 25 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/block/nbd-client.h b/block/nbd-client.h
>>> index b1900e6a6d..3f97d31013 100644
>>> --- a/block/nbd-client.h
>>> +++ b/block/nbd-client.h
>>> @@ -21,6 +21,7 @@ typedef struct {
>>>       Coroutine *coroutine;
>>>       bool receiving;         /* waiting for read_reply_co? */
>>>       NBDRequest *request;
>>> +    QEMUIOVector *qiov;
>>>   } NBDClientRequest;
>>>     typedef struct NBDClientSession {
>>> diff --git a/block/nbd-client.c b/block/nbd-client.c
>>> index 5f241ecc22..f7b642f079 100644
>>> --- a/block/nbd-client.c
>>> +++ b/block/nbd-client.c
>>> @@ -98,6 +98,21 @@ static coroutine_fn void nbd_read_reply_entry(void
>>> *opaque)
>>>               break;
>>>           }
>>>   +        if (s->reply.error == 0 &&
>>> +            s->requests[i].request->type == NBD_CMD_READ)
>>> +        {
>>> +            QEMUIOVector *qiov = s->requests[i].qiov;
>>> +            assert(qiov != NULL);
>>> +            assert(s->requests[i].request->len ==
>>> +                   iov_size(qiov->iov, qiov->niov));
>>> +            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
>>> +                                      NULL) < 0)
>>> +            {
>>> +                s->reply.error = EIO;
>>> +                break;
>>> +            }
>>> +        }
>> I am not sure this is an improvement.  In principle you could have
>> commands that read replies a bit at a time without using a QEMUIOVector.
>>
>> Paolo
> 
> Hm, what do you mean? In this patch I don't change "how do we read it",
> I only move the reading code to one coroutine, to make it simple to
> extend it in future (block status, etc).

I disagree that it is easier to extend it in the future.  If some
commands in the future need a different "how do we read it" (e.g. for
structured reply), nbd_read_reply_entry may not have all the information
it needs anymore.

In fact, you're pushing knowledge of the commands from nbd_co_request to
nbd_read_reply_entry:

+        if (s->reply.error == 0 &&
+            s->requests[i].request->type == NBD_CMD_READ)

and knowledge of NBD_CMD_READ simply doesn't belong there.  So there is
an example of that right now, it is not some arbitrary bad thing that
could happen in the future.

Paolo


> nbd_read_reply_enty has all
> information it need (s->requests[i].request) to handle the whole reply..
> It's an improvement, because it leads to separating reply_entry
> coroutine - it don't need any synchronization (yield/wake) more with
> request coroutines.
> 
>>
>>>           /* We're woken up again by the request itself.  Note that
>>> there
>>>            * is no race between yielding and reentering
>>> read_reply_co.  This
>>>            * is because:
>>> @@ -118,6 +133,7 @@ static coroutine_fn void
>>> nbd_read_reply_entry(void *opaque)
>>>       s->read_reply_co = NULL;
>>>   }
>>>   +/* qiov should be provided for READ and WRITE */
>>>   static int nbd_co_send_request(BlockDriverState *bs,
>>>                                  NBDRequest *request,
>>>                                  QEMUIOVector *qiov)
>>> @@ -145,6 +161,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>>         request->handle = INDEX_TO_HANDLE(s, i);
>>>       s->requests[i].request = request;
>>> +    s->requests[i].qiov = qiov;
>>>         if (s->quit) {
>>>           rc = -EIO;
>>> @@ -155,7 +172,8 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>>           goto err;
>>>       }
>>>   -    if (qiov) {
>>> +    if (s->requests[i].request->type == NBD_CMD_WRITE) {
>>> +        assert(qiov);
>>>           qio_channel_set_cork(s->ioc, true);
>>>           rc = nbd_send_request(s->ioc, request);
>>>           if (rc >= 0 && !s->quit) {
>>> @@ -181,9 +199,7 @@ err:
>>>       return rc;
>>>   }
>>>   -static int nbd_co_receive_reply(NBDClientSession *s,
>>> -                                NBDRequest *request,
>>> -                                QEMUIOVector *qiov)
>>> +static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest
>>> *request)
>>>   {
>>>       int ret;
>>>       int i = HANDLE_TO_INDEX(s, request->handle);
>>> @@ -197,14 +213,6 @@ static int nbd_co_receive_reply(NBDClientSession
>>> *s,
>>>       } else {
>>>           assert(s->reply.handle == request->handle);
>>>           ret = -s->reply.error;
>>> -        if (qiov && s->reply.error == 0) {
>>> -            assert(request->len == iov_size(qiov->iov, qiov->niov));
>>> -            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
>>> -                                      NULL) < 0) {
>>> -                ret = -EIO;
>>> -                s->quit = true;
>>> -            }
>>> -        }
>>>             /* Tell the read handler to read another header.  */
>>>           s->reply.handle = 0;
>>> @@ -232,16 +240,14 @@ static int nbd_co_request(BlockDriverState *bs,
>>>       NBDClientSession *client = nbd_get_client_session(bs);
>>>       int ret;
>>>   -    assert(!qiov || request->type == NBD_CMD_WRITE ||
>>> -           request->type == NBD_CMD_READ);
>>> -    ret = nbd_co_send_request(bs, request,
>>> -                              request->type == NBD_CMD_WRITE ? qiov
>>> : NULL);
>>> +    assert((qiov == NULL) !=
>>> +           (request->type == NBD_CMD_WRITE || request->type ==
>>> NBD_CMD_READ));
>>> +    ret = nbd_co_send_request(bs, request, qiov);
>>>       if (ret < 0) {
>>>           return ret;
>>>       }
>>>   -    return nbd_co_receive_reply(client, request,
>>> -                                request->type == NBD_CMD_READ ? qiov
>>> : NULL);
>>> +    return nbd_co_receive_reply(client, request);
>>>   }
>>>     int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
>>>
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set
  2017-09-19  9:43     ` Vladimir Sementsov-Ogievskiy
@ 2017-09-19 10:03       ` Paolo Bonzini
  2017-09-19 11:07         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2017-09-19 10:03 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, Eric Blake, qemu-block, qemu-devel
  Cc: mreitz, kwolf, den

On 19/09/2017 11:43, Vladimir Sementsov-Ogievskiy wrote:
>>
>> I'm trying to look forward to structured reads, where we will have to
>> deal with more than one server message in reply to a single client
>> request.  For read, we just piece together portions of the qiov until
>> the server has sent us all the pieces.  But for block query, I really do
>> think we'll want to defer to specialized handlers for doing further
>> reads (the amount of data to be read from the server is not known by the
>> client a priori, so it is a two-part read, one to get the length, and
>> another to read that remaining length); if we need some sort of callback
>> function rather than cramming all the logic here in the main read loop,
> 
> by patch 3 I do not mean in any way that I want to have all reading
> staff in one function, ofcourse it should be refactored
> for block-status addition. I just want to have all reading staff in one
> coroutine. Reading from ioc is sequential, so it's
> native to do it sequentially in one coroutine, without switches.

But why is that a goal?  See it as a state machine that goes between the
"waiting for header" and "waiting for payload" states.  Reading header
corresponds to read_reply_co waiting on the socket (and reading when
it's ready); reading payload corresponds to the request coroutine
waiting on the socket and reading when it's ready.

Paolo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply
  2017-09-19 10:01       ` Paolo Bonzini
@ 2017-09-19 11:03         ` Vladimir Sementsov-Ogievskiy
  2017-09-19 12:50           ` Paolo Bonzini
  0 siblings, 1 reply; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-19 11:03 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-block, qemu-devel; +Cc: mreitz, kwolf, eblake, den

19.09.2017 13:01, Paolo Bonzini wrote:
> On 19/09/2017 11:25, Vladimir Sementsov-Ogievskiy wrote:
>> 18.09.2017 18:43, Paolo Bonzini wrote:
>>> On 18/09/2017 15:59, Vladimir Sementsov-Ogievskiy wrote:
>>>> Read the whole reply in one place - in nbd_read_reply_entry.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>    block/nbd-client.h |  1 +
>>>>    block/nbd-client.c | 42 ++++++++++++++++++++++++------------------
>>>>    2 files changed, 25 insertions(+), 18 deletions(-)
>>>>
>>>> diff --git a/block/nbd-client.h b/block/nbd-client.h
>>>> index b1900e6a6d..3f97d31013 100644
>>>> --- a/block/nbd-client.h
>>>> +++ b/block/nbd-client.h
>>>> @@ -21,6 +21,7 @@ typedef struct {
>>>>        Coroutine *coroutine;
>>>>        bool receiving;         /* waiting for read_reply_co? */
>>>>        NBDRequest *request;
>>>> +    QEMUIOVector *qiov;
>>>>    } NBDClientRequest;
>>>>      typedef struct NBDClientSession {
>>>> diff --git a/block/nbd-client.c b/block/nbd-client.c
>>>> index 5f241ecc22..f7b642f079 100644
>>>> --- a/block/nbd-client.c
>>>> +++ b/block/nbd-client.c
>>>> @@ -98,6 +98,21 @@ static coroutine_fn void nbd_read_reply_entry(void
>>>> *opaque)
>>>>                break;
>>>>            }
>>>>    +        if (s->reply.error == 0 &&
>>>> +            s->requests[i].request->type == NBD_CMD_READ)
>>>> +        {
>>>> +            QEMUIOVector *qiov = s->requests[i].qiov;
>>>> +            assert(qiov != NULL);
>>>> +            assert(s->requests[i].request->len ==
>>>> +                   iov_size(qiov->iov, qiov->niov));
>>>> +            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
>>>> +                                      NULL) < 0)
>>>> +            {
>>>> +                s->reply.error = EIO;
>>>> +                break;
>>>> +            }
>>>> +        }
>>> I am not sure this is an improvement.  In principle you could have
>>> commands that read replies a bit at a time without using a QEMUIOVector.
>>>
>>> Paolo
>> Hm, what do you mean? In this patch I don't change "how do we read it",
>> I only move the reading code to one coroutine, to make it simple to
>> extend it in future (block status, etc).
> I disagree that it is easier to extend it in the future.  If some
> commands in the future need a different "how do we read it" (e.g. for
> structured reply), nbd_read_reply_entry may not have all the information
> it needs anymore.

how is it possible? all information is in requests[i].

>
> In fact, you're pushing knowledge of the commands from nbd_co_request to
> nbd_read_reply_entry:
>
> +        if (s->reply.error == 0 &&
> +            s->requests[i].request->type == NBD_CMD_READ)
>
> and knowledge of NBD_CMD_READ simply doesn't belong there.  So there is
> an example of that right now, it is not some arbitrary bad thing that
> could happen in the future.
>
> Paolo

I can't agree that my point of view is wrong, it's just different.
You want commands, reading from socket, each knows what it should read.
I want the receiver - one sequential reader, that can read all kinds of 
replies and just
wake requesting coroutines when all reading is done. For me it looks simpler
than switching.

We can add reading callbacks for commands which have some payload, like
s->requests[i].read_payload(ioc, request) or may be 
s->requests[i].request->read_payload(...),
and call them from reply_entry instead of if (s->... == NBD_CMD_READ).

How do you see the refactoring for introducing structured reply?


>
>
>> nbd_read_reply_enty has all
>> information it need (s->requests[i].request) to handle the whole reply..
>> It's an improvement, because it leads to separating reply_entry
>> coroutine - it don't need any synchronization (yield/wake) more with
>> request coroutines.
>>
>>>>            /* We're woken up again by the request itself.  Note that
>>>> there
>>>>             * is no race between yielding and reentering
>>>> read_reply_co.  This
>>>>             * is because:
>>>> @@ -118,6 +133,7 @@ static coroutine_fn void
>>>> nbd_read_reply_entry(void *opaque)
>>>>        s->read_reply_co = NULL;
>>>>    }
>>>>    +/* qiov should be provided for READ and WRITE */
>>>>    static int nbd_co_send_request(BlockDriverState *bs,
>>>>                                   NBDRequest *request,
>>>>                                   QEMUIOVector *qiov)
>>>> @@ -145,6 +161,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>>>          request->handle = INDEX_TO_HANDLE(s, i);
>>>>        s->requests[i].request = request;
>>>> +    s->requests[i].qiov = qiov;
>>>>          if (s->quit) {
>>>>            rc = -EIO;
>>>> @@ -155,7 +172,8 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>>>            goto err;
>>>>        }
>>>>    -    if (qiov) {
>>>> +    if (s->requests[i].request->type == NBD_CMD_WRITE) {
>>>> +        assert(qiov);
>>>>            qio_channel_set_cork(s->ioc, true);
>>>>            rc = nbd_send_request(s->ioc, request);
>>>>            if (rc >= 0 && !s->quit) {
>>>> @@ -181,9 +199,7 @@ err:
>>>>        return rc;
>>>>    }
>>>>    -static int nbd_co_receive_reply(NBDClientSession *s,
>>>> -                                NBDRequest *request,
>>>> -                                QEMUIOVector *qiov)
>>>> +static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest
>>>> *request)
>>>>    {
>>>>        int ret;
>>>>        int i = HANDLE_TO_INDEX(s, request->handle);
>>>> @@ -197,14 +213,6 @@ static int nbd_co_receive_reply(NBDClientSession
>>>> *s,
>>>>        } else {
>>>>            assert(s->reply.handle == request->handle);
>>>>            ret = -s->reply.error;
>>>> -        if (qiov && s->reply.error == 0) {
>>>> -            assert(request->len == iov_size(qiov->iov, qiov->niov));
>>>> -            if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
>>>> -                                      NULL) < 0) {
>>>> -                ret = -EIO;
>>>> -                s->quit = true;
>>>> -            }
>>>> -        }
>>>>              /* Tell the read handler to read another header.  */
>>>>            s->reply.handle = 0;
>>>> @@ -232,16 +240,14 @@ static int nbd_co_request(BlockDriverState *bs,
>>>>        NBDClientSession *client = nbd_get_client_session(bs);
>>>>        int ret;
>>>>    -    assert(!qiov || request->type == NBD_CMD_WRITE ||
>>>> -           request->type == NBD_CMD_READ);
>>>> -    ret = nbd_co_send_request(bs, request,
>>>> -                              request->type == NBD_CMD_WRITE ? qiov
>>>> : NULL);
>>>> +    assert((qiov == NULL) !=
>>>> +           (request->type == NBD_CMD_WRITE || request->type ==
>>>> NBD_CMD_READ));
>>>> +    ret = nbd_co_send_request(bs, request, qiov);
>>>>        if (ret < 0) {
>>>>            return ret;
>>>>        }
>>>>    -    return nbd_co_receive_reply(client, request,
>>>> -                                request->type == NBD_CMD_READ ? qiov
>>>> : NULL);
>>>> +    return nbd_co_receive_reply(client, request);
>>>>    }
>>>>      int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
>>>>

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set
  2017-09-19 10:03       ` Paolo Bonzini
@ 2017-09-19 11:07         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-19 11:07 UTC (permalink / raw)
  To: Paolo Bonzini, Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, den

19.09.2017 13:03, Paolo Bonzini wrote:
> On 19/09/2017 11:43, Vladimir Sementsov-Ogievskiy wrote:
>>> I'm trying to look forward to structured reads, where we will have to
>>> deal with more than one server message in reply to a single client
>>> request.  For read, we just piece together portions of the qiov until
>>> the server has sent us all the pieces.  But for block query, I really do
>>> think we'll want to defer to specialized handlers for doing further
>>> reads (the amount of data to be read from the server is not known by the
>>> client a priori, so it is a two-part read, one to get the length, and
>>> another to read that remaining length); if we need some sort of callback
>>> function rather than cramming all the logic here in the main read loop,
>> by patch 3 I do not mean in any way that I want to have all reading
>> staff in one function, ofcourse it should be refactored
>> for block-status addition. I just want to have all reading staff in one
>> coroutine. Reading from ioc is sequential, so it's
>> native to do it sequentially in one coroutine, without switches.
> But why is that a goal?  See it as a state machine that goes between the
> "waiting for header" and "waiting for payload" states.  Reading header
> corresponds to read_reply_co waiting on the socket (and reading when
> it's ready); reading payload corresponds to the request coroutine
> waiting on the socket and reading when it's ready.
>
> Paolo


I just dislike public access to ioc for commands and extra coroutine 
switching and synchronization.



-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply
  2017-09-19 11:03         ` Vladimir Sementsov-Ogievskiy
@ 2017-09-19 12:50           ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2017-09-19 12:50 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, eblake, den

On 19/09/2017 13:03, Vladimir Sementsov-Ogievskiy wrote:
>>>
>> I disagree that it is easier to extend it in the future.  If some
>> commands in the future need a different "how do we read it" (e.g. for
>> structured reply), nbd_read_reply_entry may not have all the information
>> it needs anymore.
> 
> how is it possible? all information is in requests[i].

If you are okay with violating information hiding, then it is.

>>
>> In fact, you're pushing knowledge of the commands from nbd_co_request to
>> nbd_read_reply_entry:
>>
>> +        if (s->reply.error == 0 &&
>> +            s->requests[i].request->type == NBD_CMD_READ)
>>
>> and knowledge of NBD_CMD_READ simply doesn't belong there.  So there is
>> an example of that right now, it is not some arbitrary bad thing that
>> could happen in the future.
> 
> I can't agree that my point of view is wrong, it's just different.
> You want commands, reading from socket, each knows what it should read.
> I want the receiver - one sequential reader, that can read all kinds of
> replies and just wake requesting coroutines when all reading is done.
> For me it looks simpler than switching.

It may look simpler, but I think it goes against the principle of
coroutines which is to have related code in the same function.  Here you
have the command function that takes care of sending the request payload
but not receiving the reply payload (except that for commands that
aren't as simple as read or write, it will have to _process_ the reply
payload).  What to do with the reply payload is in another function that
has to peek at the command in order to find out what to do.  It doesn't
seem like a better design, honestly.

Paolo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry
  2017-09-19 10:00     ` Vladimir Sementsov-Ogievskiy
@ 2017-09-19 13:45       ` Eric Blake
  0 siblings, 0 replies; 29+ messages in thread
From: Eric Blake @ 2017-09-19 13:45 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel
  Cc: mreitz, kwolf, pbonzini, den

[-- Attachment #1: Type: text/plain, Size: 1236 bytes --]

On 09/19/2017 05:00 AM, Vladimir Sementsov-Ogievskiy wrote:
> 19.09.2017 01:36, Eric Blake wrote:
>> On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> Refactor nbd client to not yield from nbd_read_reply_entry. It's
>>> possible now as all reading is done in nbd_read_reply_entry and
>>> all request-related data is stored in per-request s->requests[i].
>>>
>>> We need here some additional work with s->requests[i].ret and
>>> s->quit to not fail requests on normal disconnet and to not report
>>> reading errors on normal disconnect.
>>>

>>> @@ -364,6 +353,8 @@ void nbd_client_close(BlockDriverState *bs)
>>>         nbd_send_request(client->ioc, &request);
>>>   +    client->quit = true;
>> Previously, client->quit was only set when detecting a broken server,
>> now it is also set for a clean exit.  Do we need to change any
>> documentation of the field?
> 
> It has documentation?

Touche.  But it probably should, especially if we are changing its
semantics, to make it easier to understand from looking at the struct
what semantics to expect.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 619 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel
  2017-09-18 16:01   ` Eric Blake
  2017-09-18 19:16     ` Eric Blake
@ 2017-09-20 12:03     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-09-20 12:03 UTC (permalink / raw)
  To: Eric Blake, qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, den

18.09.2017 19:01, Eric Blake wrote:
> On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
>> It's incorrect to return success rc >= 0 if we skip qio_channel_writev_all()
>> call due to s->quit.
> Does this need to cc: qemu-stable for 2.10.1 (or put another way, can we
> come up with some scenario of EAGAIN or other handling that would
> actually set s->quit in a parallel coroutine when a client sends out
> multiple requests at once)?

Intuitively, I think it should not be possible, first nbd_send_request 
should not do
something serious (should not yield) after set_cork..

>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd-client.c | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/block/nbd-client.c b/block/nbd-client.c
>> index f331f08a9a..280147e6a7 100644
>> --- a/block/nbd-client.c
>> +++ b/block/nbd-client.c
>> @@ -189,6 +189,9 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>       }
>>   
>>   err:
>> +    if (rc >= 0 && s->quit) {
>> +        rc = -EIO;
>> +    }
> I'm still not convinced this is in the right place.  This fails the
> send_request regardless of whether we skipped qio_channel_writev_all();
> shouldn't the rc be set only in the case that we actually skipped
> writing the full command because s->quit was detected at that point in time?
>
>>       if (rc < 0) {
>>           s->quit = true;
>>           s->requests[i].coroutine = NULL;
>>

-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply
  2017-10-09 17:27 [Qemu-devel] [PATCH v2 00/10] nbd minimal structured read Vladimir Sementsov-Ogievskiy
@ 2017-10-09 17:27 ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 29+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-10-09 17:27 UTC (permalink / raw)
  To: qemu-block, qemu-devel; +Cc: mreitz, kwolf, pbonzini, eblake, vsementsov, den

"NBDReply *reply" parameter of nbd_co_receive_reply is used only
to pass return value for nbd_co_request (reply.error). Remove it
and use function return value instead.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd-client.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/block/nbd-client.c b/block/nbd-client.c
index ee7f758e68..acd8e5d007 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -178,26 +178,26 @@ err:
     return rc;
 }
 
-static void nbd_co_receive_reply(NBDClientSession *s,
-                                 NBDRequest *request,
-                                 NBDReply *reply,
-                                 QEMUIOVector *qiov)
+static int nbd_co_receive_reply(NBDClientSession *s,
+                                NBDRequest *request,
+                                QEMUIOVector *qiov)
 {
+    int ret;
     int i = HANDLE_TO_INDEX(s, request->handle);
 
     /* Wait until we're woken up by nbd_read_reply_entry.  */
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
-    *reply = s->reply;
-    if (reply->handle != request->handle || !s->ioc || s->quit) {
-        reply->error = EIO;
+    if (s->reply.handle != request->handle || !s->ioc || s->quit) {
+        ret = -EIO;
     } else {
-        if (qiov && reply->error == 0) {
+        ret = -s->reply.error;
+        if (qiov && s->reply.error == 0) {
             assert(request->len == iov_size(qiov->iov, qiov->niov));
             if (qio_channel_readv_all(s->ioc, qiov->iov, qiov->niov,
                                       NULL) < 0) {
-                reply->error = EIO;
+                ret = -EIO;
                 s->quit = true;
             }
         }
@@ -217,6 +217,8 @@ static void nbd_co_receive_reply(NBDClientSession *s,
     s->in_flight--;
     qemu_co_queue_next(&s->free_sema);
     qemu_co_mutex_unlock(&s->send_mutex);
+
+    return ret;
 }
 
 static int nbd_co_request(BlockDriverState *bs,
@@ -224,7 +226,6 @@ static int nbd_co_request(BlockDriverState *bs,
                           QEMUIOVector *qiov)
 {
     NBDClientSession *client = nbd_get_client_session(bs);
-    NBDReply reply;
     int ret;
 
     assert(!qiov || request->type == NBD_CMD_WRITE ||
@@ -232,12 +233,11 @@ static int nbd_co_request(BlockDriverState *bs,
     ret = nbd_co_send_request(bs, request,
                               request->type == NBD_CMD_WRITE ? qiov : NULL);
     if (ret < 0) {
-        reply.error = -ret;
-    } else {
-        nbd_co_receive_reply(client, request, &reply,
-                             request->type == NBD_CMD_READ ? qiov : NULL);
+        return ret;
     }
-    return -reply.error;
+
+    return nbd_co_receive_reply(client, request,
+                                request->type == NBD_CMD_READ ? qiov : NULL);
 }
 
 int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
-- 
2.11.1

^ permalink raw reply related	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2017-10-09 17:27 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-18 13:59 [Qemu-devel] [PATCH v2 0/7] nbd client refactoring and fixing Vladimir Sementsov-Ogievskiy
2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply Vladimir Sementsov-Ogievskiy
2017-09-18 15:38   ` Eric Blake
2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 2/7] block/nbd-client: exit reply-reading coroutine on incorrect handle Vladimir Sementsov-Ogievskiy
2017-09-18 15:45   ` Eric Blake
2017-09-18 19:50   ` Eric Blake
2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 3/7] block/nbd-client: refactor reading reply Vladimir Sementsov-Ogievskiy
2017-09-18 15:43   ` Paolo Bonzini
2017-09-18 15:54     ` Eric Blake
2017-09-19  9:25     ` Vladimir Sementsov-Ogievskiy
2017-09-19 10:01       ` Paolo Bonzini
2017-09-19 11:03         ` Vladimir Sementsov-Ogievskiy
2017-09-19 12:50           ` Paolo Bonzini
2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 4/7] block/nbd-client: drop reply field from NBDClientSession Vladimir Sementsov-Ogievskiy
2017-09-18 22:00   ` Eric Blake
2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 5/7] block/nbd-client: nbd_co_send_request: return -EIO if s->quit was set in parallel Vladimir Sementsov-Ogievskiy
2017-09-18 16:01   ` Eric Blake
2017-09-18 19:16     ` Eric Blake
2017-09-20 12:03     ` Vladimir Sementsov-Ogievskiy
2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 6/7] block/nbd-client: early fail nbd_read_reply_entry if s->quit is set Vladimir Sementsov-Ogievskiy
2017-09-18 22:27   ` Eric Blake
2017-09-19  9:43     ` Vladimir Sementsov-Ogievskiy
2017-09-19 10:03       ` Paolo Bonzini
2017-09-19 11:07         ` Vladimir Sementsov-Ogievskiy
2017-09-18 13:59 ` [Qemu-devel] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry Vladimir Sementsov-Ogievskiy
2017-09-18 22:36   ` Eric Blake
2017-09-19 10:00     ` Vladimir Sementsov-Ogievskiy
2017-09-19 13:45       ` Eric Blake
2017-10-09 17:27 [Qemu-devel] [PATCH v2 00/10] nbd minimal structured read Vladimir Sementsov-Ogievskiy
2017-10-09 17:27 ` [Qemu-devel] [PATCH v2 1/7] block/nbd-client: refactor nbd_co_receive_reply Vladimir Sementsov-Ogievskiy

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.