* [PATCH v3 00/33] block/nbd: rework client connection
@ 2021-04-16  8:08 Vladimir Sementsov-Ogievskiy
  2021-04-16  8:08 ` [PATCH v3 01/33] block/nbd: fix channel object leak Vladimir Sementsov-Ogievskiy
                   ` (33 more replies)
  0 siblings, 34 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

This series supersedes "[PATCH v2 00/10] block/nbd: move connection code to separate file"
Supersedes: <20210408140827.332915-1-vsementsov@virtuozzo.com>
which is why it is labelled v3.

block/nbd.c is overcomplicated. This series is a big refactoring that
finally drops all the complications around drained sections and context
switching, including the abuse of the bs->in_flight counter.

Also, by the end of the series we no longer cancel reconnect on drained
sections (and no longer cancel requests waiting for reconnect when a
drained section begins), which fixes a problem reported by Roman.

The series is also available at the tag up-nbd-client-connection-v3 in
git https://src.openvz.org/scm/~vsementsov/qemu.git

v3:
Changes in the first part of the series (the main change is that we don't use a refcnt; instead, Roman's (modified) patch is used):

01-04: new
05: add Roman's r-b
06: new
07: now the new aio_co_schedule(NULL, thr->wait_co) is used
08: reworked; we now also need a bool detached, as we don't have a refcnt
09,10: add Roman's r-b
11: rebased; don't rename nbd_free_connect_thread() at this point
12: add Roman's r-b
13: new
14: rebased

Other patches are new.

Roman Kagan (2):
  block/nbd: fix channel object leak
  block/nbd: ensure ->connection_thread is always valid

Vladimir Sementsov-Ogievskiy (31):
  block/nbd: fix how state is cleared on nbd_open() failure paths
  block/nbd: nbd_client_handshake(): fix leak of s->ioc
  block/nbd: BDRVNBDState: drop unused connect_err and connect_status
  util/async: aio_co_schedule(): support reschedule in same ctx
  block/nbd: simplify waking of nbd_co_establish_connection()
  block/nbd: drop thr->state
  block/nbd: bs-independent interface for nbd_co_establish_connection()
  block/nbd: make nbd_co_establish_connection_cancel() bs-independent
  block/nbd: rename NBDConnectThread to NBDClientConnection
  block/nbd: introduce nbd_client_connection_new()
  block/nbd: introduce nbd_client_connection_release()
  nbd: move connection code from block/nbd to nbd/client-connection
  nbd/client-connection: use QEMU_LOCK_GUARD
  nbd/client-connection: add possibility of negotiation
  nbd/client-connection: implement connection retry
  nbd/client-connection: shutdown connection on release
  block/nbd: split nbd_handle_updated_info out of nbd_client_handshake()
  block/nbd: use negotiation of NBDClientConnection
  qemu-socket: pass monitor link to socket_get_fd directly
  block/nbd: pass monitor directly to connection thread
  block/nbd: nbd_teardown_connection() don't touch s->sioc
  block/nbd: drop BDRVNBDState::sioc
  nbd/client-connection: return only one io channel
  block-coroutine-wrapper: allow non bdrv_ prefix
  block/nbd: split nbd_co_do_establish_connection out of
    nbd_reconnect_attempt
  nbd/client-connection: do qio_channel_set_delay(false)
  nbd/client-connection: add option for non-blocking connection attempt
  block/nbd: reuse nbd_co_do_establish_connection() in nbd_open()
  block/nbd: add nbd_client_connected() helper
  block/nbd: safer transition to receiving request
  block/nbd: drop connection_co

 block/coroutines.h                 |   6 +
 include/block/aio.h                |   2 +-
 include/block/nbd.h                |  19 +
 include/io/channel-socket.h        |  20 +
 include/qemu/sockets.h             |   2 +-
 block/nbd.c                        | 908 +++++++----------------------
 io/channel-socket.c                |  17 +-
 nbd/client-connection.c            | 364 ++++++++++++
 nbd/client.c                       |   2 -
 tests/unit/test-util-sockets.c     |  16 +-
 util/async.c                       |   8 +
 util/qemu-sockets.c                |  10 +-
 nbd/meson.build                    |   1 +
 scripts/block-coroutine-wrapper.py |   7 +-
 14 files changed, 666 insertions(+), 716 deletions(-)
 create mode 100644 nbd/client-connection.c

-- 
2.29.2




* [PATCH v3 01/33] block/nbd: fix channel object leak
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-05-24 21:31   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths Vladimir Sementsov-Ogievskiy
                   ` (32 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

From: Roman Kagan <rvkagan@yandex-team.ru>

nbd_free_connect_thread() leaks the channel object if it hasn't been
stolen by the requester.

Unref it to fix the leak.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
 block/nbd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/nbd.c b/block/nbd.c
index 1d4668d42d..739ae2941f 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -385,6 +385,7 @@ static void nbd_free_connect_thread(NBDConnectThread *thr)
 {
     if (thr->sioc) {
         qio_channel_close(QIO_CHANNEL(thr->sioc), NULL);
+        object_unref(OBJECT(thr->sioc));
     }
     error_free(thr->err);
     qapi_free_SocketAddress(thr->saddr);
-- 
2.29.2




* [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
  2021-04-16  8:08 ` [PATCH v3 01/33] block/nbd: fix channel object leak Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-21 14:00   ` Roman Kagan
  2021-06-01 21:39   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 03/33] block/nbd: ensure ->connection_thread is always valid Vladimir Sementsov-Ogievskiy
                   ` (31 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

We have two "return error" paths in nbd_open() after
nbd_process_options() that fail to call nbd_clear_bdrvstate().
Interestingly, nbd_process_options() calls nbd_clear_bdrvstate() by
itself.

Let's fix the leaks and refactor things to be more obvious:

- initialize yank at the top of nbd_open()
- move yank cleanup into nbd_clear_bdrvstate()
- refactor nbd_open() so that all failure paths except for
  yank-register go through nbd_clear_bdrvstate() (sketched below)
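
For orientation, here is a condensed sketch of the shape nbd_open()
takes after this patch, drawn from the diff below (details elided; the
yank-register condition is abbreviated from the surrounding code):

    static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
                        Error **errp)
    {
        int ret;
        BDRVNBDState *s = (BDRVNBDState *)bs->opaque;

        /* ... init of s ... */

        if (!yank_register_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name),
                                    errp)) {
            return -EEXIST;   /* only failure that skips nbd_clear_bdrvstate() */
        }

        ret = nbd_process_options(bs, options, errp);
        if (ret < 0) {
            goto fail;
        }

        if (nbd_establish_connection(bs, s->saddr, errp) < 0) {
            ret = -ECONNREFUSED;
            goto fail;
        }

        ret = nbd_client_handshake(bs, errp);
        if (ret < 0) {
            goto fail;
        }

        /* ... successfully connected ... */
        return 0;

    fail:
        nbd_clear_bdrvstate(bs);   /* single cleanup point; also unregisters yank */
        return ret;
    }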

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 739ae2941f..a407a3814b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -152,8 +152,12 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
 static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
 static void nbd_yank(void *opaque);
 
-static void nbd_clear_bdrvstate(BDRVNBDState *s)
+static void nbd_clear_bdrvstate(BlockDriverState *bs)
 {
+    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
+
+    yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
+
     object_unref(OBJECT(s->tlscreds));
     qapi_free_SocketAddress(s->saddr);
     s->saddr = NULL;
@@ -2279,9 +2283,6 @@ static int nbd_process_options(BlockDriverState *bs, QDict *options,
     ret = 0;
 
  error:
-    if (ret < 0) {
-        nbd_clear_bdrvstate(s);
-    }
     qemu_opts_del(opts);
     return ret;
 }
@@ -2292,11 +2293,6 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
     int ret;
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
 
-    ret = nbd_process_options(bs, options, errp);
-    if (ret < 0) {
-        return ret;
-    }
-
     s->bs = bs;
     qemu_co_mutex_init(&s->send_mutex);
     qemu_co_queue_init(&s->free_sema);
@@ -2305,20 +2301,23 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
         return -EEXIST;
     }
 
+    ret = nbd_process_options(bs, options, errp);
+    if (ret < 0) {
+        goto fail;
+    }
+
     /*
      * establish TCP connection, return error if it fails
      * TODO: Configurable retry-until-timeout behaviour.
      */
     if (nbd_establish_connection(bs, s->saddr, errp) < 0) {
-        yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
-        return -ECONNREFUSED;
+        ret = -ECONNREFUSED;
+        goto fail;
     }
 
     ret = nbd_client_handshake(bs, errp);
     if (ret < 0) {
-        yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
-        nbd_clear_bdrvstate(s);
-        return ret;
+        goto fail;
     }
     /* successfully connected */
     s->state = NBD_CLIENT_CONNECTED;
@@ -2330,6 +2329,10 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
     aio_co_schedule(bdrv_get_aio_context(bs), s->connection_co);
 
     return 0;
+
+fail:
+    nbd_clear_bdrvstate(bs);
+    return ret;
 }
 
 static int nbd_co_flush(BlockDriverState *bs)
@@ -2373,11 +2376,8 @@ static void nbd_refresh_limits(BlockDriverState *bs, Error **errp)
 
 static void nbd_close(BlockDriverState *bs)
 {
-    BDRVNBDState *s = bs->opaque;
-
     nbd_client_close(bs);
-    yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
-    nbd_clear_bdrvstate(s);
+    nbd_clear_bdrvstate(bs);
 }
 
 /*
-- 
2.29.2




* [PATCH v3 03/33] block/nbd: ensure ->connection_thread is always valid
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
  2021-04-16  8:08 ` [PATCH v3 01/33] block/nbd: fix channel object leak Vladimir Sementsov-Ogievskiy
  2021-04-16  8:08 ` [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-06-01 21:41   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 04/33] block/nbd: nbd_client_handshake(): fix leak of s->ioc Vladimir Sementsov-Ogievskiy
                   ` (30 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

From: Roman Kagan <rvkagan@yandex-team.ru>

Simplify the lifetime management of BDRVNBDState->connect_thread by
delaying its possible cleanup until the BDRVNBDState itself goes away
(a condensed sketch of the resulting close path follows).

This also reverts
 0267101af6 "block/nbd: fix possible use after free of s->connect_thread"
as now s->connect_thread can't be cleared until the very end.
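
The close path this introduces, taken from the diff below: the state is
freed on close only if the thread is not running; otherwise it is marked
detached and the thread frees it itself on exit:

    qemu_mutex_lock(&thr->mutex);
    thr_running = thr->state == CONNECT_THREAD_RUNNING;
    if (thr_running) {
        thr->state = CONNECT_THREAD_RUNNING_DETACHED;
    }
    qemu_mutex_unlock(&thr->mutex);

    /* the runaway thread will clean it up itself */
    if (!thr_running) {
        nbd_free_connect_thread(thr);
    }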

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
 [vsementsov: rebase, revert 0267101af6 changes]
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 56 ++++++++++++++++++++---------------------------------
 1 file changed, 21 insertions(+), 35 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index a407a3814b..272af60b44 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -144,17 +144,31 @@ typedef struct BDRVNBDState {
     NBDConnectThread *connect_thread;
 } BDRVNBDState;
 
+static void nbd_free_connect_thread(NBDConnectThread *thr);
 static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
                                     Error **errp);
 static int nbd_co_establish_connection(BlockDriverState *bs, Error **errp);
-static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
-                                               bool detach);
+static void nbd_co_establish_connection_cancel(BlockDriverState *bs);
 static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
 static void nbd_yank(void *opaque);
 
 static void nbd_clear_bdrvstate(BlockDriverState *bs)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
+    NBDConnectThread *thr = s->connect_thread;
+    bool thr_running;
+
+    qemu_mutex_lock(&thr->mutex);
+    thr_running = thr->state == CONNECT_THREAD_RUNNING;
+    if (thr_running) {
+        thr->state = CONNECT_THREAD_RUNNING_DETACHED;
+    }
+    qemu_mutex_unlock(&thr->mutex);
+
+    /* the runaway thread will clean it up itself */
+    if (!thr_running) {
+        nbd_free_connect_thread(thr);
+    }
 
     yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
 
@@ -297,7 +311,7 @@ static void coroutine_fn nbd_client_co_drain_begin(BlockDriverState *bs)
         qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
     }
 
-    nbd_co_establish_connection_cancel(bs, false);
+    nbd_co_establish_connection_cancel(bs);
 
     reconnect_delay_timer_del(s);
 
@@ -337,7 +351,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
         if (s->connection_co_sleep_ns_state) {
             qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
         }
-        nbd_co_establish_connection_cancel(bs, true);
+        nbd_co_establish_connection_cancel(bs);
     }
     if (qemu_in_coroutine()) {
         s->teardown_co = qemu_coroutine_self();
@@ -448,11 +462,6 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
     BDRVNBDState *s = bs->opaque;
     NBDConnectThread *thr = s->connect_thread;
 
-    if (!thr) {
-        /* detached */
-        return -1;
-    }
-
     qemu_mutex_lock(&thr->mutex);
 
     switch (thr->state) {
@@ -496,12 +505,6 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
     s->wait_connect = true;
     qemu_coroutine_yield();
 
-    if (!s->connect_thread) {
-        /* detached */
-        return -1;
-    }
-    assert(thr == s->connect_thread);
-
     qemu_mutex_lock(&thr->mutex);
 
     switch (thr->state) {
@@ -549,18 +552,12 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
  * nbd_co_establish_connection_cancel
  * Cancel nbd_co_establish_connection asynchronously: it will finish soon, to
  * allow drained section to begin.
- *
- * If detach is true, also cleanup the state (or if thread is running, move it
- * to CONNECT_THREAD_RUNNING_DETACHED state). s->connect_thread becomes NULL if
- * detach is true.
  */
-static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
-                                               bool detach)
+static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
 {
     BDRVNBDState *s = bs->opaque;
     NBDConnectThread *thr = s->connect_thread;
     bool wake = false;
-    bool do_free = false;
 
     qemu_mutex_lock(&thr->mutex);
 
@@ -571,21 +568,10 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
             s->wait_connect = false;
             wake = true;
         }
-        if (detach) {
-            thr->state = CONNECT_THREAD_RUNNING_DETACHED;
-            s->connect_thread = NULL;
-        }
-    } else if (detach) {
-        do_free = true;
     }
 
     qemu_mutex_unlock(&thr->mutex);
 
-    if (do_free) {
-        nbd_free_connect_thread(thr);
-        s->connect_thread = NULL;
-    }
-
     if (wake) {
         aio_co_wake(s->connection_co);
     }
@@ -2306,6 +2292,8 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
         goto fail;
     }
 
+    nbd_init_connect_thread(s);
+
     /*
      * establish TCP connection, return error if it fails
      * TODO: Configurable retry-until-timeout behaviour.
@@ -2322,8 +2310,6 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
     /* successfully connected */
     s->state = NBD_CLIENT_CONNECTED;
 
-    nbd_init_connect_thread(s);
-
     s->connection_co = qemu_coroutine_create(nbd_connection_entry, s);
     bdrv_inc_in_flight(bs);
     aio_co_schedule(bdrv_get_aio_context(bs), s->connection_co);
-- 
2.29.2




* [PATCH v3 04/33] block/nbd: nbd_client_handshake(): fix leak of s->ioc
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (2 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 03/33] block/nbd: ensure ->connection_thread is always valid Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-22  8:59   ` Roman Kagan
  2021-04-16  8:08 ` [PATCH v3 05/33] block/nbd: BDRVNBDState: drop unused connect_err and connect_status Vladimir Sementsov-Ogievskiy
                   ` (29 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/nbd.c b/block/nbd.c
index 272af60b44..6efa11a185 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -1891,6 +1891,8 @@ static int nbd_client_handshake(BlockDriverState *bs, Error **errp)
                                  nbd_yank, bs);
         object_unref(OBJECT(s->sioc));
         s->sioc = NULL;
+        object_unref(OBJECT(s->ioc));
+        s->ioc = NULL;
 
         return ret;
     }
-- 
2.29.2




* [PATCH v3 05/33] block/nbd: BDRVNBDState: drop unused connect_err and connect_status
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (3 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 04/33] block/nbd: nbd_client_handshake(): fix leak of s->ioc Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-06-01 21:43   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx Vladimir Sementsov-Ogievskiy
                   ` (28 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

These fields are write-only. Drop them.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
---
 block/nbd.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 6efa11a185..d67556c7ee 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -121,8 +121,6 @@ typedef struct BDRVNBDState {
     bool wait_drained_end;
     int in_flight;
     NBDClientState state;
-    int connect_status;
-    Error *connect_err;
     bool wait_in_flight;
 
     QEMUTimer *reconnect_delay_timer;
@@ -580,7 +578,6 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
     int ret;
-    Error *local_err = NULL;
 
     if (!nbd_client_connecting(s)) {
         return;
@@ -621,14 +618,14 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         s->ioc = NULL;
     }
 
-    if (nbd_co_establish_connection(s->bs, &local_err) < 0) {
+    if (nbd_co_establish_connection(s->bs, NULL) < 0) {
         ret = -ECONNREFUSED;
         goto out;
     }
 
     bdrv_dec_in_flight(s->bs);
 
-    ret = nbd_client_handshake(s->bs, &local_err);
+    ret = nbd_client_handshake(s->bs, NULL);
 
     if (s->drained) {
         s->wait_drained_end = true;
@@ -643,11 +640,6 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
     bdrv_inc_in_flight(s->bs);
 
 out:
-    s->connect_status = ret;
-    error_free(s->connect_err);
-    s->connect_err = NULL;
-    error_propagate(&s->connect_err, local_err);
-
     if (ret >= 0) {
         /* successfully connected */
         s->state = NBD_CLIENT_CONNECTED;
-- 
2.29.2




* [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (4 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 05/33] block/nbd: BDRVNBDState: drop unused connect_err and connect_status Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-23 10:09   ` Roman Kagan
  2021-05-12  6:56   ` Paolo Bonzini
  2021-04-16  8:08 ` [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
                   ` (27 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den,
	Stefan Hajnoczi, Fam Zheng

With the following patch we want to wake a coroutine from a thread, and
that doesn't work with aio_co_wake():

Assume there are no iothreads.
Assume coroutine A waits at a yield point for an external
aio_co_wake(), and no progress can be made until that happens.
The main thread sits in a blocking aio_poll() (for example, in
blk_read()).

Now, in a separate thread, we call aio_co_wake(). It calls
aio_co_enter(), which takes the last "else" branch and calls
aio_context_acquire(ctx).

Now we have a deadlock: aio_poll() will not release the context lock
until some progress is made, and no progress can be made until
aio_co_wake() wakes coroutine A. And it can't, because it is itself
waiting in aio_context_acquire().

Still, aio_co_schedule() works fine in parallel with a blocking
aio_poll(), so let's use it. Add the possibility of rescheduling a
coroutine in the same ctx where it yielded.

Fetch co->ctx the same way it is done in aio_co_wake().
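
As a sketch of the intended use (this mirrors the waker side added by a
following patch): a thread can now wake a coroutine that yielded in the
main thread without acquiring the AioContext lock:

    /* In a separate thread, under a mutex shared with the coroutine side: */
    qemu_mutex_lock(&thr->mutex);
    if (thr->wait_co) {
        /* NULL ctx: reschedule the coroutine in the ctx it yielded in */
        aio_co_schedule(NULL, thr->wait_co);
        thr->wait_co = NULL;
    }
    qemu_mutex_unlock(&thr->mutex);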

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/aio.h | 2 +-
 util/async.c        | 8 ++++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index 5f342267d5..744b695525 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -643,7 +643,7 @@ static inline bool aio_node_check(AioContext *ctx, bool is_external)
 
 /**
  * aio_co_schedule:
- * @ctx: the aio context
+ * @ctx: the aio context, if NULL, the current ctx of @co will be used.
  * @co: the coroutine
  *
  * Start a coroutine on a remote AioContext.
diff --git a/util/async.c b/util/async.c
index 674dbefb7c..750be555c6 100644
--- a/util/async.c
+++ b/util/async.c
@@ -545,6 +545,14 @@ fail:
 
 void aio_co_schedule(AioContext *ctx, Coroutine *co)
 {
+    if (!ctx) {
+        /*
+         * Read coroutine before co->ctx.  Matches smp_wmb in
+         * qemu_coroutine_enter.
+         */
+        smp_read_barrier_depends();
+        ctx = qatomic_read(&co->ctx);
+    }
     trace_aio_co_schedule(ctx, co);
     const char *scheduled = qatomic_cmpxchg(&co->scheduled, NULL,
                                            __func__);
-- 
2.29.2




* [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection()
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (5 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-27 21:44   ` Roman Kagan
  2021-06-02 19:05   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 08/33] block/nbd: drop thr->state Vladimir Sementsov-Ogievskiy
                   ` (26 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Instead of the connect_bh, bh_ctx and wait_connect fields we can live
with a single link to the waiting coroutine, protected by the mutex.

So the new logic is:

nbd_co_establish_connection() sets wait_co under the mutex, releases
the mutex and yields. Note that wait_co may be scheduled by the thread
immediately after the mutex is unlocked. Still, the main thread (or
iothread) won't reach the code for entering the coroutine until the
yield(), so we are safe.

Both connect_thread_func() and nbd_co_establish_connection_cancel()
handle wait_co as follows:

Under the mutex, if thr->wait_co is not NULL, call
aio_co_schedule(NULL, thr->wait_co) (which never tries to acquire the
aio context since the previous commit, so we are safe to do it under
thr->mutex) and set thr->wait_co to NULL.
This way we protect ourselves from scheduling it twice; a sketch of
both sides follows.
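
A compact sketch of the two sides (condensed from the diff below):

    /* waiter: nbd_co_establish_connection() */
    qemu_mutex_lock(&thr->mutex);
    /* ... start the connect thread if it is not running ... */
    thr->wait_co = qemu_coroutine_self();
    qemu_mutex_unlock(&thr->mutex);
    qemu_coroutine_yield();   /* woken exactly once: by the thread or by cancel */

    /* waker: connect_thread_func() and nbd_co_establish_connection_cancel() */
    qemu_mutex_lock(&thr->mutex);
    if (thr->wait_co) {
        aio_co_schedule(NULL, thr->wait_co);
        thr->wait_co = NULL;   /* guarantees we never schedule it twice */
    }
    qemu_mutex_unlock(&thr->mutex);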

Also, this commit makes nbd_co_establish_connection() less tied to bs
(we keep a generic pointer to the coroutine rather than using
s->connection_co directly). So we are on the way to splitting the
connection API out of nbd.c (which is overcomplicated now).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 49 +++++++++----------------------------------------
 1 file changed, 9 insertions(+), 40 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index d67556c7ee..e1f39eda6c 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -87,12 +87,6 @@ typedef enum NBDConnectThreadState {
 typedef struct NBDConnectThread {
     /* Initialization constants */
     SocketAddress *saddr; /* address to connect to */
-    /*
-     * Bottom half to schedule on completion. Scheduled only if bh_ctx is not
-     * NULL
-     */
-    QEMUBHFunc *bh_func;
-    void *bh_opaque;
 
     /*
      * Result of last attempt. Valid in FAIL and SUCCESS states.
@@ -101,10 +95,10 @@ typedef struct NBDConnectThread {
     QIOChannelSocket *sioc;
     Error *err;
 
-    /* state and bh_ctx are protected by mutex */
     QemuMutex mutex;
+    /* All further fields are protected by mutex */
     NBDConnectThreadState state; /* current state of the thread */
-    AioContext *bh_ctx; /* where to schedule bh (NULL means don't schedule) */
+    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
 } NBDConnectThread;
 
 typedef struct BDRVNBDState {
@@ -138,7 +132,6 @@ typedef struct BDRVNBDState {
     char *x_dirty_bitmap;
     bool alloc_depth;
 
-    bool wait_connect;
     NBDConnectThread *connect_thread;
 } BDRVNBDState;
 
@@ -374,15 +367,6 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
     return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
 }
 
-static void connect_bh(void *opaque)
-{
-    BDRVNBDState *state = opaque;
-
-    assert(state->wait_connect);
-    state->wait_connect = false;
-    aio_co_wake(state->connection_co);
-}
-
 static void nbd_init_connect_thread(BDRVNBDState *s)
 {
     s->connect_thread = g_new(NBDConnectThread, 1);
@@ -390,8 +374,6 @@ static void nbd_init_connect_thread(BDRVNBDState *s)
     *s->connect_thread = (NBDConnectThread) {
         .saddr = QAPI_CLONE(SocketAddress, s->saddr),
         .state = CONNECT_THREAD_NONE,
-        .bh_func = connect_bh,
-        .bh_opaque = s,
     };
 
     qemu_mutex_init(&s->connect_thread->mutex);
@@ -429,11 +411,9 @@ static void *connect_thread_func(void *opaque)
     switch (thr->state) {
     case CONNECT_THREAD_RUNNING:
         thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
-        if (thr->bh_ctx) {
-            aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);
-
-            /* play safe, don't reuse bh_ctx on further connection attempts */
-            thr->bh_ctx = NULL;
+        if (thr->wait_co) {
+            aio_co_schedule(NULL, thr->wait_co);
+            thr->wait_co = NULL;
         }
         break;
     case CONNECT_THREAD_RUNNING_DETACHED:
@@ -487,20 +467,14 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
         abort();
     }
 
-    thr->bh_ctx = qemu_get_current_aio_context();
+    thr->wait_co = qemu_coroutine_self();
 
     qemu_mutex_unlock(&thr->mutex);
 
-
     /*
      * We are going to wait for connect-thread finish, but
      * nbd_client_co_drain_begin() can interrupt.
-     *
-     * Note that wait_connect variable is not visible for connect-thread. It
-     * doesn't need mutex protection, it used only inside home aio context of
-     * bs.
      */
-    s->wait_connect = true;
     qemu_coroutine_yield();
 
     qemu_mutex_lock(&thr->mutex);
@@ -555,24 +529,19 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
 {
     BDRVNBDState *s = bs->opaque;
     NBDConnectThread *thr = s->connect_thread;
-    bool wake = false;
 
     qemu_mutex_lock(&thr->mutex);
 
     if (thr->state == CONNECT_THREAD_RUNNING) {
         /* We can cancel only in running state, when bh is not yet scheduled */
-        thr->bh_ctx = NULL;
-        if (s->wait_connect) {
-            s->wait_connect = false;
-            wake = true;
+        if (thr->wait_co) {
+            aio_co_schedule(NULL, thr->wait_co);
+            thr->wait_co = NULL;
         }
     }
 
     qemu_mutex_unlock(&thr->mutex);
 
-    if (wake) {
-        aio_co_wake(s->connection_co);
-    }
 }
 
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
-- 
2.29.2




* [PATCH v3 08/33] block/nbd: drop thr->state
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (6 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-27 22:23   ` Roman Kagan
  2021-04-16  8:08 ` [PATCH v3 09/33] block/nbd: bs-independent interface for nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
                   ` (25 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

We don't need all these states. The code, refactored to use two boolean
variables, looks simpler; the mapping from the old states is sketched
below.
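
For reference, the old states map onto the new representation roughly
as follows (inferred from the diff below):

    /* CONNECT_THREAD_NONE             -> !running, !detached, thr->sioc == NULL */
    /* CONNECT_THREAD_RUNNING          ->  running, !detached                    */
    /* CONNECT_THREAD_RUNNING_DETACHED ->  running,  detached                    */
    /* CONNECT_THREAD_SUCCESS          -> !running, thr->sioc != NULL            */
    /* CONNECT_THREAD_FAIL             -> !running, thr->sioc == NULL            */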

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 125 ++++++++++++++--------------------------------------
 1 file changed, 34 insertions(+), 91 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index e1f39eda6c..2b26a033a4 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -66,24 +66,6 @@ typedef enum NBDClientState {
     NBD_CLIENT_QUIT
 } NBDClientState;
 
-typedef enum NBDConnectThreadState {
-    /* No thread, no pending results */
-    CONNECT_THREAD_NONE,
-
-    /* Thread is running, no results for now */
-    CONNECT_THREAD_RUNNING,
-
-    /*
-     * Thread is running, but requestor exited. Thread should close
-     * the new socket and free the connect state on exit.
-     */
-    CONNECT_THREAD_RUNNING_DETACHED,
-
-    /* Thread finished, results are stored in a state */
-    CONNECT_THREAD_FAIL,
-    CONNECT_THREAD_SUCCESS
-} NBDConnectThreadState;
-
 typedef struct NBDConnectThread {
     /* Initialization constants */
     SocketAddress *saddr; /* address to connect to */
@@ -97,7 +79,8 @@ typedef struct NBDConnectThread {
 
     QemuMutex mutex;
     /* All further fields are protected by mutex */
-    NBDConnectThreadState state; /* current state of the thread */
+    bool running; /* thread is running now */
+    bool detached; /* thread is detached and should cleanup the state */
     Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
 } NBDConnectThread;
 
@@ -147,17 +130,17 @@ static void nbd_clear_bdrvstate(BlockDriverState *bs)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     NBDConnectThread *thr = s->connect_thread;
-    bool thr_running;
+    bool do_free;
 
     qemu_mutex_lock(&thr->mutex);
-    thr_running = thr->state == CONNECT_THREAD_RUNNING;
-    if (thr_running) {
-        thr->state = CONNECT_THREAD_RUNNING_DETACHED;
+    if (thr->running) {
+        thr->detached = true;
     }
+    do_free = !thr->running && !thr->detached;
     qemu_mutex_unlock(&thr->mutex);
 
     /* the runaway thread will clean it up itself */
-    if (!thr_running) {
+    if (do_free) {
         nbd_free_connect_thread(thr);
     }
 
@@ -373,7 +356,6 @@ static void nbd_init_connect_thread(BDRVNBDState *s)
 
     *s->connect_thread = (NBDConnectThread) {
         .saddr = QAPI_CLONE(SocketAddress, s->saddr),
-        .state = CONNECT_THREAD_NONE,
     };
 
     qemu_mutex_init(&s->connect_thread->mutex);
@@ -408,20 +390,13 @@ static void *connect_thread_func(void *opaque)
 
     qemu_mutex_lock(&thr->mutex);
 
-    switch (thr->state) {
-    case CONNECT_THREAD_RUNNING:
-        thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
-        if (thr->wait_co) {
-            aio_co_schedule(NULL, thr->wait_co);
-            thr->wait_co = NULL;
-        }
-        break;
-    case CONNECT_THREAD_RUNNING_DETACHED:
-        do_free = true;
-        break;
-    default:
-        abort();
+    assert(thr->running);
+    thr->running = false;
+    if (thr->wait_co) {
+        aio_co_schedule(NULL, thr->wait_co);
+        thr->wait_co = NULL;
     }
+    do_free = thr->detached;
 
     qemu_mutex_unlock(&thr->mutex);
 
@@ -435,36 +410,24 @@ static void *connect_thread_func(void *opaque)
 static int coroutine_fn
 nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
 {
-    int ret;
     QemuThread thread;
     BDRVNBDState *s = bs->opaque;
     NBDConnectThread *thr = s->connect_thread;
 
+    assert(!s->sioc);
+
     qemu_mutex_lock(&thr->mutex);
 
-    switch (thr->state) {
-    case CONNECT_THREAD_FAIL:
-    case CONNECT_THREAD_NONE:
+    if (!thr->running) {
+        if (thr->sioc) {
+            /* Previous attempt finally succeeded in background */
+            goto out;
+        }
+        thr->running = true;
         error_free(thr->err);
         thr->err = NULL;
-        thr->state = CONNECT_THREAD_RUNNING;
         qemu_thread_create(&thread, "nbd-connect",
                            connect_thread_func, thr, QEMU_THREAD_DETACHED);
-        break;
-    case CONNECT_THREAD_SUCCESS:
-        /* Previous attempt finally succeeded in background */
-        thr->state = CONNECT_THREAD_NONE;
-        s->sioc = thr->sioc;
-        thr->sioc = NULL;
-        yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
-                               nbd_yank, bs);
-        qemu_mutex_unlock(&thr->mutex);
-        return 0;
-    case CONNECT_THREAD_RUNNING:
-        /* Already running, will wait */
-        break;
-    default:
-        abort();
     }
 
     thr->wait_co = qemu_coroutine_self();
@@ -479,10 +442,15 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
 
     qemu_mutex_lock(&thr->mutex);
 
-    switch (thr->state) {
-    case CONNECT_THREAD_SUCCESS:
-    case CONNECT_THREAD_FAIL:
-        thr->state = CONNECT_THREAD_NONE;
+out:
+    if (thr->running) {
+        /*
+         * Obviously, drained section wants to start. Report the attempt as
+         * failed. Still connect thread is executing in background, and its
+         * result may be used for next connection attempt.
+         */
+        error_setg(errp, "Connection attempt cancelled by other operation");
+    } else {
         error_propagate(errp, thr->err);
         thr->err = NULL;
         s->sioc = thr->sioc;
@@ -491,33 +459,11 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
             yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
                                    nbd_yank, bs);
         }
-        ret = (s->sioc ? 0 : -1);
-        break;
-    case CONNECT_THREAD_RUNNING:
-    case CONNECT_THREAD_RUNNING_DETACHED:
-        /*
-         * Obviously, drained section wants to start. Report the attempt as
-         * failed. Still connect thread is executing in background, and its
-         * result may be used for next connection attempt.
-         */
-        ret = -1;
-        error_setg(errp, "Connection attempt cancelled by other operation");
-        break;
-
-    case CONNECT_THREAD_NONE:
-        /*
-         * Impossible. We've seen this thread running. So it should be
-         * running or at least give some results.
-         */
-        abort();
-
-    default:
-        abort();
     }
 
     qemu_mutex_unlock(&thr->mutex);
 
-    return ret;
+    return s->sioc ? 0 : -1;
 }
 
 /*
@@ -532,12 +478,9 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
 
     qemu_mutex_lock(&thr->mutex);
 
-    if (thr->state == CONNECT_THREAD_RUNNING) {
-        /* We can cancel only in running state, when bh is not yet scheduled */
-        if (thr->wait_co) {
-            aio_co_schedule(NULL, thr->wait_co);
-            thr->wait_co = NULL;
-        }
+    if (thr->wait_co) {
+        aio_co_schedule(NULL, thr->wait_co);
+        thr->wait_co = NULL;
     }
 
     qemu_mutex_unlock(&thr->mutex);
-- 
2.29.2




* [PATCH v3 09/33] block/nbd: bs-independent interface for nbd_co_establish_connection()
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (7 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 08/33] block/nbd: drop thr->state Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-06-02 19:14   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 10/33] block/nbd: make nbd_co_establish_connection_cancel() bs-independent Vladimir Sementsov-Ogievskiy
                   ` (24 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

We are going to split the connection code into a separate file. Now we
are ready to give nbd_co_establish_connection() a clean, bs-independent
interface; the resulting caller-side pattern is sketched below.
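
The caller-side pattern after this patch (taken from the
nbd_reconnect_attempt() hunk below):

    s->sioc = nbd_co_establish_connection(s->connect_thread, NULL);
    if (!s->sioc) {
        ret = -ECONNREFUSED;
        goto out;
    }

    yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
                           s->bs);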

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
---
 block/nbd.c | 49 +++++++++++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 18 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 2b26a033a4..dd97ea0916 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -121,7 +121,8 @@ typedef struct BDRVNBDState {
 static void nbd_free_connect_thread(NBDConnectThread *thr);
 static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
                                     Error **errp);
-static int nbd_co_establish_connection(BlockDriverState *bs, Error **errp);
+static coroutine_fn QIOChannelSocket *
+nbd_co_establish_connection(NBDConnectThread *thr, Error **errp);
 static void nbd_co_establish_connection_cancel(BlockDriverState *bs);
 static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
 static void nbd_yank(void *opaque);
@@ -407,22 +408,36 @@ static void *connect_thread_func(void *opaque)
     return NULL;
 }
 
-static int coroutine_fn
-nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
+/*
+ * Get a new connection in context of @thr:
+ *   if thread is running, wait for completion
+ *   if thread is already succeeded in background, and user didn't get the
+ *     result, just return it now
+ *   otherwise if thread is not running, start a thread and wait for completion
+ */
+static coroutine_fn QIOChannelSocket *
+nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
 {
+    QIOChannelSocket *sioc = NULL;
     QemuThread thread;
-    BDRVNBDState *s = bs->opaque;
-    NBDConnectThread *thr = s->connect_thread;
-
-    assert(!s->sioc);
 
     qemu_mutex_lock(&thr->mutex);
 
+    /*
+     * Don't call nbd_co_establish_connection() in several coroutines in
+     * parallel. Only one call at once is supported.
+     */
+    assert(!thr->wait_co);
+
     if (!thr->running) {
         if (thr->sioc) {
             /* Previous attempt finally succeeded in background */
-            goto out;
+            sioc = g_steal_pointer(&thr->sioc);
+            qemu_mutex_unlock(&thr->mutex);
+
+            return sioc;
         }
+
         thr->running = true;
         error_free(thr->err);
         thr->err = NULL;
@@ -436,13 +451,12 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
 
     /*
      * We are going to wait for connect-thread finish, but
-     * nbd_client_co_drain_begin() can interrupt.
+     * nbd_co_establish_connection_cancel() can interrupt.
      */
     qemu_coroutine_yield();
 
     qemu_mutex_lock(&thr->mutex);
 
-out:
     if (thr->running) {
         /*
          * Obviously, drained section wants to start. Report the attempt as
@@ -453,17 +467,12 @@ out:
     } else {
         error_propagate(errp, thr->err);
         thr->err = NULL;
-        s->sioc = thr->sioc;
-        thr->sioc = NULL;
-        if (s->sioc) {
-            yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
-                                   nbd_yank, bs);
-        }
+        sioc = g_steal_pointer(&thr->sioc);
     }
 
     qemu_mutex_unlock(&thr->mutex);
 
-    return s->sioc ? 0 : -1;
+    return sioc;
 }
 
 /*
@@ -530,11 +539,15 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         s->ioc = NULL;
     }
 
-    if (nbd_co_establish_connection(s->bs, NULL) < 0) {
+    s->sioc = nbd_co_establish_connection(s->connect_thread, NULL);
+    if (!s->sioc) {
         ret = -ECONNREFUSED;
         goto out;
     }
 
+    yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
+                           s->bs);
+
     bdrv_dec_in_flight(s->bs);
 
     ret = nbd_client_handshake(s->bs, NULL);
-- 
2.29.2




* [PATCH v3 10/33] block/nbd: make nbd_co_establish_connection_cancel() bs-independent
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (8 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 09/33] block/nbd: bs-independent interface for nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-06-02 21:18   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection Vladimir Sementsov-Ogievskiy
                   ` (23 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

nbd_co_establish_connection_cancel() actually needs only a pointer to
the NBDConnectThread. So, clean up its interface accordingly.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
---
 block/nbd.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index dd97ea0916..dab73bdf3b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -123,7 +123,7 @@ static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
                                     Error **errp);
 static coroutine_fn QIOChannelSocket *
 nbd_co_establish_connection(NBDConnectThread *thr, Error **errp);
-static void nbd_co_establish_connection_cancel(BlockDriverState *bs);
+static void nbd_co_establish_connection_cancel(NBDConnectThread *thr);
 static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
 static void nbd_yank(void *opaque);
 
@@ -286,7 +286,7 @@ static void coroutine_fn nbd_client_co_drain_begin(BlockDriverState *bs)
         qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
     }
 
-    nbd_co_establish_connection_cancel(bs);
+    nbd_co_establish_connection_cancel(s->connect_thread);
 
     reconnect_delay_timer_del(s);
 
@@ -326,7 +326,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
         if (s->connection_co_sleep_ns_state) {
             qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
         }
-        nbd_co_establish_connection_cancel(bs);
+        nbd_co_establish_connection_cancel(s->connect_thread);
     }
     if (qemu_in_coroutine()) {
         s->teardown_co = qemu_coroutine_self();
@@ -477,14 +477,12 @@ nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
 
 /*
  * nbd_co_establish_connection_cancel
- * Cancel nbd_co_establish_connection asynchronously: it will finish soon, to
- * allow drained section to begin.
+ * Cancel nbd_co_establish_connection() asynchronously. Note, that it doesn't
+ * stop the thread itself neither close the socket. It just safely wakes
+ * nbd_co_establish_connection() sleeping in the yield().
  */
-static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
+static void nbd_co_establish_connection_cancel(NBDConnectThread *thr)
 {
-    BDRVNBDState *s = bs->opaque;
-    NBDConnectThread *thr = s->connect_thread;
-
     qemu_mutex_lock(&thr->mutex);
 
     if (thr->wait_co) {
-- 
2.29.2




* [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (9 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 10/33] block/nbd: make nbd_co_establish_connection_cancel() bs-independent Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-27 22:28   ` Roman Kagan
  2021-06-02 21:21   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 12/33] block/nbd: introduce nbd_client_connection_new() Vladimir Sementsov-Ogievskiy
                   ` (22 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

We are going to move the connection code to its own file and want clear
names and APIs.

The structure is shared between the user and (possibly) several runs of
the connect thread, so it's wrong to call it "thread". Let's rename it
to something more generic.

Appropriately rename the connect_thread and thr variables to conn.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 137 ++++++++++++++++++++++++++--------------------------
 1 file changed, 68 insertions(+), 69 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index dab73bdf3b..9ce6a323eb 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -66,7 +66,7 @@ typedef enum NBDClientState {
     NBD_CLIENT_QUIT
 } NBDClientState;
 
-typedef struct NBDConnectThread {
+typedef struct NBDClientConnection {
     /* Initialization constants */
     SocketAddress *saddr; /* address to connect to */
 
@@ -82,7 +82,7 @@ typedef struct NBDConnectThread {
     bool running; /* thread is running now */
     bool detached; /* thread is detached and should cleanup the state */
     Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
-} NBDConnectThread;
+} NBDClientConnection;
 
 typedef struct BDRVNBDState {
     QIOChannelSocket *sioc; /* The master data channel */
@@ -115,34 +115,34 @@ typedef struct BDRVNBDState {
     char *x_dirty_bitmap;
     bool alloc_depth;
 
-    NBDConnectThread *connect_thread;
+    NBDClientConnection *conn;
 } BDRVNBDState;
 
-static void nbd_free_connect_thread(NBDConnectThread *thr);
+static void nbd_free_connect_thread(NBDClientConnection *conn);
 static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
                                     Error **errp);
 static coroutine_fn QIOChannelSocket *
-nbd_co_establish_connection(NBDConnectThread *thr, Error **errp);
-static void nbd_co_establish_connection_cancel(NBDConnectThread *thr);
+nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
+static void nbd_co_establish_connection_cancel(NBDClientConnection *conn);
 static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
 static void nbd_yank(void *opaque);
 
 static void nbd_clear_bdrvstate(BlockDriverState *bs)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-    NBDConnectThread *thr = s->connect_thread;
+    NBDClientConnection *conn = s->conn;
     bool do_free;
 
-    qemu_mutex_lock(&thr->mutex);
-    if (thr->running) {
-        thr->detached = true;
+    qemu_mutex_lock(&conn->mutex);
+    if (conn->running) {
+        conn->detached = true;
     }
-    do_free = !thr->running && !thr->detached;
-    qemu_mutex_unlock(&thr->mutex);
+    do_free = !conn->running && !conn->detached;
+    qemu_mutex_unlock(&conn->mutex);
 
     /* the runaway thread will clean it up itself */
     if (do_free) {
-        nbd_free_connect_thread(thr);
+        nbd_free_connect_thread(conn);
     }
 
     yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
@@ -286,7 +286,7 @@ static void coroutine_fn nbd_client_co_drain_begin(BlockDriverState *bs)
         qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
     }
 
-    nbd_co_establish_connection_cancel(s->connect_thread);
+    nbd_co_establish_connection_cancel(s->conn);
 
     reconnect_delay_timer_del(s);
 
@@ -326,7 +326,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
         if (s->connection_co_sleep_ns_state) {
             qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
         }
-        nbd_co_establish_connection_cancel(s->connect_thread);
+        nbd_co_establish_connection_cancel(s->conn);
     }
     if (qemu_in_coroutine()) {
         s->teardown_co = qemu_coroutine_self();
@@ -353,101 +353,101 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
 
 static void nbd_init_connect_thread(BDRVNBDState *s)
 {
-    s->connect_thread = g_new(NBDConnectThread, 1);
+    s->conn = g_new(NBDClientConnection, 1);
 
-    *s->connect_thread = (NBDConnectThread) {
+    *s->conn = (NBDClientConnection) {
         .saddr = QAPI_CLONE(SocketAddress, s->saddr),
     };
 
-    qemu_mutex_init(&s->connect_thread->mutex);
+    qemu_mutex_init(&s->conn->mutex);
 }
 
-static void nbd_free_connect_thread(NBDConnectThread *thr)
+static void nbd_free_connect_thread(NBDClientConnection *conn)
 {
-    if (thr->sioc) {
-        qio_channel_close(QIO_CHANNEL(thr->sioc), NULL);
-        object_unref(OBJECT(thr->sioc));
+    if (conn->sioc) {
+        qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
+        object_unref(OBJECT(conn->sioc));
     }
-    error_free(thr->err);
-    qapi_free_SocketAddress(thr->saddr);
-    g_free(thr);
+    error_free(conn->err);
+    qapi_free_SocketAddress(conn->saddr);
+    g_free(conn);
 }
 
 static void *connect_thread_func(void *opaque)
 {
-    NBDConnectThread *thr = opaque;
+    NBDClientConnection *conn = opaque;
     int ret;
     bool do_free = false;
 
-    thr->sioc = qio_channel_socket_new();
+    conn->sioc = qio_channel_socket_new();
 
-    error_free(thr->err);
-    thr->err = NULL;
-    ret = qio_channel_socket_connect_sync(thr->sioc, thr->saddr, &thr->err);
+    error_free(conn->err);
+    conn->err = NULL;
+    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
     if (ret < 0) {
-        object_unref(OBJECT(thr->sioc));
-        thr->sioc = NULL;
+        object_unref(OBJECT(conn->sioc));
+        conn->sioc = NULL;
     }
 
-    qemu_mutex_lock(&thr->mutex);
+    qemu_mutex_lock(&conn->mutex);
 
-    assert(thr->running);
-    thr->running = false;
-    if (thr->wait_co) {
-        aio_co_schedule(NULL, thr->wait_co);
-        thr->wait_co = NULL;
+    assert(conn->running);
+    conn->running = false;
+    if (conn->wait_co) {
+        aio_co_schedule(NULL, conn->wait_co);
+        conn->wait_co = NULL;
     }
-    do_free = thr->detached;
+    do_free = conn->detached;
 
-    qemu_mutex_unlock(&thr->mutex);
+    qemu_mutex_unlock(&conn->mutex);
 
     if (do_free) {
-        nbd_free_connect_thread(thr);
+        nbd_free_connect_thread(conn);
     }
 
     return NULL;
 }
 
 /*
- * Get a new connection in context of @thr:
+ * Get a new connection in context of @conn:
  *   if thread is running, wait for completion
  *   if thread is already succeeded in background, and user didn't get the
  *     result, just return it now
  *   otherwise if thread is not running, start a thread and wait for completion
  */
 static coroutine_fn QIOChannelSocket *
-nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
+nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
 {
     QIOChannelSocket *sioc = NULL;
     QemuThread thread;
 
-    qemu_mutex_lock(&thr->mutex);
+    qemu_mutex_lock(&conn->mutex);
 
     /*
      * Don't call nbd_co_establish_connection() in several coroutines in
      * parallel. Only one call at once is supported.
      */
-    assert(!thr->wait_co);
+    assert(!conn->wait_co);
 
-    if (!thr->running) {
-        if (thr->sioc) {
+    if (!conn->running) {
+        if (conn->sioc) {
             /* Previous attempt finally succeeded in background */
-            sioc = g_steal_pointer(&thr->sioc);
-            qemu_mutex_unlock(&thr->mutex);
+            sioc = g_steal_pointer(&conn->sioc);
+            qemu_mutex_unlock(&conn->mutex);
 
             return sioc;
         }
 
-        thr->running = true;
-        error_free(thr->err);
-        thr->err = NULL;
+        conn->running = true;
+        error_free(conn->err);
+        conn->err = NULL;
         qemu_thread_create(&thread, "nbd-connect",
-                           connect_thread_func, thr, QEMU_THREAD_DETACHED);
+                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
     }
 
-    thr->wait_co = qemu_coroutine_self();
+    conn->wait_co = qemu_coroutine_self();
 
-    qemu_mutex_unlock(&thr->mutex);
+    qemu_mutex_unlock(&conn->mutex);
 
     /*
      * We are going to wait for connect-thread finish, but
@@ -455,9 +455,9 @@ nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
      */
     qemu_coroutine_yield();
 
-    qemu_mutex_lock(&thr->mutex);
+    qemu_mutex_lock(&conn->mutex);
 
-    if (thr->running) {
+    if (conn->running) {
         /*
          * Obviously, drained section wants to start. Report the attempt as
          * failed. Still connect thread is executing in background, and its
@@ -465,12 +465,12 @@ nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
          */
         error_setg(errp, "Connection attempt cancelled by other operation");
     } else {
-        error_propagate(errp, thr->err);
-        thr->err = NULL;
-        sioc = g_steal_pointer(&thr->sioc);
+        error_propagate(errp, conn->err);
+        conn->err = NULL;
+        sioc = g_steal_pointer(&conn->sioc);
     }
 
-    qemu_mutex_unlock(&thr->mutex);
+    qemu_mutex_unlock(&conn->mutex);
 
     return sioc;
 }
@@ -481,17 +481,16 @@ nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
  * stop the thread itself neither close the socket. It just safely wakes
  * nbd_co_establish_connection() sleeping in the yield().
  */
-static void nbd_co_establish_connection_cancel(NBDConnectThread *thr)
+static void nbd_co_establish_connection_cancel(NBDClientConnection *conn)
 {
-    qemu_mutex_lock(&thr->mutex);
+    qemu_mutex_lock(&conn->mutex);
 
-    if (thr->wait_co) {
-        aio_co_schedule(NULL, thr->wait_co);
-        thr->wait_co = NULL;
+    if (conn->wait_co) {
+        aio_co_schedule(NULL, conn->wait_co);
+        conn->wait_co = NULL;
     }
 
-    qemu_mutex_unlock(&thr->mutex);
-
+    qemu_mutex_unlock(&conn->mutex);
 }
 
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
@@ -537,7 +536,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         s->ioc = NULL;
     }
 
-    s->sioc = nbd_co_establish_connection(s->connect_thread, NULL);
+    s->sioc = nbd_co_establish_connection(s->conn, NULL);
     if (!s->sioc) {
         ret = -ECONNREFUSED;
         goto out;
-- 
2.29.2




* [PATCH v3 12/33] block/nbd: introduce nbd_client_connection_new()
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (10 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-06-02 21:22   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release() Vladimir Sementsov-Ogievskiy
                   ` (21 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

This is the last step in creating a bs-independent NBD connection
interface. With the next commit we can finally move it to a separate
file.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
---
 block/nbd.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 9ce6a323eb..21a4039359 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -351,15 +351,18 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
     return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
 }
 
-static void nbd_init_connect_thread(BDRVNBDState *s)
+static NBDClientConnection *
+nbd_client_connection_new(const SocketAddress *saddr)
 {
-    s->conn = g_new(NBDClientConnection, 1);
+    NBDClientConnection *conn = g_new(NBDClientConnection, 1);
 
-    *s->conn = (NBDClientConnection) {
-        .saddr = QAPI_CLONE(SocketAddress, s->saddr),
+    *conn = (NBDClientConnection) {
+        .saddr = QAPI_CLONE(SocketAddress, saddr),
     };
 
-    qemu_mutex_init(&s->conn->mutex);
+    qemu_mutex_init(&conn->mutex);
+
+    return conn;
 }
 
 static void nbd_free_connect_thread(NBDClientConnection *conn)
@@ -2208,7 +2211,7 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
         goto fail;
     }
 
-    nbd_init_connect_thread(s);
+    s->conn = nbd_client_connection_new(s->saddr);
 
     /*
      * establish TCP connection, return error if it fails
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release()
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (11 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 12/33] block/nbd: introduce nbd_client_connection_new() Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-27 22:35   ` Roman Kagan
  2021-06-02 21:27   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection Vladimir Sementsov-Ogievskiy
                   ` (20 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 43 ++++++++++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 21a4039359..8531d019b2 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -118,7 +118,7 @@ typedef struct BDRVNBDState {
     NBDClientConnection *conn;
 } BDRVNBDState;
 
-static void nbd_free_connect_thread(NBDClientConnection *conn);
+static void nbd_client_connection_release(NBDClientConnection *conn);
 static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
                                     Error **errp);
 static coroutine_fn QIOChannelSocket *
@@ -130,20 +130,9 @@ static void nbd_yank(void *opaque);
 static void nbd_clear_bdrvstate(BlockDriverState *bs)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-    NBDClientConnection *conn = s->conn;
-    bool do_free;
-
-    qemu_mutex_lock(&conn->mutex);
-    if (conn->running) {
-        conn->detached = true;
-    }
-    do_free = !conn->running && !conn->detached;
-    qemu_mutex_unlock(&conn->mutex);
 
-    /* the runaway thread will clean it up itself */
-    if (do_free) {
-        nbd_free_connect_thread(conn);
-    }
+    nbd_client_connection_release(s->conn);
+    s->conn = NULL;
 
     yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
 
@@ -365,7 +354,7 @@ nbd_client_connection_new(const SocketAddress *saddr)
     return conn;
 }
 
-static void nbd_free_connect_thread(NBDClientConnection *conn)
+static void nbd_client_connection_do_free(NBDClientConnection *conn)
 {
     if (conn->sioc) {
         qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
@@ -379,8 +368,8 @@ static void nbd_free_connect_thread(NBDClientConnection *conn)
 static void *connect_thread_func(void *opaque)
 {
     NBDClientConnection *conn = opaque;
+    bool do_free;
     int ret;
-    bool do_free = false;
 
     conn->sioc = qio_channel_socket_new();
 
@@ -405,12 +394,32 @@ static void *connect_thread_func(void *opaque)
     qemu_mutex_unlock(&conn->mutex);
 
     if (do_free) {
-        nbd_free_connect_thread(conn);
+        nbd_client_connection_do_free(conn);
     }
 
     return NULL;
 }
 
+static void nbd_client_connection_release(NBDClientConnection *conn)
+{
+    bool do_free;
+
+    if (!conn) {
+        return;
+    }
+
+    qemu_mutex_lock(&conn->mutex);
+    if (conn->running) {
+        conn->detached = true;
+    }
+    do_free = !conn->running && !conn->detached;
+    qemu_mutex_unlock(&conn->mutex);
+
+    if (do_free) {
+        nbd_client_connection_do_free(conn);
+    }
+}
+
 /*
  * Get a new connection in the context of @conn:
  *   if the thread is running, wait for its completion;
-- 
2.29.2
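
The resulting lifecycle is a detach protocol rather than a join: whichever side finishes last frees the state. A hedged usage sketch inside block/nbd.c (give_up_connection() is an invented caller):

    /*
     * sketch: drop our reference while the thread may still be blocked
     * in qio_channel_socket_connect_sync()
     */
    static void give_up_connection(NBDClientConnection *conn)
    {
        /*
         * If the thread is still running, it is marked detached and will
         * free the state itself via nbd_client_connection_do_free() when
         * it finishes; otherwise the state is freed right here. Either
         * way, @conn must not be used after this call.
         */
        nbd_client_connection_release(conn);
    }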



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (12 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release() Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-27 22:45   ` Roman Kagan
  2021-06-03 15:55   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 15/33] nbd/client-connection: use QEMU_LOCK_GUARD Vladimir Sementsov-Ogievskiy
                   ` (19 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

We now have a bs-independent connection API, which consists of four
functions:

  nbd_client_connection_new()
  nbd_client_connection_release()
  nbd_co_establish_connection()
  nbd_co_establish_connection_cancel()

Move them to a separate file, together with the NBDClientConnection
structure, which becomes private to the new API.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h     |  11 +++
 block/nbd.c             | 187 -----------------------------------
 nbd/client-connection.c | 212 ++++++++++++++++++++++++++++++++++++++++
 nbd/meson.build         |   1 +
 4 files changed, 224 insertions(+), 187 deletions(-)
 create mode 100644 nbd/client-connection.c

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 5f34d23bb0..57381be76f 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -406,4 +406,15 @@ const char *nbd_info_lookup(uint16_t info);
 const char *nbd_cmd_lookup(uint16_t info);
 const char *nbd_err_lookup(int err);
 
+/* nbd/client-connection.c */
+typedef struct NBDClientConnection NBDClientConnection;
+
+NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
+void nbd_client_connection_release(NBDClientConnection *conn);
+
+QIOChannelSocket *coroutine_fn
+nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
+
+void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
+
 #endif
diff --git a/block/nbd.c b/block/nbd.c
index 8531d019b2..9bd68dcf10 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -66,24 +66,6 @@ typedef enum NBDClientState {
     NBD_CLIENT_QUIT
 } NBDClientState;
 
-typedef struct NBDClientConnection {
-    /* Initialization constants */
-    SocketAddress *saddr; /* address to connect to */
-
-    /*
-     * Result of the last attempt, valid while the thread is not running.
-     * If you want to steal the error, don't forget to set the pointer to NULL.
-     */
-    QIOChannelSocket *sioc;
-    Error *err;
-
-    QemuMutex mutex;
-    /* All further fields are protected by mutex */
-    bool running; /* thread is running now */
-    bool detached; /* thread is detached and should cleanup the state */
-    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
-} NBDClientConnection;
-
 typedef struct BDRVNBDState {
     QIOChannelSocket *sioc; /* The master data channel */
     QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
@@ -118,12 +100,8 @@ typedef struct BDRVNBDState {
     NBDClientConnection *conn;
 } BDRVNBDState;
 
-static void nbd_client_connection_release(NBDClientConnection *conn);
 static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
                                     Error **errp);
-static coroutine_fn QIOChannelSocket *
-nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
-static void nbd_co_establish_connection_cancel(NBDClientConnection *conn);
 static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
 static void nbd_yank(void *opaque);
 
@@ -340,171 +318,6 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
     return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
 }
 
-static NBDClientConnection *
-nbd_client_connection_new(const SocketAddress *saddr)
-{
-    NBDClientConnection *conn = g_new(NBDClientConnection, 1);
-
-    *conn = (NBDClientConnection) {
-        .saddr = QAPI_CLONE(SocketAddress, saddr),
-    };
-
-    qemu_mutex_init(&conn->mutex);
-
-    return conn;
-}
-
-static void nbd_client_connection_do_free(NBDClientConnection *conn)
-{
-    if (conn->sioc) {
-        qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
-        object_unref(OBJECT(conn->sioc));
-    }
-    error_free(conn->err);
-    qapi_free_SocketAddress(conn->saddr);
-    g_free(conn);
-}
-
-static void *connect_thread_func(void *opaque)
-{
-    NBDClientConnection *conn = opaque;
-    bool do_free;
-    int ret;
-
-    conn->sioc = qio_channel_socket_new();
-
-    error_free(conn->err);
-    conn->err = NULL;
-    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
-    if (ret < 0) {
-        object_unref(OBJECT(conn->sioc));
-        conn->sioc = NULL;
-    }
-
-    qemu_mutex_lock(&conn->mutex);
-
-    assert(conn->running);
-    conn->running = false;
-    if (conn->wait_co) {
-        aio_co_schedule(NULL, conn->wait_co);
-        conn->wait_co = NULL;
-    }
-    do_free = conn->detached;
-
-    qemu_mutex_unlock(&conn->mutex);
-
-    if (do_free) {
-        nbd_client_connection_do_free(conn);
-    }
-
-    return NULL;
-}
-
-static void nbd_client_connection_release(NBDClientConnection *conn)
-{
-    bool do_free;
-
-    if (!conn) {
-        return;
-    }
-
-    qemu_mutex_lock(&conn->mutex);
-    if (conn->running) {
-        conn->detached = true;
-    }
-    do_free = !conn->running && !conn->detached;
-    qemu_mutex_unlock(&conn->mutex);
-
-    if (do_free) {
-        nbd_client_connection_do_free(conn);
-    }
-}
-
-/*
- * Get a new connection in the context of @conn:
- *   if the thread is running, wait for its completion;
- *   if the thread has already succeeded in the background, and the user
- *     hasn't picked up the result yet, just return it now;
- *   otherwise start a new thread and wait for its completion.
- */
-static coroutine_fn QIOChannelSocket *
-nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
-{
-    QIOChannelSocket *sioc = NULL;
-    QemuThread thread;
-
-    qemu_mutex_lock(&conn->mutex);
-
-    /*
-     * Don't call nbd_co_establish_connection() in several coroutines in
-     * parallel. Only one call at once is supported.
-     */
-    assert(!conn->wait_co);
-
-    if (!conn->running) {
-        if (conn->sioc) {
-            /* Previous attempt finally succeeded in background */
-            sioc = g_steal_pointer(&conn->sioc);
-            qemu_mutex_unlock(&conn->mutex);
-
-            return sioc;
-        }
-
-        conn->running = true;
-        error_free(conn->err);
-        conn->err = NULL;
-        qemu_thread_create(&thread, "nbd-connect",
-                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
-    }
-
-    conn->wait_co = qemu_coroutine_self();
-
-    qemu_mutex_unlock(&conn->mutex);
-
-    /*
-     * We are going to wait for the connect thread to finish, but
-     * nbd_co_establish_connection_cancel() can interrupt.
-     */
-    qemu_coroutine_yield();
-
-    qemu_mutex_lock(&conn->mutex);
-
-    if (conn->running) {
-        /*
-         * We were woken by cancel: most likely a drained section wants to
-         * start. Report the attempt as failed. The connect thread still runs
-         * in the background and its result may be used for the next attempt.
-         */
-        error_setg(errp, "Connection attempt cancelled by other operation");
-    } else {
-        error_propagate(errp, conn->err);
-        conn->err = NULL;
-        sioc = g_steal_pointer(&conn->sioc);
-    }
-
-    qemu_mutex_unlock(&conn->mutex);
-
-    return sioc;
-}
-
-/*
- * nbd_co_establish_connection_cancel
- * Cancel nbd_co_establish_connection() asynchronously. Note that it doesn't
- * stop the thread itself or close the socket. It just safely wakes
- * nbd_co_establish_connection() sleeping in the yield().
- */
-static void nbd_co_establish_connection_cancel(NBDClientConnection *conn)
-{
-    qemu_mutex_lock(&conn->mutex);
-
-    if (conn->wait_co) {
-        aio_co_schedule(NULL, conn->wait_co);
-        conn->wait_co = NULL;
-    }
-
-    qemu_mutex_unlock(&conn->mutex);
-}
-
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
     int ret;
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
new file mode 100644
index 0000000000..4e39a5b1af
--- /dev/null
+++ b/nbd/client-connection.c
@@ -0,0 +1,212 @@
+/*
+ * QEMU Block driver for  NBD
+ *
+ * Copyright (c) 2021 Virtuozzo International GmbH.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+
+#include "block/nbd.h"
+
+#include "qapi/qapi-visit-sockets.h"
+#include "qapi/clone-visitor.h"
+
+struct NBDClientConnection {
+    /* Initialization constants */
+    SocketAddress *saddr; /* address to connect to */
+
+    /*
+     * Result of the last attempt, valid while the thread is not running.
+     * If you want to steal the error, don't forget to set the pointer to NULL.
+     */
+    QIOChannelSocket *sioc;
+    Error *err;
+
+    QemuMutex mutex;
+    /* All further fields are protected by mutex */
+    bool running; /* thread is running now */
+    bool detached; /* thread is detached and should cleanup the state */
+    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
+};
+
+NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
+{
+    NBDClientConnection *conn = g_new(NBDClientConnection, 1);
+
+    *conn = (NBDClientConnection) {
+        .saddr = QAPI_CLONE(SocketAddress, saddr),
+    };
+
+    qemu_mutex_init(&conn->mutex);
+
+    return conn;
+}
+
+static void nbd_client_connection_do_free(NBDClientConnection *conn)
+{
+    if (conn->sioc) {
+        qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
+        object_unref(OBJECT(conn->sioc));
+    }
+    error_free(conn->err);
+    qapi_free_SocketAddress(conn->saddr);
+    g_free(conn);
+}
+
+static void *connect_thread_func(void *opaque)
+{
+    NBDClientConnection *conn = opaque;
+    bool do_free;
+    int ret;
+
+    conn->sioc = qio_channel_socket_new();
+
+    error_free(conn->err);
+    conn->err = NULL;
+    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
+    if (ret < 0) {
+        object_unref(OBJECT(conn->sioc));
+        conn->sioc = NULL;
+    }
+
+    qemu_mutex_lock(&conn->mutex);
+
+    assert(conn->running);
+    conn->running = false;
+    if (conn->wait_co) {
+        aio_co_schedule(NULL, conn->wait_co);
+        conn->wait_co = NULL;
+    }
+    do_free = conn->detached;
+
+    qemu_mutex_unlock(&conn->mutex);
+
+    if (do_free) {
+        nbd_client_connection_do_free(conn);
+    }
+
+    return NULL;
+}
+
+void nbd_client_connection_release(NBDClientConnection *conn)
+{
+    bool do_free;
+
+    if (!conn) {
+        return;
+    }
+
+    qemu_mutex_lock(&conn->mutex);
+    if (conn->running) {
+        conn->detached = true;
+    }
+    do_free = !conn->running && !conn->detached;
+    qemu_mutex_unlock(&conn->mutex);
+
+    if (do_free) {
+        nbd_client_connection_do_free(conn);
+    }
+}
+
+/*
+ * Get a new connection in the context of @conn:
+ *   if the thread is running, wait for its completion;
+ *   if the thread has already succeeded in the background, and the user
+ *     hasn't picked up the result yet, just return it now;
+ *   otherwise start a new thread and wait for its completion.
+ */
+QIOChannelSocket *coroutine_fn
+nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
+{
+    QIOChannelSocket *sioc = NULL;
+    QemuThread thread;
+
+    qemu_mutex_lock(&conn->mutex);
+
+    /*
+     * Don't call nbd_co_establish_connection() in several coroutines in
+     * parallel. Only one call at once is supported.
+     */
+    assert(!conn->wait_co);
+
+    if (!conn->running) {
+        if (conn->sioc) {
+            /* Previous attempt finally succeeded in background */
+            sioc = g_steal_pointer(&conn->sioc);
+            qemu_mutex_unlock(&conn->mutex);
+
+            return sioc;
+        }
+
+        conn->running = true;
+        error_free(conn->err);
+        conn->err = NULL;
+        qemu_thread_create(&thread, "nbd-connect",
+                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
+    }
+
+    conn->wait_co = qemu_coroutine_self();
+
+    qemu_mutex_unlock(&conn->mutex);
+
+    /*
+     * We are going to wait for the connect thread to finish, but
+     * nbd_co_establish_connection_cancel() can interrupt.
+     */
+    qemu_coroutine_yield();
+
+    qemu_mutex_lock(&conn->mutex);
+
+    if (conn->running) {
+        /*
+         * We were woken by cancel: most likely a drained section wants to
+         * start. Report the attempt as failed. The connect thread still runs
+         * in the background and its result may be used for the next attempt.
+         */
+        error_setg(errp, "Connection attempt cancelled by other operation");
+    } else {
+        error_propagate(errp, conn->err);
+        conn->err = NULL;
+        sioc = g_steal_pointer(&conn->sioc);
+    }
+
+    qemu_mutex_unlock(&conn->mutex);
+
+    return sioc;
+}
+
+/*
+ * nbd_co_establish_connection_cancel
+ * Cancel nbd_co_establish_connection() asynchronously. Note that it doesn't
+ * stop the thread itself or close the socket. It just safely wakes
+ * nbd_co_establish_connection() sleeping in the yield().
+ */
+void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn)
+{
+    qemu_mutex_lock(&conn->mutex);
+
+    if (conn->wait_co) {
+        aio_co_schedule(NULL, conn->wait_co);
+        conn->wait_co = NULL;
+    }
+
+    qemu_mutex_unlock(&conn->mutex);
+}
diff --git a/nbd/meson.build b/nbd/meson.build
index 2baaa36948..b26d70565e 100644
--- a/nbd/meson.build
+++ b/nbd/meson.build
@@ -1,5 +1,6 @@
 block_ss.add(files(
   'client.c',
+  'client-connection.c',
   'common.c',
 ))
 blockdev_ss.add(files(
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 15/33] nbd/client-connection: use QEMU_LOCK_GUARD
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (13 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-28  6:08   ` Roman Kagan
  2021-04-16  8:08 ` [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation Vladimir Sementsov-Ogievskiy
                   ` (18 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 nbd/client-connection.c | 94 ++++++++++++++++++-----------------------
 1 file changed, 42 insertions(+), 52 deletions(-)

diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 4e39a5b1af..b45a0bd5f6 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -87,17 +87,16 @@ static void *connect_thread_func(void *opaque)
         conn->sioc = NULL;
     }
 
-    qemu_mutex_lock(&conn->mutex);
-
-    assert(conn->running);
-    conn->running = false;
-    if (conn->wait_co) {
-        aio_co_schedule(NULL, conn->wait_co);
-        conn->wait_co = NULL;
+    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
+        assert(conn->running);
+        conn->running = false;
+        if (conn->wait_co) {
+            aio_co_schedule(NULL, conn->wait_co);
+            conn->wait_co = NULL;
+        }
     }
     do_free = conn->detached;
 
-    qemu_mutex_unlock(&conn->mutex);
 
     if (do_free) {
         nbd_client_connection_do_free(conn);
@@ -136,61 +135,54 @@ void nbd_client_connection_release(NBDClientConnection *conn)
 QIOChannelSocket *coroutine_fn
 nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
 {
-    QIOChannelSocket *sioc = NULL;
     QemuThread thread;
 
-    qemu_mutex_lock(&conn->mutex);
-
-    /*
-     * Don't call nbd_co_establish_connection() in several coroutines in
-     * parallel. Only one call at once is supported.
-     */
-    assert(!conn->wait_co);
-
-    if (!conn->running) {
-        if (conn->sioc) {
-            /* Previous attempt finally succeeded in background */
-            sioc = g_steal_pointer(&conn->sioc);
-            qemu_mutex_unlock(&conn->mutex);
-
-            return sioc;
+    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
+        /*
+         * Don't call nbd_co_establish_connection() in several coroutines in
+         * parallel. Only one call at once is supported.
+         */
+        assert(!conn->wait_co);
+
+        if (!conn->running) {
+            if (conn->sioc) {
+                /* Previous attempt finally succeeded in background */
+                return g_steal_pointer(&conn->sioc);
+            }
+
+            conn->running = true;
+            error_free(conn->err);
+            conn->err = NULL;
+            qemu_thread_create(&thread, "nbd-connect",
+                               connect_thread_func, conn, QEMU_THREAD_DETACHED);
         }
 
-        conn->running = true;
-        error_free(conn->err);
-        conn->err = NULL;
-        qemu_thread_create(&thread, "nbd-connect",
-                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
+        conn->wait_co = qemu_coroutine_self();
     }
 
-    conn->wait_co = qemu_coroutine_self();
-
-    qemu_mutex_unlock(&conn->mutex);
-
     /*
      * We are going to wait for the connect thread to finish, but
      * nbd_co_establish_connection_cancel() can interrupt.
      */
     qemu_coroutine_yield();
 
-    qemu_mutex_lock(&conn->mutex);
-
-    if (conn->running) {
-        /*
-         * We were woken by cancel: most likely a drained section wants to
-         * start. Report the attempt as failed. The connect thread still runs
-         * in the background and its result may be used for the next attempt.
-         */
-        error_setg(errp, "Connection attempt cancelled by other operation");
-    } else {
-        error_propagate(errp, conn->err);
-        conn->err = NULL;
-        sioc = g_steal_pointer(&conn->sioc);
+    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
+        if (conn->running) {
+            /*
+             * We were woken by cancel: most likely a drained section wants to
+             * start. Report the attempt as failed. The connect thread still
+             * runs in the background and its result may be used for the next attempt.
+             */
+            error_setg(errp, "Connection attempt cancelled by other operation");
+            return NULL;
+        } else {
+            error_propagate(errp, conn->err);
+            conn->err = NULL;
+            return g_steal_pointer(&conn->sioc);
+        }
     }
 
-    qemu_mutex_unlock(&conn->mutex);
-
-    return sioc;
+    abort(); /* unreachable */
 }
 
 /*
@@ -201,12 +193,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
  */
 void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn)
 {
-    qemu_mutex_lock(&conn->mutex);
+    QEMU_LOCK_GUARD(&conn->mutex);
 
     if (conn->wait_co) {
         aio_co_schedule(NULL, conn->wait_co);
         conn->wait_co = NULL;
     }
-
-    qemu_mutex_unlock(&conn->mutex);
 }
-- 
2.29.2
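
For reference, both forms come from include/qemu/lockable.h: QEMU_LOCK_GUARD() holds the mutex until the end of the enclosing function scope, while WITH_QEMU_LOCK_GUARD() holds it for one block and also releases it on an early return; that is what makes the early 'return g_steal_pointer(&conn->sioc)' inside the guarded block correct. A minimal standalone sketch:

    #include "qemu/osdep.h"
    #include "qemu/thread.h"
    #include "qemu/lockable.h"

    static int take_counter(QemuMutex *lock, int *counter)
    {
        QEMU_LOCK_GUARD(lock); /* dropped when the function returns */

        int val = *counter;
        *counter = 0;
        return val;
    }

    static bool counter_is_set(QemuMutex *lock, int *counter)
    {
        WITH_QEMU_LOCK_GUARD(lock) { /* dropped at block exit... */
            if (*counter) {
                return true; /* ...including early returns like this one */
            }
        }
        return false;
    }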



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (14 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 15/33] nbd/client-connection: use QEMU_LOCK_GUARD Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-05-11 10:45   ` Roman Kagan
  2021-04-16  8:08 ` [PATCH v3 17/33] nbd/client-connection: implement connection retry Vladimir Sementsov-Ogievskiy
                   ` (17 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Add arguments and logic to support NBD negotiation in the same thread,
immediately after a successful connection.
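
A sketch of how a negotiating caller consumes the new out parameters (attempt_connect() is an invented wrapper; with do_negotiation set, @info is fully populated on success and @ioc is non-NULL only when TLS wrapped the socket channel):

    #include "qemu/osdep.h"
    #include "block/nbd.h"

    static coroutine_fn int attempt_connect(NBDClientConnection *conn,
                                            Error **errp)
    {
        NBDExportInfo info;
        QIOChannel *ioc = NULL;
        QIOChannelSocket *sioc;

        sioc = nbd_co_establish_connection(conn, &info, &ioc, errp);
        if (!sioc) {
            return -ECONNREFUSED;
        }

        if (!ioc) {
            /* no TLS: the data channel is the socket channel itself */
            ioc = QIO_CHANNEL(sioc);
            object_ref(OBJECT(ioc));
        }

        /* @ioc is ready for NBD requests; @info holds the export details */
        return 0;
    }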

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h     |   9 +++-
 block/nbd.c             |   4 +-
 nbd/client-connection.c | 105 ++++++++++++++++++++++++++++++++++++++--
 3 files changed, 109 insertions(+), 9 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 57381be76f..5d86e6a393 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -409,11 +409,16 @@ const char *nbd_err_lookup(int err);
 /* nbd/client-connection.c */
 typedef struct NBDClientConnection NBDClientConnection;
 
-NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
+NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
+                                               bool do_negotiation,
+                                               const char *export_name,
+                                               const char *x_dirty_bitmap,
+                                               QCryptoTLSCreds *tlscreds);
 void nbd_client_connection_release(NBDClientConnection *conn);
 
 QIOChannelSocket *coroutine_fn
-nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
+nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
+                            QIOChannel **ioc, Error **errp);
 
 void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
 
diff --git a/block/nbd.c b/block/nbd.c
index 9bd68dcf10..5e63caaf4b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -361,7 +361,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         s->ioc = NULL;
     }
 
-    s->sioc = nbd_co_establish_connection(s->conn, NULL);
+    s->sioc = nbd_co_establish_connection(s->conn, NULL, NULL, NULL);
     if (!s->sioc) {
         ret = -ECONNREFUSED;
         goto out;
@@ -2033,7 +2033,7 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
         goto fail;
     }
 
-    s->conn = nbd_client_connection_new(s->saddr);
+    s->conn = nbd_client_connection_new(s->saddr, false, NULL, NULL, NULL);
 
     /*
      * establish TCP connection, return error if it fails
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index b45a0bd5f6..ae4a77f826 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -30,14 +30,19 @@
 #include "qapi/clone-visitor.h"
 
 struct NBDClientConnection {
-    /* Initialization constants */
+    /* Initialization constants, never change */
     SocketAddress *saddr; /* address to connect to */
+    QCryptoTLSCreds *tlscreds;
+    NBDExportInfo initial_info;
+    bool do_negotiation;
 
     /*
      * Result of the last attempt, valid while the thread is not running.
      * If you want to steal the error, don't forget to set the pointer to NULL.
      */
+    NBDExportInfo updated_info;
     QIOChannelSocket *sioc;
+    QIOChannel *ioc;
     Error *err;
 
     QemuMutex mutex;
@@ -47,12 +52,25 @@ struct NBDClientConnection {
     Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
 };
 
-NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
+NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
+                                               bool do_negotiation,
+                                               const char *export_name,
+                                               const char *x_dirty_bitmap,
+                                               QCryptoTLSCreds *tlscreds)
 {
     NBDClientConnection *conn = g_new(NBDClientConnection, 1);
 
+    object_ref(OBJECT(tlscreds));
     *conn = (NBDClientConnection) {
         .saddr = QAPI_CLONE(SocketAddress, saddr),
+        .tlscreds = tlscreds,
+        .do_negotiation = do_negotiation,
+
+        .initial_info.request_sizes = true,
+        .initial_info.structured_reply = true,
+        .initial_info.base_allocation = true,
+        .initial_info.x_dirty_bitmap = g_strdup(x_dirty_bitmap),
+        .initial_info.name = g_strdup(export_name ?: "")
     };
 
     qemu_mutex_init(&conn->mutex);
@@ -68,9 +86,59 @@ static void nbd_client_connection_do_free(NBDClientConnection *conn)
     }
     error_free(conn->err);
     qapi_free_SocketAddress(conn->saddr);
+    object_unref(OBJECT(conn->tlscreds));
+    g_free(conn->initial_info.x_dirty_bitmap);
+    g_free(conn->initial_info.name);
     g_free(conn);
 }
 
+/*
+ * Connect to @addr and do NBD negotiation if @info is not NULL. If @tlscreds
+ * are given, @outioc is returned as well; it is set only on success. The call
+ * may be cancelled in parallel by simply calling qio_channel_shutdown(sioc).
+ */
+static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
+                       NBDExportInfo *info, QCryptoTLSCreds *tlscreds,
+                       QIOChannel **outioc, Error **errp)
+{
+    int ret;
+
+    if (outioc) {
+        *outioc = NULL;
+    }
+
+    ret = qio_channel_socket_connect_sync(sioc, addr, errp);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (!info) {
+        return 0;
+    }
+
+    ret = nbd_receive_negotiate(NULL, QIO_CHANNEL(sioc), tlscreds,
+                                tlscreds ? addr->u.inet.host : NULL,
+                                outioc, info, errp);
+    if (ret < 0) {
+        /*
+         * nbd_receive_negotiate() may set up a TLS ioc and return it even on
+         * the failure path. In this case we should use it instead of the
+         * original channel.
+         */
+        if (outioc && *outioc) {
+            qio_channel_close(QIO_CHANNEL(*outioc), NULL);
+            object_unref(OBJECT(*outioc));
+            *outioc = NULL;
+        } else {
+            qio_channel_close(QIO_CHANNEL(sioc), NULL);
+        }
+
+        return ret;
+    }
+
+    return 0;
+}
+
 static void *connect_thread_func(void *opaque)
 {
     NBDClientConnection *conn = opaque;
@@ -81,12 +149,19 @@ static void *connect_thread_func(void *opaque)
 
     error_free(conn->err);
     conn->err = NULL;
-    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
+    conn->updated_info = conn->initial_info;
+
+    ret = nbd_connect(conn->sioc, conn->saddr,
+                      conn->do_negotiation ? &conn->updated_info : NULL,
+                      conn->tlscreds, &conn->ioc, &conn->err);
     if (ret < 0) {
         object_unref(OBJECT(conn->sioc));
         conn->sioc = NULL;
     }
 
+    conn->updated_info.x_dirty_bitmap = NULL;
+    conn->updated_info.name = NULL;
+
     WITH_QEMU_LOCK_GUARD(&conn->mutex) {
         assert(conn->running);
         conn->running = false;
@@ -94,8 +169,8 @@ static void *connect_thread_func(void *opaque)
             aio_co_schedule(NULL, conn->wait_co);
             conn->wait_co = NULL;
         }
+        do_free = conn->detached;
     }
-    do_free = conn->detached;
 
 
     if (do_free) {
@@ -131,12 +206,24 @@ void nbd_client_connection_release(NBDClientConnection *conn)
  *   if the thread has already succeeded in the background, and the user
  *     hasn't picked up the result yet, just return it now;
  *   otherwise start a new thread and wait for its completion.
+ *
+ * If @info is not NULL, also do NBD negotiation after a successful connection.
+ * In this case @info is used only as an out parameter, and is fully
+ * initialized by nbd_co_establish_connection(). The "IN" fields of @info, as
+ * well as those used only by nbd_receive_export_list(), will be zero (see the
+ * description of NBDExportInfo in include/block/nbd.h).
  */
 QIOChannelSocket *coroutine_fn
-nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
+nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
+                            QIOChannel **ioc, Error **errp)
 {
     QemuThread thread;
 
+    if (conn->do_negotiation) {
+        assert(info);
+        assert(ioc);
+    }
+
     WITH_QEMU_LOCK_GUARD(&conn->mutex) {
         /*
          * Don't call nbd_co_establish_connection() in several coroutines in
@@ -147,6 +234,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
         if (!conn->running) {
             if (conn->sioc) {
                 /* Previous attempt finally succeeded in background */
+                if (conn->do_negotiation) {
+                    *ioc = g_steal_pointer(&conn->ioc);
+                    memcpy(info, &conn->updated_info, sizeof(*info));
+                }
                 return g_steal_pointer(&conn->sioc);
             }
 
@@ -178,6 +269,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
         } else {
             error_propagate(errp, conn->err);
             conn->err = NULL;
+            if (conn->sioc && conn->do_negotiation) {
+                *ioc = g_steal_pointer(&conn->ioc);
+                memcpy(info, &conn->updated_info, sizeof(*info));
+            }
             return g_steal_pointer(&conn->sioc);
         }
     }
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 17/33] nbd/client-connection: implement connection retry
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (15 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-05-11 20:54   ` Roman Kagan
  2021-06-03 16:17   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 18/33] nbd/client-connection: shutdown connection on release Vladimir Sementsov-Ogievskiy
                   ` (16 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Add an option for the thread to retry the connection until it succeeds.
We'll use nbd/client-connection both for reconnect and for the initial
connection in nbd_open(), so we need the ability to use the same
NBDClientConnection instance to connect once in nbd_open() and then
switch to retry semantics for reconnect.
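
The retry policy is a plain capped exponential backoff; stripped of the connection details, the loop is equivalent to this sketch (try_connect() and retry_enabled are illustrative stand-ins):

    #include "qemu/osdep.h"

    extern int try_connect(void); /* stand-in for nbd_connect() */
    extern bool retry_enabled;    /* stand-in for conn->do_retry */

    static void connect_with_backoff(void)
    {
        uint64_t timeout = 1;
        const uint64_t max_timeout = 16;

        while (try_connect() < 0 && retry_enabled) {
            sleep(timeout); /* waits 1, 2, 4, 8, 16, 16, ... seconds */
            if (timeout < max_timeout) {
                timeout *= 2;
            }
        }
    }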

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h     |  2 ++
 nbd/client-connection.c | 55 +++++++++++++++++++++++++++++------------
 2 files changed, 41 insertions(+), 16 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 5d86e6a393..5bb54d831c 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -409,6 +409,8 @@ const char *nbd_err_lookup(int err);
 /* nbd/client-connection.c */
 typedef struct NBDClientConnection NBDClientConnection;
 
+void nbd_client_connection_enable_retry(NBDClientConnection *conn);
+
 NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
                                                bool do_negotiation,
                                                const char *export_name,
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index ae4a77f826..002bd91f42 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -36,6 +36,8 @@ struct NBDClientConnection {
     NBDExportInfo initial_info;
     bool do_negotiation;
 
+    bool do_retry;
+
     /*
      * Result of last attempt. Valid in FAIL and SUCCESS states.
      * If you want to steal error, don't forget to set pointer to NULL.
@@ -52,6 +54,15 @@ struct NBDClientConnection {
     Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
 };
 
+/*
+ * The function isn't protected by any mutex, so call it only while the
+ * thread is not running.
+ */
+void nbd_client_connection_enable_retry(NBDClientConnection *conn)
+{
+    conn->do_retry = true;
+}
+
 NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
                                                bool do_negotiation,
                                                const char *export_name,
@@ -144,24 +155,37 @@ static void *connect_thread_func(void *opaque)
     NBDClientConnection *conn = opaque;
     bool do_free;
     int ret;
+    uint64_t timeout = 1;
+    uint64_t max_timeout = 16;
+
+    while (true) {
+        conn->sioc = qio_channel_socket_new();
+
+        error_free(conn->err);
+        conn->err = NULL;
+        conn->updated_info = conn->initial_info;
+
+        ret = nbd_connect(conn->sioc, conn->saddr,
+                          conn->do_negotiation ? &conn->updated_info : NULL,
+                          conn->tlscreds, &conn->ioc, &conn->err);
+        conn->updated_info.x_dirty_bitmap = NULL;
+        conn->updated_info.name = NULL;
+
+        if (ret < 0) {
+            object_unref(OBJECT(conn->sioc));
+            conn->sioc = NULL;
+            if (conn->do_retry) {
+                sleep(timeout);
+                if (timeout < max_timeout) {
+                    timeout *= 2;
+                }
+                continue;
+            }
+        }
 
-    conn->sioc = qio_channel_socket_new();
-
-    error_free(conn->err);
-    conn->err = NULL;
-    conn->updated_info = conn->initial_info;
-
-    ret = nbd_connect(conn->sioc, conn->saddr,
-                      conn->do_negotiation ? &conn->updated_info : NULL,
-                      conn->tlscreds, &conn->ioc, &conn->err);
-    if (ret < 0) {
-        object_unref(OBJECT(conn->sioc));
-        conn->sioc = NULL;
+        break;
     }
 
-    conn->updated_info.x_dirty_bitmap = NULL;
-    conn->updated_info.name = NULL;
-
     WITH_QEMU_LOCK_GUARD(&conn->mutex) {
         assert(conn->running);
         conn->running = false;
@@ -172,7 +196,6 @@ static void *connect_thread_func(void *opaque)
         do_free = conn->detached;
     }
 
-
     if (do_free) {
         nbd_client_connection_do_free(conn);
     }
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 18/33] nbd/client-connection: shutdown connection on release
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (16 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 17/33] nbd/client-connection: implement connection retry Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-05-11 21:08   ` Roman Kagan
  2021-06-03 16:20   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake() Vladimir Sementsov-Ogievskiy
                   ` (15 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Now that the thread can do negotiation and retry, it may run for a
relatively long time. We need a mechanism to stop it when the user is
no longer interested in the result. So, on nbd_client_connection_release(),
shut down the socket, and do not retry the connection if the thread is
detached.
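
The mechanism relies on qio_channel_shutdown() being safe to call from another thread; the release side boils down to this sketch (interrupt_attempt() is an invented name):

    #include "qemu/osdep.h"
    #include "io/channel.h"
    #include "io/channel-socket.h"

    /* sketch: interrupt a thread blocked in connect or negotiation */
    static void interrupt_attempt(QIOChannelSocket *sioc)
    {
        /*
         * Shutting down both directions is intended to make blocking I/O
         * on the channel fail promptly, so the detached thread can notice
         * conn->detached and exit instead of retrying.
         */
        qio_channel_shutdown(QIO_CHANNEL(sioc),
                             QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
    }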

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 nbd/client-connection.c | 36 ++++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)

diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 002bd91f42..54f73c6c24 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -158,9 +158,13 @@ static void *connect_thread_func(void *opaque)
     uint64_t timeout = 1;
     uint64_t max_timeout = 16;
 
-    while (true) {
+    qemu_mutex_lock(&conn->mutex);
+    while (!conn->detached) {
+        assert(!conn->sioc);
         conn->sioc = qio_channel_socket_new();
 
+        qemu_mutex_unlock(&conn->mutex);
+
         error_free(conn->err);
         conn->err = NULL;
         conn->updated_info = conn->initial_info;
@@ -171,14 +175,20 @@ static void *connect_thread_func(void *opaque)
         conn->updated_info.x_dirty_bitmap = NULL;
         conn->updated_info.name = NULL;
 
+        qemu_mutex_lock(&conn->mutex);
+
         if (ret < 0) {
             object_unref(OBJECT(conn->sioc));
             conn->sioc = NULL;
-            if (conn->do_retry) {
+            if (conn->do_retry && !conn->detached) {
+                qemu_mutex_unlock(&conn->mutex);
+
                 sleep(timeout);
                 if (timeout < max_timeout) {
                     timeout *= 2;
                 }
+
+                qemu_mutex_lock(&conn->mutex);
                 continue;
             }
         }
@@ -186,15 +196,17 @@ static void *connect_thread_func(void *opaque)
         break;
     }
 
-    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
-        assert(conn->running);
-        conn->running = false;
-        if (conn->wait_co) {
-            aio_co_schedule(NULL, conn->wait_co);
-            conn->wait_co = NULL;
-        }
-        do_free = conn->detached;
+    /* mutex is locked */
+
+    assert(conn->running);
+    conn->running = false;
+    if (conn->wait_co) {
+        aio_co_schedule(NULL, conn->wait_co);
+        conn->wait_co = NULL;
     }
+    do_free = conn->detached;
+
+    qemu_mutex_unlock(&conn->mutex);
 
     if (do_free) {
         nbd_client_connection_do_free(conn);
@@ -215,6 +227,10 @@ void nbd_client_connection_release(NBDClientConnection *conn)
     if (conn->running) {
         conn->detached = true;
     }
+    if (conn->sioc) {
+        qio_channel_shutdown(QIO_CHANNEL(conn->sioc),
+                             QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
+    }
     do_free = !conn->running && !conn->detached;
     qemu_mutex_unlock(&conn->mutex);
 
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake()
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (17 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 18/33] nbd/client-connection: shutdown connection on release Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-05-12  8:40   ` Roman Kagan
  2021-06-03 16:29   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 20/33] block/nbd: use negotiation of NBDClientConnection Vladimir Sementsov-Ogievskiy
                   ` (14 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

To be reused in the following patch.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 99 ++++++++++++++++++++++++++++++-----------------------
 1 file changed, 57 insertions(+), 42 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 5e63caaf4b..03ffe95231 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -318,6 +318,50 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
     return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
 }
 
+/*
+ * Check s->info as filled in by the negotiation process, and update
+ * @bs according to the new options.
+ */
+static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
+{
+    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
+    int ret;
+
+    if (s->x_dirty_bitmap) {
+        if (!s->info.base_allocation) {
+            error_setg(errp, "requested x-dirty-bitmap %s not found",
+                       s->x_dirty_bitmap);
+            return -EINVAL;
+        }
+        if (strcmp(s->x_dirty_bitmap, "qemu:allocation-depth") == 0) {
+            s->alloc_depth = true;
+        }
+    }
+
+    if (s->info.flags & NBD_FLAG_READ_ONLY) {
+        ret = bdrv_apply_auto_read_only(bs, "NBD export is read-only", errp);
+        if (ret < 0) {
+            return ret;
+        }
+    }
+
+    if (s->info.flags & NBD_FLAG_SEND_FUA) {
+        bs->supported_write_flags = BDRV_REQ_FUA;
+        bs->supported_zero_flags |= BDRV_REQ_FUA;
+    }
+
+    if (s->info.flags & NBD_FLAG_SEND_WRITE_ZEROES) {
+        bs->supported_zero_flags |= BDRV_REQ_MAY_UNMAP;
+        if (s->info.flags & NBD_FLAG_SEND_FAST_ZERO) {
+            bs->supported_zero_flags |= BDRV_REQ_NO_FALLBACK;
+        }
+    }
+
+    trace_nbd_client_handshake_success(s->export);
+
+    return 0;
+}
+
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
     int ret;
@@ -1579,49 +1623,13 @@ static int nbd_client_handshake(BlockDriverState *bs, Error **errp)
         s->sioc = NULL;
         return ret;
     }
-    if (s->x_dirty_bitmap) {
-        if (!s->info.base_allocation) {
-            error_setg(errp, "requested x-dirty-bitmap %s not found",
-                       s->x_dirty_bitmap);
-            ret = -EINVAL;
-            goto fail;
-        }
-        if (strcmp(s->x_dirty_bitmap, "qemu:allocation-depth") == 0) {
-            s->alloc_depth = true;
-        }
-    }
-    if (s->info.flags & NBD_FLAG_READ_ONLY) {
-        ret = bdrv_apply_auto_read_only(bs, "NBD export is read-only", errp);
-        if (ret < 0) {
-            goto fail;
-        }
-    }
-    if (s->info.flags & NBD_FLAG_SEND_FUA) {
-        bs->supported_write_flags = BDRV_REQ_FUA;
-        bs->supported_zero_flags |= BDRV_REQ_FUA;
-    }
-    if (s->info.flags & NBD_FLAG_SEND_WRITE_ZEROES) {
-        bs->supported_zero_flags |= BDRV_REQ_MAY_UNMAP;
-        if (s->info.flags & NBD_FLAG_SEND_FAST_ZERO) {
-            bs->supported_zero_flags |= BDRV_REQ_NO_FALLBACK;
-        }
-    }
 
-    if (!s->ioc) {
-        s->ioc = QIO_CHANNEL(s->sioc);
-        object_ref(OBJECT(s->ioc));
-    }
-
-    trace_nbd_client_handshake_success(s->export);
-
-    return 0;
-
- fail:
-    /*
-     * We have connected, but must fail for other reasons.
-     * Send NBD_CMD_DISC as a courtesy to the server.
-     */
-    {
+    ret = nbd_handle_updated_info(bs, errp);
+    if (ret < 0) {
+        /*
+         * We have connected, but must fail for other reasons.
+         * Send NBD_CMD_DISC as a courtesy to the server.
+         */
         NBDRequest request = { .type = NBD_CMD_DISC };
 
         nbd_send_request(s->ioc ?: QIO_CHANNEL(s->sioc), &request);
@@ -1635,6 +1643,13 @@ static int nbd_client_handshake(BlockDriverState *bs, Error **errp)
 
         return ret;
     }
+
+    if (!s->ioc) {
+        s->ioc = QIO_CHANNEL(s->sioc);
+        object_ref(OBJECT(s->ioc));
+    }
+
+    return 0;
 }
 
 /*
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 20/33] block/nbd: use negotiation of NBDClientConnection
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (18 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake() Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 18:05   ` Eric Blake
  2021-04-16  8:08 ` [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly Vladimir Sementsov-Ogievskiy
                   ` (13 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Use the new negotiation capability of the connect thread. We are on the
way to simplifying connection_co: we want to move the whole reconnect
logic into NBDClientConnection. NBDClientConnection is already updated
to support negotiation and retry; start by using the former here.
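
Since negotiation now happens in the connect thread, the returned channels arrive with no AioContext attached; the caller adopts them with a pattern like this sketch (adopt_channels() is an invented helper):

    #include "qemu/osdep.h"
    #include "io/channel.h"
    #include "io/channel-socket.h"
    #include "block/aio.h"

    static void adopt_channels(QIOChannelSocket *sioc, QIOChannel *ioc,
                               AioContext *ctx)
    {
        qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
        qio_channel_attach_aio_context(QIO_CHANNEL(sioc), ctx);

        if (ioc) {
            /* TLS channel wrapping the socket: attach it as well */
            qio_channel_set_blocking(ioc, false, NULL);
            qio_channel_attach_aio_context(ioc, ctx);
        }
    }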

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 44 ++++++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 03ffe95231..c1e61a2a52 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -365,6 +365,7 @@ static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
     int ret;
+    AioContext *aio_context = bdrv_get_aio_context(s->bs);
 
     if (!nbd_client_connecting(s)) {
         return;
@@ -405,30 +406,44 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         s->ioc = NULL;
     }
 
-    s->sioc = nbd_co_establish_connection(s->conn, NULL, NULL, NULL);
+    s->sioc = nbd_co_establish_connection(s->conn, &s->info, &s->ioc, NULL);
     if (!s->sioc) {
         ret = -ECONNREFUSED;
         goto out;
     }
 
+    qio_channel_set_blocking(QIO_CHANNEL(s->sioc), false, NULL);
+    qio_channel_attach_aio_context(QIO_CHANNEL(s->sioc), aio_context);
+    if (s->ioc) {
+        qio_channel_set_blocking(QIO_CHANNEL(s->ioc), false, NULL);
+        qio_channel_attach_aio_context(QIO_CHANNEL(s->ioc), aio_context);
+    } else {
+        s->ioc = QIO_CHANNEL(s->sioc);
+        object_ref(OBJECT(s->ioc));
+    }
+
     yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
                            s->bs);
 
-    bdrv_dec_in_flight(s->bs);
+    ret = nbd_handle_updated_info(s->bs, NULL);
+    if (ret < 0) {
+        /*
+         * We have connected, but must fail for other reasons.
+         * Send NBD_CMD_DISC as a courtesy to the server.
+         */
+        NBDRequest request = { .type = NBD_CMD_DISC };
 
-    ret = nbd_client_handshake(s->bs, NULL);
+        nbd_send_request(s->ioc, &request);
 
-    if (s->drained) {
-        s->wait_drained_end = true;
-        while (s->drained) {
-            /*
-             * We may be entered once from nbd_client_attach_aio_context_bh
-             * and then from nbd_client_co_drain_end. So here is a loop.
-             */
-            qemu_coroutine_yield();
-        }
+        yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
+                                 nbd_yank, s->bs);
+        object_unref(OBJECT(s->sioc));
+        s->sioc = NULL;
+        object_unref(OBJECT(s->ioc));
+        s->ioc = NULL;
+
+        return;
     }
-    bdrv_inc_in_flight(s->bs);
 
 out:
     if (ret >= 0) {
@@ -2048,7 +2063,8 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
         goto fail;
     }
 
-    s->conn = nbd_client_connection_new(s->saddr, false, NULL, NULL, NULL);
+    s->conn = nbd_client_connection_new(s->saddr, true, s->export,
+                                        s->x_dirty_bitmap, s->tlscreds);
 
     /*
      * establish TCP connection, return error if it fails
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (19 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 20/33] block/nbd: use negotiation of NBDClientConnection Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:08 ` Vladimir Sementsov-Ogievskiy
  2021-04-19  9:34   ` Daniel P. Berrangé
  2021-04-16  8:09 ` [PATCH v3 22/33] block/nbd: pass monitor directly to connection thread Vladimir Sementsov-Ogievskiy
                   ` (12 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:08 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den,
	Daniel P. Berrangé,
	Gerd Hoffmann

Detecting the monitor from the current coroutine works badly when we
are not in coroutine context, which is exactly the case in the NBD
reconnect code, where qio_channel_socket_connect_sync() is called from
a thread.

Add the possibility to pass the monitor explicitly, to be used in the
following commit.
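
A sketch of the intended call from a background thread (thread_connect() is an invented wrapper; in coroutine or main-loop context the monitor argument can stay NULL and auto-detection keeps working):

    #include "qemu/osdep.h"
    #include "io/channel-socket.h"
    #include "monitor/monitor.h"

    /* sketch: connect from a thread with the monitor passed explicitly */
    static int thread_connect(QIOChannelSocket *sioc, SocketAddress *addr,
                              Monitor *mon, Error **errp)
    {
        /*
         * Outside coroutine context the current monitor cannot be
         * auto-detected, so hand it down; it is needed to resolve "fd:"
         * socket addresses that name monitor-owned file descriptors.
         */
        return qio_channel_socket_connect_sync_mon(sioc, addr, mon, errp);
    }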

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/io/channel-socket.h    | 20 ++++++++++++++++++++
 include/qemu/sockets.h         |  2 +-
 io/channel-socket.c            | 17 +++++++++++++----
 tests/unit/test-util-sockets.c | 16 ++++++++--------
 util/qemu-sockets.c            | 10 +++++-----
 5 files changed, 47 insertions(+), 18 deletions(-)

diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
index e747e63514..6d0915420d 100644
--- a/include/io/channel-socket.h
+++ b/include/io/channel-socket.h
@@ -78,6 +78,23 @@ qio_channel_socket_new_fd(int fd,
                           Error **errp);
 
 
+/**
+ * qio_channel_socket_connect_sync_mon:
+ * @ioc: the socket channel object
+ * @addr: the address to connect to
+ * @mon: the current monitor. If NULL, it will be detected from the
+ *       current coroutine.
+ * @errp: pointer to a NULL-initialized error object
+ *
+ * Attempt to connect to the address @addr. This method
+ * will run in the foreground so the caller will not regain
+ * execution control until the connection is established or
+ * an error occurs.
+ */
+int qio_channel_socket_connect_sync_mon(QIOChannelSocket *ioc,
+                                        SocketAddress *addr,
+                                        Monitor *mon,
+                                        Error **errp);
 /**
  * qio_channel_socket_connect_sync:
  * @ioc: the socket channel object
@@ -88,6 +105,9 @@ qio_channel_socket_new_fd(int fd,
  * will run in the foreground so the caller will not regain
  * execution control until the connection is established or
  * an error occurs.
+ *
+ * This is a wrapper that calls qio_channel_socket_connect_sync_mon()
+ * with @mon=NULL.
  */
 int qio_channel_socket_connect_sync(QIOChannelSocket *ioc,
                                     SocketAddress *addr,
diff --git a/include/qemu/sockets.h b/include/qemu/sockets.h
index 7d1f813576..cdf6f2413b 100644
--- a/include/qemu/sockets.h
+++ b/include/qemu/sockets.h
@@ -41,7 +41,7 @@ int unix_listen(const char *path, Error **errp);
 int unix_connect(const char *path, Error **errp);
 
 SocketAddress *socket_parse(const char *str, Error **errp);
-int socket_connect(SocketAddress *addr, Error **errp);
+int socket_connect(SocketAddress *addr, Monitor *mon, Error **errp);
 int socket_listen(SocketAddress *addr, int num, Error **errp);
 void socket_listen_cleanup(int fd, Error **errp);
 int socket_dgram(SocketAddress *remote, SocketAddress *local, Error **errp);
diff --git a/io/channel-socket.c b/io/channel-socket.c
index de259f7eed..9dc42ca29d 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -135,14 +135,15 @@ qio_channel_socket_new_fd(int fd,
 }
 
 
-int qio_channel_socket_connect_sync(QIOChannelSocket *ioc,
-                                    SocketAddress *addr,
-                                    Error **errp)
+int qio_channel_socket_connect_sync_mon(QIOChannelSocket *ioc,
+                                        SocketAddress *addr,
+                                        Monitor *mon,
+                                        Error **errp)
 {
     int fd;
 
     trace_qio_channel_socket_connect_sync(ioc, addr);
-    fd = socket_connect(addr, errp);
+    fd = socket_connect(addr, mon, errp);
     if (fd < 0) {
         trace_qio_channel_socket_connect_fail(ioc);
         return -1;
@@ -158,6 +159,14 @@ int qio_channel_socket_connect_sync(QIOChannelSocket *ioc,
 }
 
 
+int qio_channel_socket_connect_sync(QIOChannelSocket *ioc,
+                                    SocketAddress *addr,
+                                    Error **errp)
+{
+    return qio_channel_socket_connect_sync_mon(ioc, addr, NULL, errp);
+}
+
+
 static void qio_channel_socket_connect_worker(QIOTask *task,
                                               gpointer opaque)
 {
diff --git a/tests/unit/test-util-sockets.c b/tests/unit/test-util-sockets.c
index 72b9246529..d902ecede7 100644
--- a/tests/unit/test-util-sockets.c
+++ b/tests/unit/test-util-sockets.c
@@ -90,7 +90,7 @@ static void test_socket_fd_pass_name_good(void)
     addr.type = SOCKET_ADDRESS_TYPE_FD;
     addr.u.fd.str = g_strdup(mon_fdname);
 
-    fd = socket_connect(&addr, &error_abort);
+    fd = socket_connect(&addr, NULL, &error_abort);
     g_assert_cmpint(fd, !=, -1);
     g_assert_cmpint(fd, !=, mon_fd);
     close(fd);
@@ -122,7 +122,7 @@ static void test_socket_fd_pass_name_bad(void)
     addr.type = SOCKET_ADDRESS_TYPE_FD;
     addr.u.fd.str = g_strdup(mon_fdname);
 
-    fd = socket_connect(&addr, &err);
+    fd = socket_connect(&addr, NULL, &err);
     g_assert_cmpint(fd, ==, -1);
     error_free_or_abort(&err);
 
@@ -149,7 +149,7 @@ static void test_socket_fd_pass_name_nomon(void)
     addr.type = SOCKET_ADDRESS_TYPE_FD;
     addr.u.fd.str = g_strdup("myfd");
 
-    fd = socket_connect(&addr, &err);
+    fd = socket_connect(&addr, NULL, &err);
     g_assert_cmpint(fd, ==, -1);
     error_free_or_abort(&err);
 
@@ -173,7 +173,7 @@ static void test_socket_fd_pass_num_good(void)
     addr.type = SOCKET_ADDRESS_TYPE_FD;
     addr.u.fd.str = g_strdup_printf("%d", sfd);
 
-    fd = socket_connect(&addr, &error_abort);
+    fd = socket_connect(&addr, NULL, &error_abort);
     g_assert_cmpint(fd, ==, sfd);
 
     fd = socket_listen(&addr, 1, &error_abort);
@@ -195,7 +195,7 @@ static void test_socket_fd_pass_num_bad(void)
     addr.type = SOCKET_ADDRESS_TYPE_FD;
     addr.u.fd.str = g_strdup_printf("%d", sfd);
 
-    fd = socket_connect(&addr, &err);
+    fd = socket_connect(&addr, NULL, &err);
     g_assert_cmpint(fd, ==, -1);
     error_free_or_abort(&err);
 
@@ -218,7 +218,7 @@ static void test_socket_fd_pass_num_nocli(void)
     addr.type = SOCKET_ADDRESS_TYPE_FD;
     addr.u.fd.str = g_strdup_printf("%d", STDOUT_FILENO);
 
-    fd = socket_connect(&addr, &err);
+    fd = socket_connect(&addr, NULL, &err);
     g_assert_cmpint(fd, ==, -1);
     error_free_or_abort(&err);
 
@@ -247,10 +247,10 @@ static gpointer unix_client_thread_func(gpointer user_data)
 
     for (i = 0; i < ABSTRACT_SOCKET_VARIANTS; i++) {
         if (row->expect_connect[i]) {
-            fd = socket_connect(row->client[i], &error_abort);
+            fd = socket_connect(row->client[i], NULL, &error_abort);
             g_assert_cmpint(fd, >=, 0);
         } else {
-            fd = socket_connect(row->client[i], &err);
+            fd = socket_connect(row->client[i], NULL, &err);
             g_assert_cmpint(fd, ==, -1);
             error_free_or_abort(&err);
         }
diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
index 8af0278f15..8b7e3cc7bf 100644
--- a/util/qemu-sockets.c
+++ b/util/qemu-sockets.c
@@ -1116,9 +1116,9 @@ fail:
     return NULL;
 }
 
-static int socket_get_fd(const char *fdstr, int num, Error **errp)
+static int socket_get_fd(const char *fdstr, int num, Monitor *mon, Error **errp)
 {
-    Monitor *cur_mon = monitor_cur();
+    Monitor *cur_mon = mon ?: monitor_cur();
     int fd;
     if (num != 1) {
         error_setg_errno(errp, EINVAL, "socket_get_fd: too many connections");
@@ -1145,7 +1145,7 @@ static int socket_get_fd(const char *fdstr, int num, Error **errp)
     return fd;
 }
 
-int socket_connect(SocketAddress *addr, Error **errp)
+int socket_connect(SocketAddress *addr, Monitor *mon, Error **errp)
 {
     int fd;
 
@@ -1159,7 +1159,7 @@ int socket_connect(SocketAddress *addr, Error **errp)
         break;
 
     case SOCKET_ADDRESS_TYPE_FD:
-        fd = socket_get_fd(addr->u.fd.str, 1, errp);
+        fd = socket_get_fd(addr->u.fd.str, 1, mon, errp);
         break;
 
     case SOCKET_ADDRESS_TYPE_VSOCK:
@@ -1187,7 +1187,7 @@ int socket_listen(SocketAddress *addr, int num, Error **errp)
         break;
 
     case SOCKET_ADDRESS_TYPE_FD:
-        fd = socket_get_fd(addr->u.fd.str, num, errp);
+        fd = socket_get_fd(addr->u.fd.str, num, NULL, errp);
         break;
 
     case SOCKET_ADDRESS_TYPE_VSOCK:
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 22/33] block/nbd: pass monitor directly to connection thread
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (20 preceding siblings ...)
  2021-04-16  8:08 ` [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 18:16   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 23/33] block/nbd: nbd_teardown_connection() don't touch s->sioc Vladimir Sementsov-Ogievskiy
                   ` (11 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

monitor_cur() is used by socket_get_fd(), but it doesn't work in the
connection thread. Let's pass the monitor directly to cover this case.
We are going to unify the connection-establishing path of nbd_open()
and reconnect, so we should support fd-passing there too.
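The resulting call chain, roughly (a reading aid, not actual code):

    /*
     * nbd_open()                       -- main thread, monitor known
     *   -> nbd_client_connection_new(..., monitor_cur())
     * connect_thread_func()            -- detached thread, no monitor
     *   -> nbd_connect(..., conn->mon, ...)
     *     -> qio_channel_socket_connect_sync_mon(sioc, addr, mon, errp)
     *       -> socket_connect(addr, mon, errp)
     *         -> socket_get_fd(fdstr, 1, mon, errp) -- mon ?: monitor_cur()
     */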

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h     |  3 ++-
 block/nbd.c             |  5 ++++-
 nbd/client-connection.c | 11 +++++++----
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 5bb54d831c..10756d2544 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -415,7 +415,8 @@ NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
                                                bool do_negotiation,
                                                const char *export_name,
                                                const char *x_dirty_bitmap,
-                                               QCryptoTLSCreds *tlscreds);
+                                               QCryptoTLSCreds *tlscreds,
+                                               Monitor *mon);
 void nbd_client_connection_release(NBDClientConnection *conn);
 
 QIOChannelSocket *coroutine_fn
diff --git a/block/nbd.c b/block/nbd.c
index c1e61a2a52..ec69a4ad65 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -45,6 +45,8 @@
 #include "block/nbd.h"
 #include "block/block_int.h"
 
+#include "monitor/monitor.h"
+
 #include "qemu/yank.h"
 
 #define EN_OPTSTR ":exportname="
@@ -2064,7 +2066,8 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
     }
 
     s->conn = nbd_client_connection_new(s->saddr, true, s->export,
-                                        s->x_dirty_bitmap, s->tlscreds);
+                                        s->x_dirty_bitmap, s->tlscreds,
+                                        monitor_cur());
 
     /*
      * establish TCP connection, return error if it fails
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 54f73c6c24..c26cd59464 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -37,6 +37,7 @@ struct NBDClientConnection {
     bool do_negotiation;
 
     bool do_retry;
+    Monitor *mon;
 
     /*
      * Result of last attempt. Valid in FAIL and SUCCESS states.
@@ -67,7 +68,8 @@ NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
                                                bool do_negotiation,
                                                const char *export_name,
                                                const char *x_dirty_bitmap,
-                                               QCryptoTLSCreds *tlscreds)
+                                               QCryptoTLSCreds *tlscreds,
+                                               Monitor *mon)
 {
     NBDClientConnection *conn = g_new(NBDClientConnection, 1);
 
@@ -76,6 +78,7 @@ NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
         .saddr = QAPI_CLONE(SocketAddress, saddr),
         .tlscreds = tlscreds,
         .do_negotiation = do_negotiation,
+        .mon = mon,
 
         .initial_info.request_sizes = true,
         .initial_info.structured_reply = true,
@@ -110,7 +113,7 @@ static void nbd_client_connection_do_free(NBDClientConnection *conn)
  */
 static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
                        NBDExportInfo *info, QCryptoTLSCreds *tlscreds,
-                       QIOChannel **outioc, Error **errp)
+                       QIOChannel **outioc, Monitor *mon, Error **errp)
 {
     int ret;
 
@@ -118,7 +121,7 @@ static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
         *outioc = NULL;
     }
 
-    ret = qio_channel_socket_connect_sync(sioc, addr, errp);
+    ret = qio_channel_socket_connect_sync_mon(sioc, addr, mon, errp);
     if (ret < 0) {
         return ret;
     }
@@ -171,7 +174,7 @@ static void *connect_thread_func(void *opaque)
 
         ret = nbd_connect(conn->sioc, conn->saddr,
                           conn->do_negotiation ? &conn->updated_info : NULL,
-                          conn->tlscreds, &conn->ioc, &conn->err);
+                          conn->tlscreds, &conn->ioc, conn->mon, &conn->err);
         conn->updated_info.x_dirty_bitmap = NULL;
         conn->updated_info.name = NULL;
 
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 23/33] block/nbd: nbd_teardown_connection() don't touch s->sioc
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (21 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 22/33] block/nbd: pass monitor directly to connection thread Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 19:04   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 24/33] block/nbd: drop BDRVNBDState::sioc Vladimir Sementsov-Ogievskiy
                   ` (10 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Negotiation during reconnect is now done in a thread, and s->sioc is
not available during negotiation. Negotiation in the thread will be
cancelled by nbd_client_connection_release(), called from
nbd_clear_bdrvstate(). So, we don't need this code chunk anymore.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index ec69a4ad65..cece53313c 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -284,10 +284,6 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     if (s->ioc) {
         /* finish any pending coroutines */
         qio_channel_shutdown(s->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
-    } else if (s->sioc) {
-        /* abort negotiation */
-        qio_channel_shutdown(QIO_CHANNEL(s->sioc), QIO_CHANNEL_SHUTDOWN_BOTH,
-                             NULL);
     }
 
     s->state = NBD_CLIENT_QUIT;
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 24/33] block/nbd: drop BDRVNBDState::sioc
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (22 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 23/33] block/nbd: nbd_teardown_connection() don't touch s->sioc Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 19:12   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 25/33] nbd/client-connection: return only one io channel Vladimir Sementsov-Ogievskiy
                   ` (9 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Currently the sioc pointer is used just to pass the socket from
connection to nbd negotiation. Drop the field, and use local variables
instead. With the next commit we'll update nbd/client-connection.c to
behave appropriately (return only the top-most ioc, not two channels).
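The ownership pattern that repeats in the hunks below, condensed (see
nbd_client_handshake() below for the real thing): when negotiation
produced a TLS channel, that channel already holds its own reference
to the socket channel, so our local reference can be dropped;
otherwise the socket channel itself becomes the top-most channel:

    if (s->ioc) {
        /* TLS case: s->ioc wraps sioc and references it */
        object_unref(OBJECT(sioc));
    } else {
        /* plain TCP: the socket channel is the top-most channel */
        s->ioc = QIO_CHANNEL(sioc);
    }
    sioc = NULL; /* ownership transferred either way */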

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 98 ++++++++++++++++++++++++++---------------------------
 1 file changed, 48 insertions(+), 50 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index cece53313c..f9b56c57b4 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -69,8 +69,7 @@ typedef enum NBDClientState {
 } NBDClientState;
 
 typedef struct BDRVNBDState {
-    QIOChannelSocket *sioc; /* The master data channel */
-    QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
+    QIOChannel *ioc; /* The current I/O channel */
     NBDExportInfo info;
 
     CoMutex send_mutex;
@@ -102,9 +101,11 @@ typedef struct BDRVNBDState {
     NBDClientConnection *conn;
 } BDRVNBDState;
 
-static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
-                                    Error **errp);
-static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
+static QIOChannelSocket *nbd_establish_connection(BlockDriverState *bs,
+                                                  SocketAddress *saddr,
+                                                  Error **errp);
+static int nbd_client_handshake(BlockDriverState *bs, QIOChannelSocket *sioc,
+                                Error **errp);
 static void nbd_yank(void *opaque);
 
 static void nbd_clear_bdrvstate(BlockDriverState *bs)
@@ -364,6 +365,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
     int ret;
     AioContext *aio_context = bdrv_get_aio_context(s->bs);
+    QIOChannelSocket *sioc;
 
     if (!nbd_client_connecting(s)) {
         return;
@@ -398,27 +400,26 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         qio_channel_detach_aio_context(QIO_CHANNEL(s->ioc));
         yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
                                  nbd_yank, s->bs);
-        object_unref(OBJECT(s->sioc));
-        s->sioc = NULL;
         object_unref(OBJECT(s->ioc));
         s->ioc = NULL;
     }
 
-    s->sioc = nbd_co_establish_connection(s->conn, &s->info, &s->ioc, NULL);
-    if (!s->sioc) {
+    sioc = nbd_co_establish_connection(s->conn, &s->info, &s->ioc, NULL);
+    if (!sioc) {
         ret = -ECONNREFUSED;
         goto out;
     }
 
-    qio_channel_set_blocking(QIO_CHANNEL(s->sioc), false, NULL);
-    qio_channel_attach_aio_context(QIO_CHANNEL(s->sioc), aio_context);
     if (s->ioc) {
-        qio_channel_set_blocking(QIO_CHANNEL(s->ioc), false, NULL);
-        qio_channel_attach_aio_context(QIO_CHANNEL(s->ioc), aio_context);
+        /* sioc is referenced by s->ioc */
+        object_unref(OBJECT(sioc));
     } else {
-        s->ioc = QIO_CHANNEL(s->sioc);
-        object_ref(OBJECT(s->ioc));
+        s->ioc = QIO_CHANNEL(sioc);
     }
+    sioc = NULL;
+
+    qio_channel_set_blocking(QIO_CHANNEL(s->ioc), false, NULL);
+    qio_channel_attach_aio_context(QIO_CHANNEL(s->ioc), aio_context);
 
     yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
                            s->bs);
@@ -435,8 +436,6 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 
         yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
                                  nbd_yank, s->bs);
-        object_unref(OBJECT(s->sioc));
-        s->sioc = NULL;
         object_unref(OBJECT(s->ioc));
         s->ioc = NULL;
 
@@ -571,8 +570,6 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
         qio_channel_detach_aio_context(QIO_CHANNEL(s->ioc));
         yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
                                  nbd_yank, s->bs);
-        object_unref(OBJECT(s->sioc));
-        s->sioc = NULL;
         object_unref(OBJECT(s->ioc));
         s->ioc = NULL;
     }
@@ -1571,7 +1568,7 @@ static void nbd_yank(void *opaque)
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
 
     qatomic_store_release(&s->state, NBD_CLIENT_QUIT);
-    qio_channel_shutdown(QIO_CHANNEL(s->sioc), QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
+    qio_channel_shutdown(QIO_CHANNEL(s->ioc), QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
 }
 
 static void nbd_client_close(BlockDriverState *bs)
@@ -1586,57 +1583,64 @@ static void nbd_client_close(BlockDriverState *bs)
     nbd_teardown_connection(bs);
 }
 
-static int nbd_establish_connection(BlockDriverState *bs,
-                                    SocketAddress *saddr,
-                                    Error **errp)
+static QIOChannelSocket *nbd_establish_connection(BlockDriverState *bs,
+                                                  SocketAddress *saddr,
+                                                  Error **errp)
 {
     ERRP_GUARD();
-    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
+    QIOChannelSocket *sioc;
 
-    s->sioc = qio_channel_socket_new();
-    qio_channel_set_name(QIO_CHANNEL(s->sioc), "nbd-client");
+    sioc = qio_channel_socket_new();
+    qio_channel_set_name(QIO_CHANNEL(sioc), "nbd-client");
 
-    qio_channel_socket_connect_sync(s->sioc, saddr, errp);
+    qio_channel_socket_connect_sync(sioc, saddr, errp);
     if (*errp) {
-        object_unref(OBJECT(s->sioc));
-        s->sioc = NULL;
-        return -1;
+        object_unref(OBJECT(sioc));
+        return NULL;
     }
 
     yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name), nbd_yank, bs);
-    qio_channel_set_delay(QIO_CHANNEL(s->sioc), false);
+    qio_channel_set_delay(QIO_CHANNEL(sioc), false);
 
-    return 0;
+    return sioc;
 }
 
-/* nbd_client_handshake takes ownership on s->sioc. On failure it's unref'ed. */
-static int nbd_client_handshake(BlockDriverState *bs, Error **errp)
+/* nbd_client_handshake takes ownership on sioc. */
+static int nbd_client_handshake(BlockDriverState *bs, QIOChannelSocket *sioc,
+                                Error **errp)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     AioContext *aio_context = bdrv_get_aio_context(bs);
     int ret;
 
     trace_nbd_client_handshake(s->export);
-    qio_channel_set_blocking(QIO_CHANNEL(s->sioc), false, NULL);
-    qio_channel_attach_aio_context(QIO_CHANNEL(s->sioc), aio_context);
+    qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
+    qio_channel_attach_aio_context(QIO_CHANNEL(sioc), aio_context);
 
     s->info.request_sizes = true;
     s->info.structured_reply = true;
     s->info.base_allocation = true;
     s->info.x_dirty_bitmap = g_strdup(s->x_dirty_bitmap);
     s->info.name = g_strdup(s->export ?: "");
-    ret = nbd_receive_negotiate(aio_context, QIO_CHANNEL(s->sioc), s->tlscreds,
+    ret = nbd_receive_negotiate(aio_context, QIO_CHANNEL(sioc), s->tlscreds,
                                 s->hostname, &s->ioc, &s->info, errp);
     g_free(s->info.x_dirty_bitmap);
     g_free(s->info.name);
     if (ret < 0) {
         yank_unregister_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
                                  nbd_yank, bs);
-        object_unref(OBJECT(s->sioc));
-        s->sioc = NULL;
+        object_unref(OBJECT(sioc));
         return ret;
     }
 
+    if (s->ioc) {
+        /* sioc is referenced by s->ioc */
+        object_unref(OBJECT(sioc));
+    } else {
+        s->ioc = QIO_CHANNEL(sioc);
+    }
+    sioc = NULL;
+
     ret = nbd_handle_updated_info(bs, errp);
     if (ret < 0) {
         /*
@@ -1645,23 +1649,15 @@ static int nbd_client_handshake(BlockDriverState *bs, Error **errp)
          */
         NBDRequest request = { .type = NBD_CMD_DISC };
 
-        nbd_send_request(s->ioc ?: QIO_CHANNEL(s->sioc), &request);
+        nbd_send_request(s->ioc, &request);
 
         yank_unregister_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
                                  nbd_yank, bs);
-        object_unref(OBJECT(s->sioc));
-        s->sioc = NULL;
         object_unref(OBJECT(s->ioc));
         s->ioc = NULL;
-
         return ret;
     }
 
-    if (!s->ioc) {
-        s->ioc = QIO_CHANNEL(s->sioc);
-        object_ref(OBJECT(s->ioc));
-    }
-
     return 0;
 }
 
@@ -2047,6 +2043,7 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
 {
     int ret;
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
+    QIOChannelSocket *sioc;
 
     s->bs = bs;
     qemu_co_mutex_init(&s->send_mutex);
@@ -2069,12 +2066,13 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
      * establish TCP connection, return error if it fails
      * TODO: Configurable retry-until-timeout behaviour.
      */
-    if (nbd_establish_connection(bs, s->saddr, errp) < 0) {
+    sioc = nbd_establish_connection(bs, s->saddr, errp);
+    if (!sioc) {
         ret = -ECONNREFUSED;
         goto fail;
     }
 
-    ret = nbd_client_handshake(bs, errp);
+    ret = nbd_client_handshake(bs, sioc, errp);
     if (ret < 0) {
         goto fail;
     }
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 25/33] nbd/client-connection: return only one io channel
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (23 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 24/33] block/nbd: drop BDRVNBDState::sioc Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 19:58   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix Vladimir Sementsov-Ogievskiy
                   ` (8 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

block/nbd doesn't need the underlying sioc channel anymore. So, we can
update the nbd/client-connection interface to return only one top-most
io channel, which is more straightforward.
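Caller side, condensed (before vs. after this patch):

    /* before: two channels returned; the caller had to juggle the
     * references itself */
    sioc = nbd_co_establish_connection(s->conn, &s->info, &s->ioc, NULL);

    /* after: a single top-most channel (possibly a TLS channel that
     * wraps the socket), with the reference counting handled inside */
    s->ioc = nbd_co_establish_connection(s->conn, &s->info, NULL);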

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h     |  4 ++--
 block/nbd.c             | 13 ++-----------
 nbd/client-connection.c | 33 +++++++++++++++++++++++++--------
 3 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 10756d2544..00bf08bade 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -419,9 +419,9 @@ NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
                                                Monitor *mon);
 void nbd_client_connection_release(NBDClientConnection *conn);
 
-QIOChannelSocket *coroutine_fn
+QIOChannel *coroutine_fn
 nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
-                            QIOChannel **ioc, Error **errp);
+                            Error **errp);
 
 void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
 
diff --git a/block/nbd.c b/block/nbd.c
index f9b56c57b4..15b5921725 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -365,7 +365,6 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
     int ret;
     AioContext *aio_context = bdrv_get_aio_context(s->bs);
-    QIOChannelSocket *sioc;
 
     if (!nbd_client_connecting(s)) {
         return;
@@ -404,20 +403,12 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         s->ioc = NULL;
     }
 
-    sioc = nbd_co_establish_connection(s->conn, &s->info, &s->ioc, NULL);
-    if (!sioc) {
+    s->ioc = nbd_co_establish_connection(s->conn, &s->info, NULL);
+    if (!s->ioc) {
         ret = -ECONNREFUSED;
         goto out;
     }
 
-    if (s->ioc) {
-        /* sioc is referenced by s->ioc */
-        object_unref(OBJECT(sioc));
-    } else {
-        s->ioc = QIO_CHANNEL(sioc);
-    }
-    sioc = NULL;
-
     qio_channel_set_blocking(QIO_CHANNEL(s->ioc), false, NULL);
     qio_channel_attach_aio_context(QIO_CHANNEL(s->ioc), aio_context);
 
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index c26cd59464..36d2c7c693 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -255,15 +255,15 @@ void nbd_client_connection_release(NBDClientConnection *conn)
  * nbd_receive_export_list() would be zero (see description of NBDExportInfo in
  * include/block/nbd.h).
  */
-QIOChannelSocket *coroutine_fn
+QIOChannel *coroutine_fn
 nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
-                            QIOChannel **ioc, Error **errp)
+                            Error **errp)
 {
+    QIOChannel *ioc = NULL;
     QemuThread thread;
 
     if (conn->do_negotiation) {
         assert(info);
-        assert(ioc);
     }
 
     WITH_QEMU_LOCK_GUARD(&conn->mutex) {
@@ -277,10 +277,17 @@ nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
             if (conn->sioc) {
                 /* Previous attempt finally succeeded in background */
                 if (conn->do_negotiation) {
-                    *ioc = g_steal_pointer(&conn->ioc);
+                    ioc = g_steal_pointer(&conn->ioc);
                     memcpy(info, &conn->updated_info, sizeof(*info));
                 }
-                return g_steal_pointer(&conn->sioc);
+                if (ioc) {
+                    /* TLS channel now has own reference to parent */
+                    object_unref(OBJECT(conn->sioc));
+                } else {
+                    ioc = QIO_CHANNEL(conn->sioc);
+                }
+                conn->sioc = NULL;
+                return ioc;
             }
 
             conn->running = true;
@@ -311,11 +318,21 @@ nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
         } else {
             error_propagate(errp, conn->err);
             conn->err = NULL;
-            if (conn->sioc && conn->do_negotiation) {
-                *ioc = g_steal_pointer(&conn->ioc);
+            if (!conn->sioc) {
+                return NULL;
+            }
+            if (conn->do_negotiation) {
+                ioc = g_steal_pointer(&conn->ioc);
                 memcpy(info, &conn->updated_info, sizeof(*info));
             }
-            return g_steal_pointer(&conn->sioc);
+            if (ioc) {
+                /* TLS channel now has own reference to parent */
+                object_unref(OBJECT(conn->sioc));
+            } else {
+                ioc = QIO_CHANNEL(conn->sioc);
+            }
+            conn->sioc = NULL;
+            return ioc;
         }
     }
 
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (24 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 25/33] nbd/client-connection: return only one io channel Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 20:00   ` Eric Blake
  2021-06-03 20:53   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 27/33] block/nbd: split nbd_co_do_establish_connection out of nbd_reconnect_attempt Vladimir Sementsov-Ogievskiy
                   ` (7 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block
  Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den,
	Eduardo Habkost, Cleber Rosa

We are going to reuse the script to generate a qcow2_ function in a
further commit. Prepare the script now.
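For reference, the shape of the generated code stays the same; for a
function like nbd_co_do_establish_connection() the script would emit
roughly the following (a sketch based on the current bdrv_ output,
details may differ):

    typedef struct NbdCoDoEstablishConnection {
        BdrvPollCo poll_state;
        BlockDriverState *bs;
        Error **errp;
    } NbdCoDoEstablishConnection;

    static void coroutine_fn nbd_co_do_establish_connection_entry(void *opaque)
    {
        NbdCoDoEstablishConnection *s = opaque;

        s->poll_state.ret = nbd_co_do_establish_connection(s->bs, s->errp);
        s->poll_state.in_progress = false;

        aio_wait_kick();
    }

    int nbd_do_establish_connection(BlockDriverState *bs, Error **errp)
    {
        if (qemu_in_coroutine()) {
            return nbd_co_do_establish_connection(bs, errp);
        } else {
            NbdCoDoEstablishConnection s = {
                .poll_state.bs = bs,
                .poll_state.in_progress = true,
                .bs = bs,
                .errp = errp,
            };

            s.poll_state.co = qemu_coroutine_create(
                    nbd_co_do_establish_connection_entry, &s);

            return bdrv_poll_co(&s.poll_state);
        }
    }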

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 scripts/block-coroutine-wrapper.py | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/scripts/block-coroutine-wrapper.py b/scripts/block-coroutine-wrapper.py
index 0461fd1c45..85dbeb9ecf 100644
--- a/scripts/block-coroutine-wrapper.py
+++ b/scripts/block-coroutine-wrapper.py
@@ -98,12 +98,13 @@ def snake_to_camel(func_name: str) -> str:
 
 
 def gen_wrapper(func: FuncDecl) -> str:
-    assert func.name.startswith('bdrv_')
-    assert not func.name.startswith('bdrv_co_')
+    assert '_co_' not in func.name
     assert func.return_type == 'int'
     assert func.args[0].type in ['BlockDriverState *', 'BdrvChild *']
 
-    name = 'bdrv_co_' + func.name[5:]
+    subsystem, subname = func.name.split('_', 1)
+
+    name = f'{subsystem}_co_{subname}'
     bs = 'bs' if func.args[0].type == 'BlockDriverState *' else 'child->bs'
     struct_name = snake_to_camel(name)
 
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 27/33] block/nbd: split nbd_co_do_establish_connection out of nbd_reconnect_attempt
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (25 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 20:04   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 28/33] nbd/client-connection: do qio_channel_set_delay(false) Vladimir Sementsov-Ogievskiy
                   ` (6 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

Split out the part that we want to reuse for nbd_open().

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 79 +++++++++++++++++++++++++++--------------------------
 1 file changed, 41 insertions(+), 38 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 15b5921725..59971bfba8 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -361,11 +361,49 @@ static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
     return 0;
 }
 
-static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
+static int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
 {
+    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     int ret;
-    AioContext *aio_context = bdrv_get_aio_context(s->bs);
 
+    assert(!s->ioc);
+
+    s->ioc = nbd_co_establish_connection(s->conn, &s->info, errp);
+    if (!s->ioc) {
+        return -ECONNREFUSED;
+    }
+
+    ret = nbd_handle_updated_info(s->bs, NULL);
+    if (ret < 0) {
+        /*
+         * We have connected, but must fail for other reasons.
+         * Send NBD_CMD_DISC as a courtesy to the server.
+         */
+        NBDRequest request = { .type = NBD_CMD_DISC };
+
+        nbd_send_request(s->ioc, &request);
+
+        object_unref(OBJECT(s->ioc));
+        s->ioc = NULL;
+
+        return ret;
+    }
+
+    qio_channel_set_blocking(s->ioc, false, NULL);
+    qio_channel_attach_aio_context(s->ioc, bdrv_get_aio_context(bs));
+
+    yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
+                           bs);
+
+    /* successfully connected */
+    s->state = NBD_CLIENT_CONNECTED;
+    qemu_co_queue_restart_all(&s->free_sema);
+
+    return 0;
+}
+
+static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
+{
     if (!nbd_client_connecting(s)) {
         return;
     }
@@ -403,42 +441,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
         s->ioc = NULL;
     }
 
-    s->ioc = nbd_co_establish_connection(s->conn, &s->info, NULL);
-    if (!s->ioc) {
-        ret = -ECONNREFUSED;
-        goto out;
-    }
-
-    qio_channel_set_blocking(QIO_CHANNEL(s->ioc), false, NULL);
-    qio_channel_attach_aio_context(QIO_CHANNEL(s->ioc), aio_context);
-
-    yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
-                           s->bs);
-
-    ret = nbd_handle_updated_info(s->bs, NULL);
-    if (ret < 0) {
-        /*
-         * We have connected, but must fail for other reasons.
-         * Send NBD_CMD_DISC as a courtesy to the server.
-         */
-        NBDRequest request = { .type = NBD_CMD_DISC };
-
-        nbd_send_request(s->ioc, &request);
-
-        yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
-                                 nbd_yank, s->bs);
-        object_unref(OBJECT(s->ioc));
-        s->ioc = NULL;
-
-        return;
-    }
-
-out:
-    if (ret >= 0) {
-        /* successfully connected */
-        s->state = NBD_CLIENT_CONNECTED;
-        qemu_co_queue_restart_all(&s->free_sema);
-    }
+    nbd_co_do_establish_connection(s->bs, NULL);
 }
 
 static coroutine_fn void nbd_co_reconnect_loop(BDRVNBDState *s)
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 28/33] nbd/client-connection: do qio_channel_set_delay(false)
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (26 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 27/33] block/nbd: split nbd_co_do_establish_connection out of nbd_reconnect_attempt Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 20:48   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 29/33] nbd/client-connection: add option for non-blocking connection attempt Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

nbd_open() does it (through nbd_establish_connection()).
Actually, we lost that call on the reconnect path in commit
1dc4718d849e1a1fe "block/nbd: use non-blocking connect: fix vm hang on
connect()" when we introduced the reconnect thread.

Fixes: 1dc4718d849e1a1fe665ce5241ed79048cfa2cfc
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 nbd/client-connection.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 36d2c7c693..00efff293f 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -126,6 +126,8 @@ static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
         return ret;
     }
 
+    qio_channel_set_delay(QIO_CHANNEL(sioc), false);
+
     if (!info) {
         return 0;
     }
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 29/33] nbd/client-connection: add option for non-blocking connection attempt
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (27 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 28/33] nbd/client-connection: do qio_channel_set_delay(false) Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 20:51   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 30/33] block/nbd: reuse nbd_co_do_establish_connection() in nbd_open() Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

We'll need the possibility of a non-blocking
nbd_co_establish_connection(), so that it returns immediately, and
returns success only if a connection was previously established in
the background.
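A hypothetical non-blocking caller would look roughly like this (a
sketch; the real user arrives in a following patch):

    /* Poll-style use: succeeds only if the background thread has
     * already established (and possibly negotiated) a connection. */
    ioc = nbd_co_establish_connection(conn, &info, false, NULL);
    if (!ioc) {
        /* no connection ready yet; retry later without blocking */
        return -ECONNREFUSED;
    }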

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/block/nbd.h     | 2 +-
 block/nbd.c             | 2 +-
 nbd/client-connection.c | 8 +++++++-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 00bf08bade..6d5a807482 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -421,7 +421,7 @@ void nbd_client_connection_release(NBDClientConnection *conn);
 
 QIOChannel *coroutine_fn
 nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
-                            Error **errp);
+                            bool blocking, Error **errp);
 
 void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
 
diff --git a/block/nbd.c b/block/nbd.c
index 59971bfba8..863d950abd 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -368,7 +368,7 @@ static int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
 
     assert(!s->ioc);
 
-    s->ioc = nbd_co_establish_connection(s->conn, &s->info, errp);
+    s->ioc = nbd_co_establish_connection(s->conn, &s->info, true, errp);
     if (!s->ioc) {
         return -ECONNREFUSED;
     }
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 00efff293f..8914de7b94 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -251,6 +251,8 @@ void nbd_client_connection_release(NBDClientConnection *conn)
  *     result, just return it now
  *   otherwise if thread is not running, start a thread and wait for completion
  *
+ * If @blocking is false, don't wait for the thread, return immediately.
+ *
  * If @info is not NULL, also do nbd-negotiation after successful connection.
  * In this case info is used only as out parameter, and is fully initialized by
  * nbd_co_establish_connection(). "IN" fields of info as well as related only to
@@ -259,7 +261,7 @@ void nbd_client_connection_release(NBDClientConnection *conn)
  */
 QIOChannel *coroutine_fn
 nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
-                            Error **errp)
+                            bool blocking, Error **errp)
 {
     QIOChannel *ioc;
     QemuThread thread;
@@ -299,6 +301,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
                                connect_thread_func, conn, QEMU_THREAD_DETACHED);
         }
 
+        if (!blocking) {
+            return NULL;
+        }
+
         conn->wait_co = qemu_coroutine_self();
     }
 
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 30/33] block/nbd: reuse nbd_co_do_establish_connection() in nbd_open()
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (28 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 29/33] nbd/client-connection: add option for non-blocking connection attempt Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 20:57   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 31/33] block/nbd: add nbd_client_connected() helper Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

The only remaining step needed to reuse the function is a coroutine
wrapper: nbd_open() may be called from non-coroutine context. So,
generate the wrapper and use it.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/coroutines.h |   6 +++
 block/nbd.c        | 101 ++-------------------------------------------
 2 files changed, 10 insertions(+), 97 deletions(-)

diff --git a/block/coroutines.h b/block/coroutines.h
index 4cfb4946e6..514d169d23 100644
--- a/block/coroutines.h
+++ b/block/coroutines.h
@@ -66,4 +66,10 @@ int coroutine_fn bdrv_co_readv_vmstate(BlockDriverState *bs,
 int coroutine_fn bdrv_co_writev_vmstate(BlockDriverState *bs,
                                         QEMUIOVector *qiov, int64_t pos);
 
+int generated_co_wrapper
+nbd_do_establish_connection(BlockDriverState *bs, Error **errp);
+int coroutine_fn
+nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp);
+
+
 #endif /* BLOCK_COROUTINES_INT_H */
diff --git a/block/nbd.c b/block/nbd.c
index 863d950abd..3b31941a83 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -44,6 +44,7 @@
 #include "block/qdict.h"
 #include "block/nbd.h"
 #include "block/block_int.h"
+#include "block/coroutines.h"
 
 #include "monitor/monitor.h"
 
@@ -101,11 +102,6 @@ typedef struct BDRVNBDState {
     NBDClientConnection *conn;
 } BDRVNBDState;
 
-static QIOChannelSocket *nbd_establish_connection(BlockDriverState *bs,
-                                                  SocketAddress *saddr,
-                                                  Error **errp);
-static int nbd_client_handshake(BlockDriverState *bs, QIOChannelSocket *sioc,
-                                Error **errp);
 static void nbd_yank(void *opaque);
 
 static void nbd_clear_bdrvstate(BlockDriverState *bs)
@@ -361,7 +357,7 @@ static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
     return 0;
 }
 
-static int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
+int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     int ret;
@@ -1577,83 +1573,6 @@ static void nbd_client_close(BlockDriverState *bs)
     nbd_teardown_connection(bs);
 }
 
-static QIOChannelSocket *nbd_establish_connection(BlockDriverState *bs,
-                                                  SocketAddress *saddr,
-                                                  Error **errp)
-{
-    ERRP_GUARD();
-    QIOChannelSocket *sioc;
-
-    sioc = qio_channel_socket_new();
-    qio_channel_set_name(QIO_CHANNEL(sioc), "nbd-client");
-
-    qio_channel_socket_connect_sync(sioc, saddr, errp);
-    if (*errp) {
-        object_unref(OBJECT(sioc));
-        return NULL;
-    }
-
-    yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name), nbd_yank, bs);
-    qio_channel_set_delay(QIO_CHANNEL(sioc), false);
-
-    return sioc;
-}
-
-/* nbd_client_handshake takes ownership on sioc. */
-static int nbd_client_handshake(BlockDriverState *bs, QIOChannelSocket *sioc,
-                                Error **errp)
-{
-    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-    AioContext *aio_context = bdrv_get_aio_context(bs);
-    int ret;
-
-    trace_nbd_client_handshake(s->export);
-    qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
-    qio_channel_attach_aio_context(QIO_CHANNEL(sioc), aio_context);
-
-    s->info.request_sizes = true;
-    s->info.structured_reply = true;
-    s->info.base_allocation = true;
-    s->info.x_dirty_bitmap = g_strdup(s->x_dirty_bitmap);
-    s->info.name = g_strdup(s->export ?: "");
-    ret = nbd_receive_negotiate(aio_context, QIO_CHANNEL(sioc), s->tlscreds,
-                                s->hostname, &s->ioc, &s->info, errp);
-    g_free(s->info.x_dirty_bitmap);
-    g_free(s->info.name);
-    if (ret < 0) {
-        yank_unregister_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
-                                 nbd_yank, bs);
-        object_unref(OBJECT(sioc));
-        return ret;
-    }
-
-    if (s->ioc) {
-        /* sioc is referenced by s->ioc */
-        object_unref(OBJECT(sioc));
-    } else {
-        s->ioc = QIO_CHANNEL(sioc);
-    }
-    sioc = NULL;
-
-    ret = nbd_handle_updated_info(bs, errp);
-    if (ret < 0) {
-        /*
-         * We have connected, but must fail for other reasons.
-         * Send NBD_CMD_DISC as a courtesy to the server.
-         */
-        NBDRequest request = { .type = NBD_CMD_DISC };
-
-        nbd_send_request(s->ioc, &request);
-
-        yank_unregister_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
-                                 nbd_yank, bs);
-        object_unref(OBJECT(s->ioc));
-        s->ioc = NULL;
-        return ret;
-    }
-
-    return 0;
-}
 
 /*
  * Parse nbd_open options
@@ -2037,7 +1956,6 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
 {
     int ret;
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-    QIOChannelSocket *sioc;
 
     s->bs = bs;
     qemu_co_mutex_init(&s->send_mutex);
@@ -2056,22 +1974,11 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
                                         s->x_dirty_bitmap, s->tlscreds,
                                         monitor_cur());
 
-    /*
-     * establish TCP connection, return error if it fails
-     * TODO: Configurable retry-until-timeout behaviour.
-     */
-    sioc = nbd_establish_connection(bs, s->saddr, errp);
-    if (!sioc) {
-        ret = -ECONNREFUSED;
-        goto fail;
-    }
-
-    ret = nbd_client_handshake(bs, sioc, errp);
+    /* TODO: Configurable retry-until-timeout behaviour. */
+    ret = nbd_do_establish_connection(bs, errp);
     if (ret < 0) {
         goto fail;
     }
-    /* successfully connected */
-    s->state = NBD_CLIENT_CONNECTED;
 
     s->connection_co = qemu_coroutine_create(nbd_connection_entry, s);
     bdrv_inc_in_flight(bs);
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 31/33] block/nbd: add nbd_client_connected() helper
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (29 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 30/33] block/nbd: reuse nbd_co_do_establish_connection() in nbd_open() Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-05-12  7:06   ` Paolo Bonzini
  2021-06-03 21:08   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 32/33] block/nbd: safer transition to receiving request Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

We already have two similar helpers for other states. Let's add
another one for convenience.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 3b31941a83..6cc563e13d 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -124,15 +124,20 @@ static void nbd_clear_bdrvstate(BlockDriverState *bs)
     s->x_dirty_bitmap = NULL;
 }
 
+static bool nbd_client_connected(BDRVNBDState *s)
+{
+    return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTED;
+}
+
 static void nbd_channel_error(BDRVNBDState *s, int ret)
 {
     if (ret == -EIO) {
-        if (qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTED) {
+        if (nbd_client_connected(s)) {
             s->state = s->reconnect_delay ? NBD_CLIENT_CONNECTING_WAIT :
                                             NBD_CLIENT_CONNECTING_NOWAIT;
         }
     } else {
-        if (qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTED) {
+        if (nbd_client_connected(s)) {
             qio_channel_shutdown(s->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
         }
         s->state = NBD_CLIENT_QUIT;
@@ -230,7 +235,7 @@ static void nbd_client_attach_aio_context(BlockDriverState *bs,
      * s->connection_co is either yielded from nbd_receive_reply or from
      * nbd_co_reconnect_loop()
      */
-    if (qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTED) {
+    if (nbd_client_connected(s)) {
         qio_channel_attach_aio_context(QIO_CHANNEL(s->ioc), new_context);
     }
 
@@ -503,7 +508,7 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
             nbd_co_reconnect_loop(s);
         }
 
-        if (qatomic_load_acquire(&s->state) != NBD_CLIENT_CONNECTED) {
+        if (!nbd_client_connected(s)) {
             continue;
         }
 
@@ -582,7 +587,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
         qemu_co_queue_wait(&s->free_sema, &s->send_mutex);
     }
 
-    if (qatomic_load_acquire(&s->state) != NBD_CLIENT_CONNECTED) {
+    if (!nbd_client_connected(s)) {
         rc = -EIO;
         goto err;
     }
@@ -609,8 +614,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
     if (qiov) {
         qio_channel_set_cork(s->ioc, true);
         rc = nbd_send_request(s->ioc, request);
-        if (qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTED &&
-            rc >= 0) {
+        if (nbd_client_connected(s) && rc >= 0) {
             if (qio_channel_writev_all(s->ioc, qiov->iov, qiov->niov,
                                        NULL) < 0) {
                 rc = -EIO;
@@ -935,7 +939,7 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
     s->requests[i].receiving = false;
-    if (qatomic_load_acquire(&s->state) != NBD_CLIENT_CONNECTED) {
+    if (!nbd_client_connected(s)) {
         error_setg(errp, "Connection closed");
         return -EIO;
     }
@@ -1094,7 +1098,7 @@ static bool nbd_reply_chunk_iter_receive(BDRVNBDState *s,
     NBDReply local_reply;
     NBDStructuredReplyChunk *chunk;
     Error *local_err = NULL;
-    if (qatomic_load_acquire(&s->state) != NBD_CLIENT_CONNECTED) {
+    if (!nbd_client_connected(s)) {
         error_setg(&local_err, "Connection closed");
         nbd_iter_channel_error(iter, -EIO, &local_err);
         goto break_loop;
@@ -1119,8 +1123,7 @@ static bool nbd_reply_chunk_iter_receive(BDRVNBDState *s,
     }
 
     /* Do not execute the body of NBD_FOREACH_REPLY_CHUNK for simple reply. */
-    if (nbd_reply_is_simple(reply) ||
-        qatomic_load_acquire(&s->state) != NBD_CLIENT_CONNECTED) {
+    if (nbd_reply_is_simple(reply) || !nbd_client_connected(s)) {
         goto break_loop;
     }
 
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 32/33] block/nbd: safer transition to receiving request
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (30 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 31/33] block/nbd: add nbd_client_connected() helper Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-06-03 21:11   ` Eric Blake
  2021-04-16  8:09 ` [PATCH v3 33/33] block/nbd: drop connection_co Vladimir Sementsov-Ogievskiy
  2021-05-12  6:54 ` [PATCH v3 00/33] block/nbd: rework client connection Paolo Bonzini
  33 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

req->receiving is a flag marking that the request is at one concrete
yield point in nbd_co_do_receive_one_chunk().

Such a boolean flag is always better unset before scheduling the
coroutine, to avoid double scheduling. So, let's be more careful.
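The resulting pattern, condensed from the hunks below: whoever wakes
the coroutine clears the flag first, so a second waker cannot see the
flag still set and enter the same coroutine twice:

    /* waker side */
    if (req->coroutine && req->receiving) {
        req->receiving = false;        /* clear before waking */
        aio_co_wake(req->coroutine);
    }

    /* sleeper side */
    s->requests[i].receiving = true;
    qemu_coroutine_yield();
    assert(!s->requests[i].receiving); /* cleared by the waker */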

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/nbd.c b/block/nbd.c
index 6cc563e13d..03391bb231 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -152,6 +152,7 @@ static void nbd_recv_coroutines_wake_all(BDRVNBDState *s)
         NBDClientRequest *req = &s->requests[i];
 
         if (req->coroutine && req->receiving) {
+            req->receiving = false;
             aio_co_wake(req->coroutine);
         }
     }
@@ -552,6 +553,7 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
          *   connection_co happens through a bottom half, which can only
          *   run after we yield.
          */
+        s->requests[i].receiving = false;
         aio_co_wake(s->requests[i].coroutine);
         qemu_coroutine_yield();
     }
@@ -938,7 +940,7 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     /* Wait until we're woken up by nbd_connection_entry.  */
     s->requests[i].receiving = true;
     qemu_coroutine_yield();
-    s->requests[i].receiving = false;
+    assert(!s->requests[i].receiving);
     if (!nbd_client_connected(s)) {
         error_setg(errp, "Connection closed");
         return -EIO;
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* [PATCH v3 33/33] block/nbd: drop connection_co
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (31 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 32/33] block/nbd: safer transition to receiving request Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:09 ` Vladimir Sementsov-Ogievskiy
  2021-04-16  8:14   ` Vladimir Sementsov-Ogievskiy
  2021-06-03 21:27   ` Eric Blake
  2021-05-12  6:54 ` [PATCH v3 00/33] block/nbd: rework client connection Paolo Bonzini
  33 siblings, 2 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:09 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, vsementsov, eblake, mreitz, kwolf, rvkagan, den

OK, that's a big rewrite of the logic.

Pre-patch we have an always-running coroutine - connection_co. It does
reply receiving and reconnecting. And it leads to a lot of difficult
and unobvious code around drained sections and context switching. We
also abuse the bs->in_flight counter, which is increased for
connection_co and temporarily decreased at the points where we want to
allow a drained section to begin. One of these places is even in
another file: nbd_read_eof() in nbd/client.c.

We also cancel reconnect and requests waiting for reconnect on drained
section begin, which is not correct.

Let's finally drop this always-running coroutine and go another way:

1. reconnect_attempt() moves into nbd_co_send_request() and is called
   under send_mutex

2. We receive reply headers in the request coroutine. But we should
   also dispatch replies for other pending requests. So,
   nbd_connection_entry() is turned into nbd_receive_replies(), which
   dispatches replies until it receives the header of the requested
   reply, and then returns (sketched below).

3. All the old stuff around drained sections and context switching is
   dropped.
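Point 2 in pseudocode (a rough sketch of the intended dispatch logic;
the real code below also handles errors, connection state and wake
ordering):

    nbd_receive_replies(s, my_handle):
        loop:
            if another coroutine currently owns the reader role:
                mark our request as ->receiving and yield;
                when woken, our reply header has already been read
                for us, so just return
            become the reader and read the next reply header
            if header.handle == my_handle:
                return (the caller parses the body)
            wake the coroutine owning that handle (it will read its
            body and hand the reader role back) and yield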

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/nbd.c  | 376 ++++++++++++++++-----------------------------------
 nbd/client.c |   2 -
 2 files changed, 119 insertions(+), 259 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 03391bb231..3a7b532790 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -59,7 +59,7 @@
 typedef struct {
     Coroutine *coroutine;
     uint64_t offset;        /* original offset of the request */
-    bool receiving;         /* waiting for connection_co? */
+    bool receiving; /* waiting in first yield of nbd_receive_replies() */
 } NBDClientRequest;
 
 typedef enum NBDClientState {
@@ -75,14 +75,10 @@ typedef struct BDRVNBDState {
 
     CoMutex send_mutex;
     CoQueue free_sema;
-    Coroutine *connection_co;
-    Coroutine *teardown_co;
-    QemuCoSleepState *connection_co_sleep_ns_state;
-    bool drained;
-    bool wait_drained_end;
+    Coroutine *receive_co;
+    Coroutine *in_flight_waiter;
     int in_flight;
     NBDClientState state;
-    bool wait_in_flight;
 
     QEMUTimer *reconnect_delay_timer;
 
@@ -131,33 +127,20 @@ static bool nbd_client_connected(BDRVNBDState *s)
 
 static void nbd_channel_error(BDRVNBDState *s, int ret)
 {
+    if (nbd_client_connected(s)) {
+        qio_channel_shutdown(s->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
+    }
+
     if (ret == -EIO) {
         if (nbd_client_connected(s)) {
             s->state = s->reconnect_delay ? NBD_CLIENT_CONNECTING_WAIT :
                                             NBD_CLIENT_CONNECTING_NOWAIT;
         }
     } else {
-        if (nbd_client_connected(s)) {
-            qio_channel_shutdown(s->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
-        }
         s->state = NBD_CLIENT_QUIT;
     }
 }
 
-static void nbd_recv_coroutines_wake_all(BDRVNBDState *s)
-{
-    int i;
-
-    for (i = 0; i < MAX_NBD_REQUESTS; i++) {
-        NBDClientRequest *req = &s->requests[i];
-
-        if (req->coroutine && req->receiving) {
-            req->receiving = false;
-            aio_co_wake(req->coroutine);
-        }
-    }
-}
-
 static void reconnect_delay_timer_del(BDRVNBDState *s)
 {
     if (s->reconnect_delay_timer) {
@@ -194,117 +177,23 @@ static void reconnect_delay_timer_init(BDRVNBDState *s, uint64_t expire_time_ns)
     timer_mod(s->reconnect_delay_timer, expire_time_ns);
 }
 
-static void nbd_client_detach_aio_context(BlockDriverState *bs)
-{
-    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-
-    /* Timer is deleted in nbd_client_co_drain_begin() */
-    assert(!s->reconnect_delay_timer);
-    /*
-     * If reconnect is in progress we may have no ->ioc.  It will be
-     * re-instantiated in the proper aio context once the connection is
-     * reestablished.
-     */
-    if (s->ioc) {
-        qio_channel_detach_aio_context(QIO_CHANNEL(s->ioc));
-    }
-}
-
-static void nbd_client_attach_aio_context_bh(void *opaque)
-{
-    BlockDriverState *bs = opaque;
-    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-
-    if (s->connection_co) {
-        /*
-         * The node is still drained, so we know the coroutine has yielded in
-         * nbd_read_eof(), the only place where bs->in_flight can reach 0, or
-         * it is entered for the first time. Both places are safe for entering
-         * the coroutine.
-         */
-        qemu_aio_coroutine_enter(bs->aio_context, s->connection_co);
-    }
-    bdrv_dec_in_flight(bs);
-}
-
-static void nbd_client_attach_aio_context(BlockDriverState *bs,
-                                          AioContext *new_context)
-{
-    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-
-    /*
-     * s->connection_co is either yielded from nbd_receive_reply or from
-     * nbd_co_reconnect_loop()
-     */
-    if (nbd_client_connected(s)) {
-        qio_channel_attach_aio_context(QIO_CHANNEL(s->ioc), new_context);
-    }
-
-    bdrv_inc_in_flight(bs);
-
-    /*
-     * Need to wait here for the BH to run because the BH must run while the
-     * node is still drained.
-     */
-    aio_wait_bh_oneshot(new_context, nbd_client_attach_aio_context_bh, bs);
-}
-
-static void coroutine_fn nbd_client_co_drain_begin(BlockDriverState *bs)
-{
-    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-
-    s->drained = true;
-    if (s->connection_co_sleep_ns_state) {
-        qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
-    }
-
-    nbd_co_establish_connection_cancel(s->conn);
-
-    reconnect_delay_timer_del(s);
-
-    if (qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT) {
-        s->state = NBD_CLIENT_CONNECTING_NOWAIT;
-        qemu_co_queue_restart_all(&s->free_sema);
-    }
-}
-
-static void coroutine_fn nbd_client_co_drain_end(BlockDriverState *bs)
-{
-    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-
-    s->drained = false;
-    if (s->wait_drained_end) {
-        s->wait_drained_end = false;
-        aio_co_wake(s->connection_co);
-    }
-}
-
-
 static void nbd_teardown_connection(BlockDriverState *bs)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
 
+    assert(!s->in_flight);
+    assert(!s->receive_co);
+
     if (s->ioc) {
         /* finish any pending coroutines */
         qio_channel_shutdown(s->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
+        yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
+                                 nbd_yank, s->bs);
+        object_unref(OBJECT(s->ioc));
+        s->ioc = NULL;
     }
 
     s->state = NBD_CLIENT_QUIT;
-    if (s->connection_co) {
-        if (s->connection_co_sleep_ns_state) {
-            qemu_co_sleep_wake(s->connection_co_sleep_ns_state);
-        }
-        nbd_co_establish_connection_cancel(s->conn);
-    }
-    if (qemu_in_coroutine()) {
-        s->teardown_co = qemu_coroutine_self();
-        /* connection_co resumes us when it terminates */
-        qemu_coroutine_yield();
-        s->teardown_co = NULL;
-    } else {
-        BDRV_POLL_WHILE(bs, s->connection_co);
-    }
-    assert(!s->connection_co);
 }
 
 static bool nbd_client_connecting(BDRVNBDState *s)
@@ -367,10 +256,11 @@ int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     int ret;
+    bool blocking = nbd_client_connecting_wait(s);
 
     assert(!s->ioc);
 
-    s->ioc = nbd_co_establish_connection(s->conn, &s->info, true, errp);
+    s->ioc = nbd_co_establish_connection(s->conn, &s->info, blocking, errp);
     if (!s->ioc) {
         return -ECONNREFUSED;
     }
@@ -404,6 +294,7 @@ int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
     return 0;
 }
 
+/* called under s->send_mutex */
 static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 {
     if (!nbd_client_connecting(s)) {
@@ -412,23 +303,29 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
 
     /* Wait for completion of all in-flight requests */
 
-    qemu_co_mutex_lock(&s->send_mutex);
-
-    while (s->in_flight > 0) {
-        qemu_co_mutex_unlock(&s->send_mutex);
-        nbd_recv_coroutines_wake_all(s);
-        s->wait_in_flight = true;
+    if (s->in_flight) {
+        s->in_flight_waiter = qemu_coroutine_self();
         qemu_coroutine_yield();
-        s->wait_in_flight = false;
-        qemu_co_mutex_lock(&s->send_mutex);
+        assert(!s->in_flight_waiter);
+        assert(!s->in_flight);
     }
 
-    qemu_co_mutex_unlock(&s->send_mutex);
-
     if (!nbd_client_connecting(s)) {
         return;
     }
 
+    if (nbd_client_connecting_wait(s) && s->reconnect_delay &&
+        !s->reconnect_delay_timer)
+    {
+        /*
+         * It's the first reconnect attempt after switching to
+         * NBD_CLIENT_CONNECTING_WAIT
+         */
+        reconnect_delay_timer_init(s,
+            qemu_clock_get_ns(QEMU_CLOCK_REALTIME) +
+            s->reconnect_delay * NANOSECONDS_PER_SECOND);
+    }
+
     /*
      * Now we are sure that nobody is accessing the channel, and no one will
      * try until we set the state to CONNECTED.
@@ -446,73 +343,34 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
     nbd_co_do_establish_connection(s->bs, NULL);
 }
 
-static coroutine_fn void nbd_co_reconnect_loop(BDRVNBDState *s)
-{
-    uint64_t timeout = 1 * NANOSECONDS_PER_SECOND;
-    uint64_t max_timeout = 16 * NANOSECONDS_PER_SECOND;
-
-    if (qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT) {
-        reconnect_delay_timer_init(s, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) +
-                                   s->reconnect_delay * NANOSECONDS_PER_SECOND);
-    }
-
-    nbd_reconnect_attempt(s);
-
-    while (nbd_client_connecting(s)) {
-        if (s->drained) {
-            bdrv_dec_in_flight(s->bs);
-            s->wait_drained_end = true;
-            while (s->drained) {
-                /*
-                 * We may be entered once from nbd_client_attach_aio_context_bh
-                 * and then from nbd_client_co_drain_end. So here is a loop.
-                 */
-                qemu_coroutine_yield();
-            }
-            bdrv_inc_in_flight(s->bs);
-        } else {
-            qemu_co_sleep_ns_wakeable(QEMU_CLOCK_REALTIME, timeout,
-                                      &s->connection_co_sleep_ns_state);
-            if (s->drained) {
-                continue;
-            }
-            if (timeout < max_timeout) {
-                timeout *= 2;
-            }
-        }
-
-        nbd_reconnect_attempt(s);
-    }
-
-    reconnect_delay_timer_del(s);
-}
-
-static coroutine_fn void nbd_connection_entry(void *opaque)
+static coroutine_fn void nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
 {
-    BDRVNBDState *s = opaque;
     uint64_t i;
     int ret = 0;
     Error *local_err = NULL;
 
-    while (qatomic_load_acquire(&s->state) != NBD_CLIENT_QUIT) {
-        /*
-         * The NBD client can only really be considered idle when it has
-         * yielded from qio_channel_readv_all_eof(), waiting for data. This is
-         * the point where the additional scheduled coroutine entry happens
-         * after nbd_client_attach_aio_context().
-         *
-         * Therefore we keep an additional in_flight reference all the time and
-         * only drop it temporarily here.
-         */
+    i = HANDLE_TO_INDEX(s, handle);
+    if (s->receive_co) {
+        assert(s->receive_co != qemu_coroutine_self());
 
-        if (nbd_client_connecting(s)) {
-            nbd_co_reconnect_loop(s);
-        }
+        /* Another request coroutine is receiving now */
+        s->requests[i].receiving = true;
+        qemu_coroutine_yield();
+        assert(!s->requests[i].receiving);
 
-        if (!nbd_client_connected(s)) {
-            continue;
+        if (s->receive_co != qemu_coroutine_self()) {
+            /*
+             * We have either failed or finished; the caller uses
+             * nbd_client_connected() to distinguish.
+             */
+            return;
         }
+    }
+
+    assert(s->receive_co == 0 || s->receive_co == qemu_coroutine_self());
+    s->receive_co = qemu_coroutine_self();
 
+    while (nbd_client_connected(s)) {
         assert(s->reply.handle == 0);
         ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, &local_err);
 
@@ -522,8 +380,21 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
             local_err = NULL;
         }
         if (ret <= 0) {
-            nbd_channel_error(s, ret ? ret : -EIO);
-            continue;
+            ret = ret ? ret : -EIO;
+            nbd_channel_error(s, ret);
+            goto out;
+        }
+
+        if (!nbd_client_connected(s)) {
+            ret = -EIO;
+            goto out;
+        }
+
+        i = HANDLE_TO_INDEX(s, s->reply.handle);
+
+        if (s->reply.handle == handle) {
+            ret = 0;
+            goto out;
         }
 
         /*
@@ -531,50 +402,49 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
          * handler acts as a synchronization point and ensures that only
          * one coroutine is called until the reply finishes.
          */
-        i = HANDLE_TO_INDEX(s, s->reply.handle);
         if (i >= MAX_NBD_REQUESTS ||
             !s->requests[i].coroutine ||
             !s->requests[i].receiving ||
             (nbd_reply_is_structured(&s->reply) && !s->info.structured_reply))
         {
             nbd_channel_error(s, -EINVAL);
-            continue;
+            ret = -EINVAL;
+            goto out;
         }
 
-        /*
-         * We're woken up again by the request itself.  Note that there
-         * is no race between yielding and reentering connection_co.  This
-         * is because:
-         *
-         * - if the request runs on the same AioContext, it is only
-         *   entered after we yield
-         *
-         * - if the request runs on a different AioContext, reentering
-         *   connection_co happens through a bottom half, which can only
-         *   run after we yield.
-         */
         s->requests[i].receiving = false;
         aio_co_wake(s->requests[i].coroutine);
         qemu_coroutine_yield();
     }
 
-    qemu_co_queue_restart_all(&s->free_sema);
-    nbd_recv_coroutines_wake_all(s);
-    bdrv_dec_in_flight(s->bs);
-
-    s->connection_co = NULL;
-    if (s->ioc) {
-        qio_channel_detach_aio_context(QIO_CHANNEL(s->ioc));
-        yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
-                                 nbd_yank, s->bs);
-        object_unref(OBJECT(s->ioc));
-        s->ioc = NULL;
-    }
+out:
+    if (ret < 0) {
+        s->receive_co = NULL;
+        for (i = 0; i < MAX_NBD_REQUESTS; i++) {
+            NBDClientRequest *req = &s->requests[i];
 
-    if (s->teardown_co) {
-        aio_co_wake(s->teardown_co);
+            if (req->coroutine && req->receiving) {
+                req->receiving = false;
+                aio_co_wake(req->coroutine);
+            }
+        }
+    } else {
+        /*
+         * If some request is still waiting to receive, it should become
+         * the next "receive_co".
+         */
+        for (i = 0; i < MAX_NBD_REQUESTS; i++) {
+            NBDClientRequest *req = &s->requests[i];
+
+            if (req->coroutine && req->receiving) {
+                req->receiving = false;
+                s->receive_co = req->coroutine;
+                aio_co_wake(req->coroutine);
+                return;
+            }
+        }
+        s->receive_co = NULL;
     }
-    aio_wait_kick();
 }
 
 static int nbd_co_send_request(BlockDriverState *bs,
@@ -585,7 +455,15 @@ static int nbd_co_send_request(BlockDriverState *bs,
     int rc, i = -1;
 
     qemu_co_mutex_lock(&s->send_mutex);
-    while (s->in_flight == MAX_NBD_REQUESTS || nbd_client_connecting_wait(s)) {
+
+    nbd_reconnect_attempt(s);
+
+    if (!nbd_client_connected(s)) {
+        qemu_co_mutex_unlock(&s->send_mutex);
+        return -EIO;
+    }
+
+    while (s->in_flight == MAX_NBD_REQUESTS) {
         qemu_co_queue_wait(&s->free_sema, &s->send_mutex);
     }
 
@@ -636,10 +514,10 @@ err:
             s->requests[i].coroutine = NULL;
             s->in_flight--;
         }
-        if (s->in_flight == 0 && s->wait_in_flight) {
-            aio_co_wake(s->connection_co);
-        } else {
-            qemu_co_queue_next(&s->free_sema);
+        if (s->in_flight == 0 && s->in_flight_waiter) {
+            Coroutine *co = s->in_flight_waiter;
+            s->in_flight_waiter = NULL;
+            aio_co_wake(co);
         }
     }
     qemu_co_mutex_unlock(&s->send_mutex);
@@ -938,9 +816,7 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     *request_ret = 0;
 
     /* Wait until we're woken up by nbd_connection_entry.  */
-    s->requests[i].receiving = true;
-    qemu_coroutine_yield();
-    assert(!s->requests[i].receiving);
+    nbd_receive_replies(s, handle);
     if (!nbd_client_connected(s)) {
         error_setg(errp, "Connection closed");
         return -EIO;
@@ -1033,13 +909,8 @@ static coroutine_fn int nbd_co_receive_one_chunk(
     }
     s->reply.handle = 0;
 
-    if (s->connection_co && !s->wait_in_flight) {
-        /*
-         * We must check s->wait_in_flight, because we may entered by
-         * nbd_recv_coroutines_wake_all(), in this case we should not
-         * wake connection_co here, it will woken by last request.
-         */
-        aio_co_wake(s->connection_co);
+    if (s->receive_co) {
+        aio_co_wake(s->receive_co);
     }
 
     return ret;
@@ -1151,8 +1022,10 @@ break_loop:
 
     qemu_co_mutex_lock(&s->send_mutex);
     s->in_flight--;
-    if (s->in_flight == 0 && s->wait_in_flight) {
-        aio_co_wake(s->connection_co);
+    if (s->in_flight == 0 && s->in_flight_waiter) {
+        Coroutine *co = s->in_flight_waiter;
+        s->in_flight_waiter = NULL;
+        aio_co_wake(co);
     } else {
         qemu_co_queue_next(&s->free_sema);
     }
@@ -1980,14 +1853,13 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
                                         monitor_cur());
 
     /* TODO: Configurable retry-until-timeout behaviour.*/
+    s->state = NBD_CLIENT_CONNECTING_WAIT;
     ret = nbd_do_establish_connection(bs, errp);
     if (ret < 0) {
         goto fail;
     }
 
-    s->connection_co = qemu_coroutine_create(nbd_connection_entry, s);
-    bdrv_inc_in_flight(bs);
-    aio_co_schedule(bdrv_get_aio_context(bs), s->connection_co);
+    nbd_client_connection_enable_retry(s->conn);
 
     return 0;
 
@@ -2141,6 +2013,8 @@ static void nbd_cancel_in_flight(BlockDriverState *bs)
         s->state = NBD_CLIENT_CONNECTING_NOWAIT;
         qemu_co_queue_restart_all(&s->free_sema);
     }
+
+    nbd_co_establish_connection_cancel(s->conn);
 }
 
 static BlockDriver bdrv_nbd = {
@@ -2161,10 +2035,6 @@ static BlockDriver bdrv_nbd = {
     .bdrv_refresh_limits        = nbd_refresh_limits,
     .bdrv_co_truncate           = nbd_co_truncate,
     .bdrv_getlength             = nbd_getlength,
-    .bdrv_detach_aio_context    = nbd_client_detach_aio_context,
-    .bdrv_attach_aio_context    = nbd_client_attach_aio_context,
-    .bdrv_co_drain_begin        = nbd_client_co_drain_begin,
-    .bdrv_co_drain_end          = nbd_client_co_drain_end,
     .bdrv_refresh_filename      = nbd_refresh_filename,
     .bdrv_co_block_status       = nbd_client_co_block_status,
     .bdrv_dirname               = nbd_dirname,
@@ -2190,10 +2060,6 @@ static BlockDriver bdrv_nbd_tcp = {
     .bdrv_refresh_limits        = nbd_refresh_limits,
     .bdrv_co_truncate           = nbd_co_truncate,
     .bdrv_getlength             = nbd_getlength,
-    .bdrv_detach_aio_context    = nbd_client_detach_aio_context,
-    .bdrv_attach_aio_context    = nbd_client_attach_aio_context,
-    .bdrv_co_drain_begin        = nbd_client_co_drain_begin,
-    .bdrv_co_drain_end          = nbd_client_co_drain_end,
     .bdrv_refresh_filename      = nbd_refresh_filename,
     .bdrv_co_block_status       = nbd_client_co_block_status,
     .bdrv_dirname               = nbd_dirname,
@@ -2219,10 +2085,6 @@ static BlockDriver bdrv_nbd_unix = {
     .bdrv_refresh_limits        = nbd_refresh_limits,
     .bdrv_co_truncate           = nbd_co_truncate,
     .bdrv_getlength             = nbd_getlength,
-    .bdrv_detach_aio_context    = nbd_client_detach_aio_context,
-    .bdrv_attach_aio_context    = nbd_client_attach_aio_context,
-    .bdrv_co_drain_begin        = nbd_client_co_drain_begin,
-    .bdrv_co_drain_end          = nbd_client_co_drain_end,
     .bdrv_refresh_filename      = nbd_refresh_filename,
     .bdrv_co_block_status       = nbd_client_co_block_status,
     .bdrv_dirname               = nbd_dirname,
diff --git a/nbd/client.c b/nbd/client.c
index 0c2db4bcba..30d5383cb1 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -1434,9 +1434,7 @@ nbd_read_eof(BlockDriverState *bs, QIOChannel *ioc, void *buffer, size_t size,
 
         len = qio_channel_readv(ioc, &iov, 1, errp);
         if (len == QIO_CHANNEL_ERR_BLOCK) {
-            bdrv_dec_in_flight(bs);
             qio_channel_yield(ioc, G_IO_IN);
-            bdrv_inc_in_flight(bs);
             continue;
         } else if (len < 0) {
             return -EIO;
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 33/33] block/nbd: drop connection_co
  2021-04-16  8:09 ` [PATCH v3 33/33] block/nbd: drop connection_co Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:14   ` Vladimir Sementsov-Ogievskiy
  2021-04-16  8:21     ` Vladimir Sementsov-Ogievskiy
  2021-06-03 21:27   ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:14 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, eblake, mreitz, kwolf, rvkagan, den

16.04.2021 11:09, Vladimir Sementsov-Ogievskiy wrote:
> OK, that's a big rewrite of the logic.
> 
> Pre-patch we have an always-running coroutine - connection_co. It does
> reply receiving and reconnecting, and it leads to a lot of difficult
> and unobvious code around drained sections and context switching. We
> also abuse the bs->in_flight counter: it is increased for connection_co
> and temporarily decreased at the points where we want to allow a
> drained section to begin. One of these places is even in another file:
> nbd_read_eof() in nbd/client.c.
> 
> We also cancel reconnect and requests waiting for reconnect on
> drained-section begin, which is not correct.
> 
> Let's finally drop this always-running coroutine and go another way:
> 
> 1. nbd_reconnect_attempt() moves into nbd_co_send_request() and is
>     called under send_mutex.
> 
> 2. We now receive reply headers in the request coroutine. But we also
>     should dispatch replies for other pending requests. So,
>     nbd_connection_entry() is turned into nbd_receive_replies(), which
>     dispatches replies for other requests until it receives the header
>     of the requested one, and then returns.
> 
> 3. All the old stuff around drained sections and context switching is
>     dropped.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Please consider this last patch as an RFC for now:

1. It is complicated and doesn't have good documentation. Please look through it and ask about everything that is not obvious; I'll explain. Don't waste your time trying to understand what is not clear.

2. I also failed to imagine how to split the patch into smaller, simpler patches. Ideas are welcome.

3. It actually reverts what was done in

commit 8c517de24a8a1dcbeb54e7e12b5b0fda42a90ace
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date:   Thu Sep 3 22:02:58 2020 +0300

     block/nbd: fix drain dead-lock because of nbd reconnect-delay

and I haven't checked yet whether this deadlock is still here or not. Even if it is, I believe that the nbd driver is the wrong place to work around this bug, but I should at least check that first.

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 33/33] block/nbd: drop connection_co
  2021-04-16  8:14   ` Vladimir Sementsov-Ogievskiy
@ 2021-04-16  8:21     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-16  8:21 UTC (permalink / raw)
  To: qemu-block; +Cc: qemu-devel, eblake, mreitz, kwolf, rvkagan, den

16.04.2021 11:14, Vladimir Sementsov-Ogievskiy wrote:
> 16.04.2021 11:09, Vladimir Sementsov-Ogievskiy wrote:
>> OK, that's a big rewrite of the logic.
>>
>> Pre-patch we have an always-running coroutine - connection_co. It does
>> reply receiving and reconnecting, and it leads to a lot of difficult
>> and unobvious code around drained sections and context switching. We
>> also abuse the bs->in_flight counter: it is increased for connection_co
>> and temporarily decreased at the points where we want to allow a
>> drained section to begin. One of these places is even in another file:
>> nbd_read_eof() in nbd/client.c.
>>
>> We also cancel reconnect and requests waiting for reconnect on
>> drained-section begin, which is not correct.
>>
>> Let's finally drop this always-running coroutine and go another way:
>>
>> 1. nbd_reconnect_attempt() moves into nbd_co_send_request() and is
>>     called under send_mutex.
>>
>> 2. We now receive reply headers in the request coroutine. But we also
>>     should dispatch replies for other pending requests. So,
>>     nbd_connection_entry() is turned into nbd_receive_replies(), which
>>     dispatches replies for other requests until it receives the header
>>     of the requested one, and then returns.
>>
>> 3. All the old stuff around drained sections and context switching is
>>     dropped.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 
> Please consider this last patch as RFC for now:
> 
> 1. It is complicated, and doesn't have good documentation. Please look through and ask everything that is not obvious, I'll explain. Don't waste your time trying to understand what is not clean.
> 
> 2. I also failed to image, how to split the patch into smaller simple patches.. Ideas are welcome.
> 
> 3. It actually reverts what was done in
> 
> commit 8c517de24a8a1dcbeb54e7e12b5b0fda42a90ace
> Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Date:   Thu Sep 3 22:02:58 2020 +0300
> 
>     block/nbd: fix drain dead-lock because of nbd reconnect-delay
> 
> and I didn't check yet, does this dead-lock still here or not. Even if it still here I believe that nbd driver is a wrong place to workaround this bug, but I should check it first at least.
> 

4. As Roman said, there is a problem in the new architecture: when the guest is idle, we will not detect a disconnect immediately but only on the next request from the guest. That may be considered a degradation.
Still, let's implement a kind of keep-alive on top of this series; some ideas (a rough sketch of the first one follows the list):

   - add an idle timeout, and do a simple NBD request when it expires, which will result in some expected error reply from the server.
   - or add an idle coroutine, which will do an endless "read" when there are no requests. It will be a kind of old connection_co, but it will have only one function and will be extremely simple. And we may just cancel it on drained-begin and restart it on drained-end.
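
For the idle-timeout variant I imagine something like the following
(completely untested sketch; idle_timer, idle_timeout and
nbd_ping_co_entry() are invented names, nothing of this exists yet):

    /* Hypothetical: runs every idle_timeout seconds while the device is idle */
    static void nbd_idle_timer_cb(void *opaque)
    {
        BDRVNBDState *s = opaque;

        if (s->in_flight == 0) {
            /*
             * Spawn a coroutine doing some cheap NBD request; even an
             * expected error reply from the server proves that the
             * connection is still alive.
             */
            Coroutine *co = qemu_coroutine_create(nbd_ping_co_entry, s);
            aio_co_schedule(bdrv_get_aio_context(s->bs), co);
        }

        timer_mod(s->idle_timer,
                  qemu_clock_get_ns(QEMU_CLOCK_REALTIME) +
                  s->idle_timeout * NANOSECONDS_PER_SECOND);
    }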

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly
  2021-04-16  8:08 ` [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly Vladimir Sementsov-Ogievskiy
@ 2021-04-19  9:34   ` Daniel P. Berrangé
  2021-04-19 10:09     ` Vladimir Sementsov-Ogievskiy
  2021-05-12  9:40     ` Roman Kagan
  0 siblings, 2 replies; 121+ messages in thread
From: Daniel P. Berrangé @ 2021-04-19  9:34 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, Gerd Hoffmann, den

On Fri, Apr 16, 2021 at 11:08:59AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Detecting the monitor by the current coroutine works badly when we are
> not in coroutine context. And that's exactly the case in the nbd
> reconnect code, where qio_channel_socket_connect_sync() is called from
> a thread.
> 
> Add a possibility to pass the monitor by hand, to be used in the
> following commit.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/io/channel-socket.h    | 20 ++++++++++++++++++++
>  include/qemu/sockets.h         |  2 +-
>  io/channel-socket.c            | 17 +++++++++++++----
>  tests/unit/test-util-sockets.c | 16 ++++++++--------
>  util/qemu-sockets.c            | 10 +++++-----
>  5 files changed, 47 insertions(+), 18 deletions(-)
> 
> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> index e747e63514..6d0915420d 100644
> --- a/include/io/channel-socket.h
> +++ b/include/io/channel-socket.h
> @@ -78,6 +78,23 @@ qio_channel_socket_new_fd(int fd,
>                            Error **errp);
>  
>  
> +/**
> + * qio_channel_socket_connect_sync_mon:
> + * @ioc: the socket channel object
> + * @addr: the address to connect to
> + * @mon: current monitor. If NULL, it will be detected by
> + *       current coroutine.
> + * @errp: pointer to a NULL-initialized error object
> + *
> + * Attempt to connect to the address @addr. This method
> + * will run in the foreground so the caller will not regain
> + * execution control until the connection is established or
> + * an error occurs.
> + */
> +int qio_channel_socket_connect_sync_mon(QIOChannelSocket *ioc,
> +                                        SocketAddress *addr,
> +                                        Monitor *mon,
> +                                        Error **errp);

I don't really like exposing the concept of the QEMU monitor in
the IO layer APIs. IMHO these ought to remain completely separate
subsystems from the API pov, and we ought to fix this problem by
making monitor_cur() work better in the scenario required.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly
  2021-04-19  9:34   ` Daniel P. Berrangé
@ 2021-04-19 10:09     ` Vladimir Sementsov-Ogievskiy
  2021-05-12  9:40     ` Roman Kagan
  1 sibling, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-19 10:09 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: qemu-block, qemu-devel, eblake, mreitz, kwolf, rvkagan, den,
	Gerd Hoffmann, Dr. David Alan Gilbert,
	Markus Armbruster

19.04.2021 12:34, Daniel P. Berrangé wrote:
> On Fri, Apr 16, 2021 at 11:08:59AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Detecting the monitor by the current coroutine works badly when we are
>> not in coroutine context. And that's exactly the case in the nbd
>> reconnect code, where qio_channel_socket_connect_sync() is called from
>> a thread.
>>
>> Add a possibility to pass the monitor by hand, to be used in the
>> following commit.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/io/channel-socket.h    | 20 ++++++++++++++++++++
>>   include/qemu/sockets.h         |  2 +-
>>   io/channel-socket.c            | 17 +++++++++++++----
>>   tests/unit/test-util-sockets.c | 16 ++++++++--------
>>   util/qemu-sockets.c            | 10 +++++-----
>>   5 files changed, 47 insertions(+), 18 deletions(-)
>>
>> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
>> index e747e63514..6d0915420d 100644
>> --- a/include/io/channel-socket.h
>> +++ b/include/io/channel-socket.h
>> @@ -78,6 +78,23 @@ qio_channel_socket_new_fd(int fd,
>>                             Error **errp);
>>   
>>   
>> +/**
>> + * qio_channel_socket_connect_sync_mon:
>> + * @ioc: the socket channel object
>> + * @addr: the address to connect to
>> + * @mon: current monitor. If NULL, it will be detected by
>> + *       current coroutine.
>> + * @errp: pointer to a NULL-initialized error object
>> + *
>> + * Attempt to connect to the address @addr. This method
>> + * will run in the foreground so the caller will not regain
>> + * execution control until the connection is established or
>> + * an error occurs.
>> + */
>> +int qio_channel_socket_connect_sync_mon(QIOChannelSocket *ioc,
>> +                                        SocketAddress *addr,
>> +                                        Monitor *mon,
>> +                                        Error **errp);
> 
> I don't really like exposing the concept of the QEMU monitor in
> the IO layer APIs. IMHO these ought to remain completely separate
> subsystems from the API pov, and we ought to fix this problem by
> making monitor_cur() work better in the scenario required.
> 

Hmm..

I can add a thread_mon hash table to monitor/monitor.c, so we can set the monitor for the current thread in the same way as for a coroutine. monitor_cur() will then look first at the coroutine->monitor hash map and then at the thread->monitor one. Then I'll pass the needed monitor link to my specific thread, the thread will call monitor_set_cur_for_thread(), and qio_channel_socket_connect_sync() will work correctly.
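
A rough sketch of what I mean (locking and initialization omitted,
existing names from memory, so to be checked against the actual code):

    /* in monitor/monitor.c */
    static GHashTable *thread_mon; /* GThread* -> Monitor* */

    void monitor_set_cur_for_thread(Monitor *mon)
    {
        g_hash_table_insert(thread_mon, g_thread_self(), mon);
    }

    Monitor *monitor_cur(void)
    {
        Monitor *mon =
            g_hash_table_lookup(coroutine_mon, qemu_coroutine_self());

        if (!mon) {
            mon = g_hash_table_lookup(thread_mon, g_thread_self());
        }

        return mon;
    }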

David, Markus, is it OK?

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths
  2021-04-16  8:08 ` [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths Vladimir Sementsov-Ogievskiy
@ 2021-04-21 14:00   ` Roman Kagan
  2021-04-21 22:27     ` Vladimir Sementsov-Ogievskiy
  2021-06-01 21:39   ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-04-21 14:00 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:40AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We have two "return error" paths in nbd_open() after
> nbd_process_options(). Actually we should call nbd_clear_bdrvstate()
> on these paths. Interestingly, nbd_process_options() calls
> nbd_clear_bdrvstate() by itself.
> 
> Let's fix leaks and refactor things to be more obvious:
> 
> - initialize yank at the top of nbd_open()
> - move yank cleanup to nbd_clear_bdrvstate()
> - refactor nbd_open() so that all failure paths except for
>   yank-register go through nbd_clear_bdrvstate()
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 36 ++++++++++++++++++------------------
>  1 file changed, 18 insertions(+), 18 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index 739ae2941f..a407a3814b 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -152,8 +152,12 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
>  static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
>  static void nbd_yank(void *opaque);
>  
> -static void nbd_clear_bdrvstate(BDRVNBDState *s)
> +static void nbd_clear_bdrvstate(BlockDriverState *bs)
>  {
> +    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
> +
> +    yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
> +
>      object_unref(OBJECT(s->tlscreds));
>      qapi_free_SocketAddress(s->saddr);
>      s->saddr = NULL;
> @@ -2279,9 +2283,6 @@ static int nbd_process_options(BlockDriverState *bs, QDict *options,
>      ret = 0;
>  
>   error:
> -    if (ret < 0) {
> -        nbd_clear_bdrvstate(s);
> -    }
>      qemu_opts_del(opts);
>      return ret;
>  }
> @@ -2292,11 +2293,6 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>      int ret;
>      BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
>  
> -    ret = nbd_process_options(bs, options, errp);
> -    if (ret < 0) {
> -        return ret;
> -    }
> -
>      s->bs = bs;
>      qemu_co_mutex_init(&s->send_mutex);
>      qemu_co_queue_init(&s->free_sema);
> @@ -2305,20 +2301,23 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>          return -EEXIST;
>      }
>  
> +    ret = nbd_process_options(bs, options, errp);
> +    if (ret < 0) {
> +        goto fail;
> +    }
> +
>      /*
>       * establish TCP connection, return error if it fails
>       * TODO: Configurable retry-until-timeout behaviour.
>       */
>      if (nbd_establish_connection(bs, s->saddr, errp) < 0) {
> -        yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
> -        return -ECONNREFUSED;
> +        ret = -ECONNREFUSED;
> +        goto fail;
>      }
>  
>      ret = nbd_client_handshake(bs, errp);

Not that this was introduced by this patch, but once you're at it:
AFAICT nbd_client_handshake() calls yank_unregister_instance() on some
error path(s); I assume this needs to go too, otherwise it's called
twice (and asserts).

Roman.

>      if (ret < 0) {
> -        yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
> -        nbd_clear_bdrvstate(s);
> -        return ret;
> +        goto fail;
>      }
>      /* successfully connected */
>      s->state = NBD_CLIENT_CONNECTED;
> @@ -2330,6 +2329,10 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>      aio_co_schedule(bdrv_get_aio_context(bs), s->connection_co);
>  
>      return 0;
> +
> +fail:
> +    nbd_clear_bdrvstate(bs);
> +    return ret;
>  }
>  
>  static int nbd_co_flush(BlockDriverState *bs)
> @@ -2373,11 +2376,8 @@ static void nbd_refresh_limits(BlockDriverState *bs, Error **errp)
>  
>  static void nbd_close(BlockDriverState *bs)
>  {
> -    BDRVNBDState *s = bs->opaque;
> -
>      nbd_client_close(bs);
> -    yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
> -    nbd_clear_bdrvstate(s);
> +    nbd_clear_bdrvstate(bs);
>  }
>  
>  /*


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths
  2021-04-21 14:00   ` Roman Kagan
@ 2021-04-21 22:27     ` Vladimir Sementsov-Ogievskiy
  2021-04-22  8:49       ` Roman Kagan
  0 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-21 22:27 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

21.04.2021 17:00, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:40AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> We have two "return error" paths in nbd_open() after
>> nbd_process_options(). Actually we should call nbd_clear_bdrvstate()
>> on these paths. Interestingly, nbd_process_options() calls
>> nbd_clear_bdrvstate() by itself.
>>
>> Let's fix leaks and refactor things to be more obvious:
>>
>> - initialize yank at the top of nbd_open()
>> - move yank cleanup to nbd_clear_bdrvstate()
>> - refactor nbd_open() so that all failure paths except for
>>    yank-register go through nbd_clear_bdrvstate()
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy<vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 36 ++++++++++++++++++------------------
>>   1 file changed, 18 insertions(+), 18 deletions(-)
>>
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 739ae2941f..a407a3814b 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -152,8 +152,12 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs,
>>   static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
>>   static void nbd_yank(void *opaque);
>>   
>> -static void nbd_clear_bdrvstate(BDRVNBDState *s)
>> +static void nbd_clear_bdrvstate(BlockDriverState *bs)
>>   {
>> +    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
>> +
>> +    yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
>> +
>>       object_unref(OBJECT(s->tlscreds));
>>       qapi_free_SocketAddress(s->saddr);
>>       s->saddr = NULL;
>> @@ -2279,9 +2283,6 @@ static int nbd_process_options(BlockDriverState *bs, QDict *options,
>>       ret = 0;
>>   
>>    error:
>> -    if (ret < 0) {
>> -        nbd_clear_bdrvstate(s);
>> -    }
>>       qemu_opts_del(opts);
>>       return ret;
>>   }
>> @@ -2292,11 +2293,6 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>>       int ret;
>>       BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
>>   
>> -    ret = nbd_process_options(bs, options, errp);
>> -    if (ret < 0) {
>> -        return ret;
>> -    }
>> -
>>       s->bs = bs;
>>       qemu_co_mutex_init(&s->send_mutex);
>>       qemu_co_queue_init(&s->free_sema);
>> @@ -2305,20 +2301,23 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>>           return -EEXIST;
>>       }
>>   
>> +    ret = nbd_process_options(bs, options, errp);
>> +    if (ret < 0) {
>> +        goto fail;
>> +    }
>> +
>>       /*
>>        * establish TCP connection, return error if it fails
>>        * TODO: Configurable retry-until-timeout behaviour.
>>        */
>>       if (nbd_establish_connection(bs, s->saddr, errp) < 0) {
>> -        yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
>> -        return -ECONNREFUSED;
>> +        ret = -ECONNREFUSED;
>> +        goto fail;
>>       }
>>   
>>       ret = nbd_client_handshake(bs, errp);
> Not that this was introduced by this patch, but once you're at it:
> AFAICT nbd_client_handshake() calls yank_unregister_instance() on some
> error path(s); I assume this needs to go too, otherwise it's called
> twice (and asserts).
> 
> Roman.
> 

No, nbd_client_handshake() only calls yank_unregister_function(), not instance.

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths
  2021-04-21 22:27     ` Vladimir Sementsov-Ogievskiy
@ 2021-04-22  8:49       ` Roman Kagan
  0 siblings, 0 replies; 121+ messages in thread
From: Roman Kagan @ 2021-04-22  8:49 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Thu, Apr 22, 2021 at 01:27:22AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 21.04.2021 17:00, Roman Kagan wrote:
> > On Fri, Apr 16, 2021 at 11:08:40AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > @@ -2305,20 +2301,23 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
> > >           return -EEXIST;
> > >       }
> > > +    ret = nbd_process_options(bs, options, errp);
> > > +    if (ret < 0) {
> > > +        goto fail;
> > > +    }
> > > +
> > >       /*
> > >        * establish TCP connection, return error if it fails
> > >        * TODO: Configurable retry-until-timeout behaviour.
> > >        */
> > >       if (nbd_establish_connection(bs, s->saddr, errp) < 0) {
> > > -        yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
> > > -        return -ECONNREFUSED;
> > > +        ret = -ECONNREFUSED;
> > > +        goto fail;
> > >       }
> > >       ret = nbd_client_handshake(bs, errp);
> > Not that this was introduced by this patch, but once you're at it:
> > AFAICT nbd_client_handshake() calls yank_unregister_instance() on some
> > error path(s); I assume this needs to go too, otherwise it's called
> > twice (and asserts).
> > 
> > Roman.
> > 
> 
> No, nbd_client_handshake() only calls yank_unregister_function(), not instance.

Indeed.  Sorry for the confusion.

Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 04/33] block/nbd: nbd_client_handshake(): fix leak of s->ioc
  2021-04-16  8:08 ` [PATCH v3 04/33] block/nbd: nbd_client_handshake(): fix leak of s->ioc Vladimir Sementsov-Ogievskiy
@ 2021-04-22  8:59   ` Roman Kagan
  0 siblings, 0 replies; 121+ messages in thread
From: Roman Kagan @ 2021-04-22  8:59 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:42AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 2 ++
>  1 file changed, 2 insertions(+)

Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-04-16  8:08 ` [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx Vladimir Sementsov-Ogievskiy
@ 2021-04-23 10:09   ` Roman Kagan
  2021-04-26  8:52     ` Vladimir Sementsov-Ogievskiy
  2021-05-12  6:56   ` Paolo Bonzini
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-04-23 10:09 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, Fam Zheng, qemu-block, qemu-devel, mreitz, Stefan Hajnoczi, den

On Fri, Apr 16, 2021 at 11:08:44AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> With the following patch we want to wake a coroutine from a thread.
> And it doesn't work with aio_co_wake():
> Assume we have no iothreads.
> Assume we have a coroutine A, which waits at a yield point for an
> external aio_co_wake(), and no progress can be done until it happens.
> The main thread is in a blocking aio_poll() (for example, in blk_read()).
> 
> Now, in a separate thread we do aio_co_wake(). It calls aio_co_enter(),
> which goes through the last "else" branch and does aio_context_acquire(ctx).
> 
> Now we have a deadlock, as aio_poll() will not release the context lock
> until some progress is done, and progress can't be done until
> aio_co_wake() wakes the coroutine A. And it can't, because it waits for
> aio_context_acquire().
> 
> Still, aio_co_schedule() works well in parallel with a blocking
> aio_poll(). So we want to use it. Let's add a possibility of
> rescheduling a coroutine in the same ctx where it yielded.
> 
> Fetch co->ctx in the same way as it is done in aio_co_wake().
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/aio.h | 2 +-
>  util/async.c        | 8 ++++++++
>  2 files changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 5f342267d5..744b695525 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -643,7 +643,7 @@ static inline bool aio_node_check(AioContext *ctx, bool is_external)
>  
>  /**
>   * aio_co_schedule:
> - * @ctx: the aio context
> + * @ctx: the aio context, if NULL, the current ctx of @co will be used.
>   * @co: the coroutine
>   *
>   * Start a coroutine on a remote AioContext.
> diff --git a/util/async.c b/util/async.c
> index 674dbefb7c..750be555c6 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -545,6 +545,14 @@ fail:
>  
>  void aio_co_schedule(AioContext *ctx, Coroutine *co)
>  {
> +    if (!ctx) {
> +        /*
> +         * Read coroutine before co->ctx.  Matches smp_wmb in
> +         * qemu_coroutine_enter.
> +         */
> +        smp_read_barrier_depends();
> +        ctx = qatomic_read(&co->ctx);
> +    }

I'd rather not extend this interface, but add a new one on top.  And
document how it's different from aio_co_wake().

Roman.

>      trace_aio_co_schedule(ctx, co);
>      const char *scheduled = qatomic_cmpxchg(&co->scheduled, NULL,
>                                             __func__);
> -- 
> 2.29.2
> 


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-04-23 10:09   ` Roman Kagan
@ 2021-04-26  8:52     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-26  8:52 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den,
	Stefan Hajnoczi, Fam Zheng

23.04.2021 13:09, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:44AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> With the following patch we want to wake a coroutine from a thread.
>> And it doesn't work with aio_co_wake():
>> Assume we have no iothreads.
>> Assume we have a coroutine A, which waits at a yield point for an
>> external aio_co_wake(), and no progress can be done until it happens.
>> The main thread is in a blocking aio_poll() (for example, in blk_read()).
>>
>> Now, in a separate thread we do aio_co_wake(). It calls aio_co_enter(),
>> which goes through the last "else" branch and does aio_context_acquire(ctx).
>>
>> Now we have a deadlock, as aio_poll() will not release the context lock
>> until some progress is done, and progress can't be done until
>> aio_co_wake() wakes the coroutine A. And it can't, because it waits for
>> aio_context_acquire().
>>
>> Still, aio_co_schedule() works well in parallel with a blocking
>> aio_poll(). So we want to use it. Let's add a possibility of
>> rescheduling a coroutine in the same ctx where it yielded.
>>
>> Fetch co->ctx in the same way as it is done in aio_co_wake().
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/aio.h | 2 +-
>>   util/async.c        | 8 ++++++++
>>   2 files changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/block/aio.h b/include/block/aio.h
>> index 5f342267d5..744b695525 100644
>> --- a/include/block/aio.h
>> +++ b/include/block/aio.h
>> @@ -643,7 +643,7 @@ static inline bool aio_node_check(AioContext *ctx, bool is_external)
>>   
>>   /**
>>    * aio_co_schedule:
>> - * @ctx: the aio context
>> + * @ctx: the aio context, if NULL, the current ctx of @co will be used.
>>    * @co: the coroutine
>>    *
>>    * Start a coroutine on a remote AioContext.
>> diff --git a/util/async.c b/util/async.c
>> index 674dbefb7c..750be555c6 100644
>> --- a/util/async.c
>> +++ b/util/async.c
>> @@ -545,6 +545,14 @@ fail:
>>   
>>   void aio_co_schedule(AioContext *ctx, Coroutine *co)
>>   {
>> +    if (!ctx) {
>> +        /*
>> +         * Read coroutine before co->ctx.  Matches smp_wmb in
>> +         * qemu_coroutine_enter.
>> +         */
>> +        smp_read_barrier_depends();
>> +        ctx = qatomic_read(&co->ctx);
>> +    }
> 
> I'd rather not extend this interface, but add a new one on top.  And
> document how it's different from aio_co_wake().
> 

Agree, that's better. Will do.
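
Something like this, then, as a separate entry point on top of an
unchanged aio_co_schedule() (the name is just a placeholder):

    /* Schedule @co in the AioContext it was running in when it yielded */
    void aio_co_schedule_in_own_ctx(Coroutine *co)
    {
        /*
         * Read coroutine before co->ctx.  Matches smp_wmb in
         * qemu_coroutine_enter.
         */
        smp_read_barrier_depends();
        aio_co_schedule(qatomic_read(&co->ctx), co);
    }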



-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection()
  2021-04-16  8:08 ` [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
@ 2021-04-27 21:44   ` Roman Kagan
  2021-06-02 19:05   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Roman Kagan @ 2021-04-27 21:44 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:45AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Instead of the connect_bh, bh_ctx and wait_connect fields we can live
> with only one link to the waiting coroutine, protected by a mutex.
> 
> So the new logic is:
> 
> nbd_co_establish_connection() sets wait_co under the mutex, releases
> the mutex and yields. Note that wait_co may be scheduled by the thread
> immediately after unlocking the mutex. Still, in the main thread (or
> iothread) we'll not reach the code for entering the coroutine until
> the yield(), so we are safe.
> 
> Both connect_thread_func() and nbd_co_establish_connection_cancel() do
> the following to handle wait_co:
> 
> Under the mutex, if thr->wait_co is not NULL, call aio_co_wake()
> (which never tries to acquire the aio context since the previous
> commit, so we are safe to do it under thr->mutex) and set
> thr->wait_co to NULL. This way we protect ourselves from scheduling
> it twice.
> 
> Also, this commit makes nbd_co_establish_connection() less tied to bs
> (we have a generic pointer to the coroutine and don't use
> s->connection_co directly). So, we are on the way to splitting the
> connection API out of nbd.c (which is overcomplicated now).
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 49 +++++++++----------------------------------------
>  1 file changed, 9 insertions(+), 40 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index d67556c7ee..e1f39eda6c 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -87,12 +87,6 @@ typedef enum NBDConnectThreadState {
>  typedef struct NBDConnectThread {
>      /* Initialization constants */
>      SocketAddress *saddr; /* address to connect to */
> -    /*
> -     * Bottom half to schedule on completion. Scheduled only if bh_ctx is not
> -     * NULL
> -     */
> -    QEMUBHFunc *bh_func;
> -    void *bh_opaque;
>  
>      /*
>       * Result of last attempt. Valid in FAIL and SUCCESS states.
> @@ -101,10 +95,10 @@ typedef struct NBDConnectThread {
>      QIOChannelSocket *sioc;
>      Error *err;
>  
> -    /* state and bh_ctx are protected by mutex */
>      QemuMutex mutex;
> +    /* All further fields are protected by mutex */
>      NBDConnectThreadState state; /* current state of the thread */
> -    AioContext *bh_ctx; /* where to schedule bh (NULL means don't schedule) */
> +    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>  } NBDConnectThread;
>  
>  typedef struct BDRVNBDState {
> @@ -138,7 +132,6 @@ typedef struct BDRVNBDState {
>      char *x_dirty_bitmap;
>      bool alloc_depth;
>  
> -    bool wait_connect;
>      NBDConnectThread *connect_thread;
>  } BDRVNBDState;
>  
> @@ -374,15 +367,6 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
>      return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
>  }
>  
> -static void connect_bh(void *opaque)
> -{
> -    BDRVNBDState *state = opaque;
> -
> -    assert(state->wait_connect);
> -    state->wait_connect = false;
> -    aio_co_wake(state->connection_co);
> -}
> -
>  static void nbd_init_connect_thread(BDRVNBDState *s)
>  {
>      s->connect_thread = g_new(NBDConnectThread, 1);
> @@ -390,8 +374,6 @@ static void nbd_init_connect_thread(BDRVNBDState *s)
>      *s->connect_thread = (NBDConnectThread) {
>          .saddr = QAPI_CLONE(SocketAddress, s->saddr),
>          .state = CONNECT_THREAD_NONE,
> -        .bh_func = connect_bh,
> -        .bh_opaque = s,
>      };
>  
>      qemu_mutex_init(&s->connect_thread->mutex);
> @@ -429,11 +411,9 @@ static void *connect_thread_func(void *opaque)
>      switch (thr->state) {
>      case CONNECT_THREAD_RUNNING:
>          thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
> -        if (thr->bh_ctx) {
> -            aio_bh_schedule_oneshot(thr->bh_ctx, thr->bh_func, thr->bh_opaque);
> -
> -            /* play safe, don't reuse bh_ctx on further connection attempts */
> -            thr->bh_ctx = NULL;
> +        if (thr->wait_co) {
> +            aio_co_schedule(NULL, thr->wait_co);
> +            thr->wait_co = NULL;
>          }
>          break;
>      case CONNECT_THREAD_RUNNING_DETACHED:
> @@ -487,20 +467,14 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>          abort();
>      }
>  
> -    thr->bh_ctx = qemu_get_current_aio_context();
> +    thr->wait_co = qemu_coroutine_self();
>  
>      qemu_mutex_unlock(&thr->mutex);
>  
> -
>      /*
>       * We are going to wait for connect-thread finish, but
>       * nbd_client_co_drain_begin() can interrupt.
> -     *
> -     * Note that wait_connect variable is not visible for connect-thread. It
> -     * doesn't need mutex protection, it used only inside home aio context of
> -     * bs.
>       */
> -    s->wait_connect = true;
>      qemu_coroutine_yield();
>  
>      qemu_mutex_lock(&thr->mutex);
> @@ -555,24 +529,19 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
>  {
>      BDRVNBDState *s = bs->opaque;
>      NBDConnectThread *thr = s->connect_thread;
> -    bool wake = false;
>  
>      qemu_mutex_lock(&thr->mutex);
>  
>      if (thr->state == CONNECT_THREAD_RUNNING) {

This check looks redundant and can probably go.  This patch may be quite
an appropriate place for that: the logic becomes even more straightforward.

>          /* We can cancel only in running state, when bh is not yet scheduled */
> -        thr->bh_ctx = NULL;
> -        if (s->wait_connect) {
> -            s->wait_connect = false;
> -            wake = true;
> +        if (thr->wait_co) {
> +            aio_co_schedule(NULL, thr->wait_co);

This will probably be replaced by a new function per our discussion of
the previous patch.

Note, however, that if that one doesn't fly for whatever reason, you can
retain the aio_context on NBDConnectThread and pass it explicitly into
aio_co_schedule() here.

Roman.

> +            thr->wait_co = NULL;
>          }
>      }
>  
>      qemu_mutex_unlock(&thr->mutex);
>  
> -    if (wake) {
> -        aio_co_wake(s->connection_co);
> -    }
>  }
>  
>  static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 08/33] block/nbd: drop thr->state
  2021-04-16  8:08 ` [PATCH v3 08/33] block/nbd: drop thr->state Vladimir Sementsov-Ogievskiy
@ 2021-04-27 22:23   ` Roman Kagan
  2021-04-28  8:01     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-04-27 22:23 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:46AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We don't need all these states. The code refactored to use two boolean
> variables looks simpler.

Indeed.

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 125 ++++++++++++++--------------------------------------
>  1 file changed, 34 insertions(+), 91 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index e1f39eda6c..2b26a033a4 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -66,24 +66,6 @@ typedef enum NBDClientState {
>      NBD_CLIENT_QUIT
>  } NBDClientState;
>  
> -typedef enum NBDConnectThreadState {
> -    /* No thread, no pending results */
> -    CONNECT_THREAD_NONE,
> -
> -    /* Thread is running, no results for now */
> -    CONNECT_THREAD_RUNNING,
> -
> -    /*
> -     * Thread is running, but requestor exited. Thread should close
> -     * the new socket and free the connect state on exit.
> -     */
> -    CONNECT_THREAD_RUNNING_DETACHED,
> -
> -    /* Thread finished, results are stored in a state */
> -    CONNECT_THREAD_FAIL,
> -    CONNECT_THREAD_SUCCESS
> -} NBDConnectThreadState;
> -
>  typedef struct NBDConnectThread {
>      /* Initialization constants */
>      SocketAddress *saddr; /* address to connect to */
> @@ -97,7 +79,8 @@ typedef struct NBDConnectThread {
>  
>      QemuMutex mutex;
>      /* All further fields are protected by mutex */
> -    NBDConnectThreadState state; /* current state of the thread */
> +    bool running; /* thread is running now */
> +    bool detached; /* thread is detached and should cleanup the state */
>      Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>  } NBDConnectThread;
>  
> @@ -147,17 +130,17 @@ static void nbd_clear_bdrvstate(BlockDriverState *bs)
>  {
>      BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
>      NBDConnectThread *thr = s->connect_thread;
> -    bool thr_running;
> +    bool do_free;
>  
>      qemu_mutex_lock(&thr->mutex);
> -    thr_running = thr->state == CONNECT_THREAD_RUNNING;
> -    if (thr_running) {
> -        thr->state = CONNECT_THREAD_RUNNING_DETACHED;
> +    if (thr->running) {
> +        thr->detached = true;
>      }
> +    do_free = !thr->running && !thr->detached;

This is redundant.  You can unconditionally set ->detached and only
depend on ->running for the rest of this function.
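
That is, something like this, with ->detached set unconditionally:

    qemu_mutex_lock(&thr->mutex);
    thr->detached = true;
    do_free = !thr->running;
    qemu_mutex_unlock(&thr->mutex);

    if (do_free) {
        nbd_free_connect_thread(thr);
    }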

>      qemu_mutex_unlock(&thr->mutex);
>  
>      /* the runaway thread will clean it up itself */
> -    if (!thr_running) {
> +    if (do_free) {
>          nbd_free_connect_thread(thr);
>      }
>  
> @@ -373,7 +356,6 @@ static void nbd_init_connect_thread(BDRVNBDState *s)
>  
>      *s->connect_thread = (NBDConnectThread) {
>          .saddr = QAPI_CLONE(SocketAddress, s->saddr),
> -        .state = CONNECT_THREAD_NONE,
>      };
>  
>      qemu_mutex_init(&s->connect_thread->mutex);
> @@ -408,20 +390,13 @@ static void *connect_thread_func(void *opaque)
>  
>      qemu_mutex_lock(&thr->mutex);
>  
> -    switch (thr->state) {
> -    case CONNECT_THREAD_RUNNING:
> -        thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
> -        if (thr->wait_co) {
> -            aio_co_schedule(NULL, thr->wait_co);
> -            thr->wait_co = NULL;
> -        }
> -        break;
> -    case CONNECT_THREAD_RUNNING_DETACHED:
> -        do_free = true;
> -        break;
> -    default:
> -        abort();
> +    assert(thr->running);
> +    thr->running = false;
> +    if (thr->wait_co) {
> +        aio_co_schedule(NULL, thr->wait_co);
> +        thr->wait_co = NULL;
>      }
> +    do_free = thr->detached;
>  
>      qemu_mutex_unlock(&thr->mutex);
>  
> @@ -435,36 +410,24 @@ static void *connect_thread_func(void *opaque)
>  static int coroutine_fn
>  nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>  {
> -    int ret;
>      QemuThread thread;
>      BDRVNBDState *s = bs->opaque;
>      NBDConnectThread *thr = s->connect_thread;
>  
> +    assert(!s->sioc);
> +
>      qemu_mutex_lock(&thr->mutex);
>  
> -    switch (thr->state) {
> -    case CONNECT_THREAD_FAIL:
> -    case CONNECT_THREAD_NONE:
> +    if (!thr->running) {
> +        if (thr->sioc) {
> +            /* Previous attempt finally succeeded in background */
> +            goto out;
> +        }
> +        thr->running = true;
>          error_free(thr->err);
>          thr->err = NULL;
> -        thr->state = CONNECT_THREAD_RUNNING;
>          qemu_thread_create(&thread, "nbd-connect",
>                             connect_thread_func, thr, QEMU_THREAD_DETACHED);
> -        break;
> -    case CONNECT_THREAD_SUCCESS:
> -        /* Previous attempt finally succeeded in background */
> -        thr->state = CONNECT_THREAD_NONE;
> -        s->sioc = thr->sioc;
> -        thr->sioc = NULL;
> -        yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
> -                               nbd_yank, bs);
> -        qemu_mutex_unlock(&thr->mutex);
> -        return 0;
> -    case CONNECT_THREAD_RUNNING:
> -        /* Already running, will wait */
> -        break;
> -    default:
> -        abort();
>      }
>  
>      thr->wait_co = qemu_coroutine_self();
> @@ -479,10 +442,15 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>  
>      qemu_mutex_lock(&thr->mutex);
>  
> -    switch (thr->state) {
> -    case CONNECT_THREAD_SUCCESS:
> -    case CONNECT_THREAD_FAIL:
> -        thr->state = CONNECT_THREAD_NONE;
> +out:
> +    if (thr->running) {
> +        /*
> +         * Obviously, drained section wants to start. Report the attempt as
> +         * failed. Still connect thread is executing in background, and its
> +         * result may be used for next connection attempt.
> +         */

I must admit this wasn't quite obvious to me when I started looking at
this code.  While you're moving this comment, can you please consider
rephrasing it?  How about this:

        /*
         * The connection attempt was canceled and the coroutine resumed
         * before the connection thread finished its job.  Report the
         * attempt as failed, but leave the connection thread running,
         * to reuse it for the next connection attempt.
         */

> +        error_setg(errp, "Connection attempt cancelled by other operation");
> +    } else {
>          error_propagate(errp, thr->err);
>          thr->err = NULL;
>          s->sioc = thr->sioc;
> @@ -491,33 +459,11 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>              yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
>                                     nbd_yank, bs);
>          }
> -        ret = (s->sioc ? 0 : -1);
> -        break;
> -    case CONNECT_THREAD_RUNNING:
> -    case CONNECT_THREAD_RUNNING_DETACHED:
> -        /*
> -         * Obviously, drained section wants to start. Report the attempt as
> -         * failed. Still connect thread is executing in background, and its
> -         * result may be used for next connection attempt.
> -         */
> -        ret = -1;
> -        error_setg(errp, "Connection attempt cancelled by other operation");
> -        break;
> -
> -    case CONNECT_THREAD_NONE:
> -        /*
> -         * Impossible. We've seen this thread running. So it should be
> -         * running or at least give some results.
> -         */
> -        abort();
> -
> -    default:
> -        abort();
>      }
>  
>      qemu_mutex_unlock(&thr->mutex);
>  
> -    return ret;
> +    return s->sioc ? 0 : -1;
>  }
>  
>  /*
> @@ -532,12 +478,9 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
>  
>      qemu_mutex_lock(&thr->mutex);
>  
> -    if (thr->state == CONNECT_THREAD_RUNNING) {
> -        /* We can cancel only in running state, when bh is not yet scheduled */
> -        if (thr->wait_co) {
> -            aio_co_schedule(NULL, thr->wait_co);
> -            thr->wait_co = NULL;
> -        }
> +    if (thr->wait_co) {
> +        aio_co_schedule(NULL, thr->wait_co);
> +        thr->wait_co = NULL;

Ah, so you get rid of this redundant check in this patch.  Fine by me;
please disregard my comment on this matter to the previous patch, then.

Roman.
>      }
>  
>      qemu_mutex_unlock(&thr->mutex);
> -- 
> 2.29.2
> 



* Re: [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection
  2021-04-16  8:08 ` [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection Vladimir Sementsov-Ogievskiy
@ 2021-04-27 22:28   ` Roman Kagan
  2021-06-02 21:21   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Roman Kagan @ 2021-04-27 22:28 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:49AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are going to move the connection code to its own file and want clear
> names and APIs.
> 
> The structure is shared between the user and (possibly) several runs of
> the connect thread, so it's wrong to call it a "thread". Let's rename it
> to something more generic.
> 
> Accordingly, rename the connect_thread and thr variables to conn.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 137 ++++++++++++++++++++++++++--------------------------
>  1 file changed, 68 insertions(+), 69 deletions(-)

Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>



* Re: [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release()
  2021-04-16  8:08 ` [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release() Vladimir Sementsov-Ogievskiy
@ 2021-04-27 22:35   ` Roman Kagan
  2021-04-28  8:06     ` Vladimir Sementsov-Ogievskiy
  2021-06-02 21:27   ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-04-27 22:35 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:51AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 43 ++++++++++++++++++++++++++-----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index 21a4039359..8531d019b2 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -118,7 +118,7 @@ typedef struct BDRVNBDState {
>      NBDClientConnection *conn;
>  } BDRVNBDState;
>  
> -static void nbd_free_connect_thread(NBDClientConnection *conn);
> +static void nbd_client_connection_release(NBDClientConnection *conn);
>  static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
>                                      Error **errp);
>  static coroutine_fn QIOChannelSocket *
> @@ -130,20 +130,9 @@ static void nbd_yank(void *opaque);
>  static void nbd_clear_bdrvstate(BlockDriverState *bs)
>  {
>      BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
> -    NBDClientConnection *conn = s->conn;
> -    bool do_free;
> -
> -    qemu_mutex_lock(&conn->mutex);
> -    if (conn->running) {
> -        conn->detached = true;
> -    }
> -    do_free = !conn->running && !conn->detached;
> -    qemu_mutex_unlock(&conn->mutex);
>  
> -    /* the runaway thread will clean it up itself */
> -    if (do_free) {
> -        nbd_free_connect_thread(conn);
> -    }
> +    nbd_client_connection_release(s->conn);
> +    s->conn = NULL;
>  
>      yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
>  
> @@ -365,7 +354,7 @@ nbd_client_connection_new(const SocketAddress *saddr)
>      return conn;
>  }
>  
> -static void nbd_free_connect_thread(NBDClientConnection *conn)
> +static void nbd_client_connection_do_free(NBDClientConnection *conn)
>  {
>      if (conn->sioc) {
>          qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
> @@ -379,8 +368,8 @@ static void nbd_free_connect_thread(NBDClientConnection *conn)
>  static void *connect_thread_func(void *opaque)
>  {
>      NBDClientConnection *conn = opaque;
> +    bool do_free;
>      int ret;
> -    bool do_free = false;
>  

This hunk belongs in patch 8.

Roman.

>      conn->sioc = qio_channel_socket_new();
>  
> @@ -405,12 +394,32 @@ static void *connect_thread_func(void *opaque)
>      qemu_mutex_unlock(&conn->mutex);
>  
>      if (do_free) {
> -        nbd_free_connect_thread(conn);
> +        nbd_client_connection_do_free(conn);
>      }
>  
>      return NULL;
>  }
>  
> +static void nbd_client_connection_release(NBDClientConnection *conn)
> +{
> +    bool do_free;
> +
> +    if (!conn) {
> +        return;
> +    }
> +
> +    qemu_mutex_lock(&conn->mutex);
> +    if (conn->running) {
> +        conn->detached = true;
> +    }
> +    do_free = !conn->running && !conn->detached;
> +    qemu_mutex_unlock(&conn->mutex);
> +
> +    if (do_free) {
> +        nbd_client_connection_do_free(conn);
> +    }
> +}
> +
>  /*
>   * Get a new connection in context of @conn:
>   *   if thread is running, wait for completion
> -- 
> 2.29.2
> 



* Re: [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection
  2021-04-16  8:08 ` [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection Vladimir Sementsov-Ogievskiy
@ 2021-04-27 22:45   ` Roman Kagan
  2021-04-28  8:14     ` Vladimir Sementsov-Ogievskiy
  2021-06-03 15:55   ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-04-27 22:45 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:52AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We now have a bs-independent connection API, which consists of four
> functions:
> 
>   nbd_client_connection_new()
>   nbd_client_connection_release()
>   nbd_co_establish_connection()
>   nbd_co_establish_connection_cancel()
> 
> Move them to a separate file together with the NBDClientConnection
> structure, which becomes private to the new API.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h     |  11 +++
>  block/nbd.c             | 187 -----------------------------------
>  nbd/client-connection.c | 212 ++++++++++++++++++++++++++++++++++++++++
>  nbd/meson.build         |   1 +
>  4 files changed, 224 insertions(+), 187 deletions(-)
>  create mode 100644 nbd/client-connection.c
> 
> diff --git a/include/block/nbd.h b/include/block/nbd.h
> index 5f34d23bb0..57381be76f 100644
> --- a/include/block/nbd.h
> +++ b/include/block/nbd.h
> @@ -406,4 +406,15 @@ const char *nbd_info_lookup(uint16_t info);
>  const char *nbd_cmd_lookup(uint16_t info);
>  const char *nbd_err_lookup(int err);
>  
> +/* nbd/client-connection.c */
> +typedef struct NBDClientConnection NBDClientConnection;
> +
> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
> +void nbd_client_connection_release(NBDClientConnection *conn);
> +
> +QIOChannelSocket *coroutine_fn
> +nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
> +
> +void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
> +
>  #endif
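
For orientation, a minimal usage sketch of this API, based only on the
declarations above; saddr and errp are assumed to come from the caller:

    NBDClientConnection *conn = nbd_client_connection_new(saddr);

    /* ... later, from a coroutine: */
    QIOChannelSocket *sioc = nbd_co_establish_connection(conn, errp);
    if (sioc) {
        /* connected: the caller now owns sioc */
    }

    /* from elsewhere, e.g. on drain, to wake the waiter early: */
    nbd_co_establish_connection_cancel(conn);

    /* final cleanup; a still-running thread frees the state itself: */
    nbd_client_connection_release(conn);
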
> diff --git a/block/nbd.c b/block/nbd.c
> index 8531d019b2..9bd68dcf10 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -66,24 +66,6 @@ typedef enum NBDClientState {
>      NBD_CLIENT_QUIT
>  } NBDClientState;
>  
> -typedef struct NBDClientConnection {
> -    /* Initialization constants */
> -    SocketAddress *saddr; /* address to connect to */
> -
> -    /*
> -     * Result of last attempt. Valid in FAIL and SUCCESS states.
> -     * If you want to steal error, don't forget to set pointer to NULL.
> -     */
> -    QIOChannelSocket *sioc;
> -    Error *err;
> -
> -    QemuMutex mutex;
> -    /* All further fields are protected by mutex */
> -    bool running; /* thread is running now */
> -    bool detached; /* thread is detached and should cleanup the state */
> -    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
> -} NBDClientConnection;
> -
>  typedef struct BDRVNBDState {
>      QIOChannelSocket *sioc; /* The master data channel */
>      QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
> @@ -118,12 +100,8 @@ typedef struct BDRVNBDState {
>      NBDClientConnection *conn;
>  } BDRVNBDState;
>  
> -static void nbd_client_connection_release(NBDClientConnection *conn);
>  static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
>                                      Error **errp);
> -static coroutine_fn QIOChannelSocket *
> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
> -static void nbd_co_establish_connection_cancel(NBDClientConnection *conn);
>  static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
>  static void nbd_yank(void *opaque);
>  
> @@ -340,171 +318,6 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
>      return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
>  }
>  
> -static NBDClientConnection *
> -nbd_client_connection_new(const SocketAddress *saddr)
> -{
> -    NBDClientConnection *conn = g_new(NBDClientConnection, 1);
> -
> -    *conn = (NBDClientConnection) {
> -        .saddr = QAPI_CLONE(SocketAddress, saddr),
> -    };
> -
> -    qemu_mutex_init(&conn->mutex);
> -
> -    return conn;
> -}
> -
> -static void nbd_client_connection_do_free(NBDClientConnection *conn)
> -{
> -    if (conn->sioc) {
> -        qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
> -        object_unref(OBJECT(conn->sioc));
> -    }
> -    error_free(conn->err);
> -    qapi_free_SocketAddress(conn->saddr);
> -    g_free(conn);
> -}
> -
> -static void *connect_thread_func(void *opaque)
> -{
> -    NBDClientConnection *conn = opaque;
> -    bool do_free;
> -    int ret;
> -
> -    conn->sioc = qio_channel_socket_new();
> -
> -    error_free(conn->err);
> -    conn->err = NULL;
> -    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
> -    if (ret < 0) {
> -        object_unref(OBJECT(conn->sioc));
> -        conn->sioc = NULL;
> -    }
> -
> -    qemu_mutex_lock(&conn->mutex);
> -
> -    assert(conn->running);
> -    conn->running = false;
> -    if (conn->wait_co) {
> -        aio_co_schedule(NULL, conn->wait_co);
> -        conn->wait_co = NULL;
> -    }
> -    do_free = conn->detached;
> -
> -    qemu_mutex_unlock(&conn->mutex);
> -
> -    if (do_free) {
> -        nbd_client_connection_do_free(conn);
> -    }
> -
> -    return NULL;
> -}
> -
> -static void nbd_client_connection_release(NBDClientConnection *conn)
> -{
> -    bool do_free;
> -
> -    if (!conn) {
> -        return;
> -    }
> -
> -    qemu_mutex_lock(&conn->mutex);
> -    if (conn->running) {
> -        conn->detached = true;
> -    }
> -    do_free = !conn->running && !conn->detached;
> -    qemu_mutex_unlock(&conn->mutex);
> -
> -    if (do_free) {
> -        nbd_client_connection_do_free(conn);
> -    }
> -}
> -
> -/*
> - * Get a new connection in context of @conn:
> - *   if thread is running, wait for completion
> - *   if thread is already succeeded in background, and user didn't get the
> - *     result, just return it now
> - *   otherwise if thread is not running, start a thread and wait for completion
> - */
> -static coroutine_fn QIOChannelSocket *
> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
> -{
> -    QIOChannelSocket *sioc = NULL;
> -    QemuThread thread;
> -
> -    qemu_mutex_lock(&conn->mutex);
> -
> -    /*
> -     * Don't call nbd_co_establish_connection() in several coroutines in
> -     * parallel. Only one call at once is supported.
> -     */
> -    assert(!conn->wait_co);
> -
> -    if (!conn->running) {
> -        if (conn->sioc) {
> -            /* Previous attempt finally succeeded in background */
> -            sioc = g_steal_pointer(&conn->sioc);
> -            qemu_mutex_unlock(&conn->mutex);
> -
> -            return sioc;
> -        }
> -
> -        conn->running = true;
> -        error_free(conn->err);
> -        conn->err = NULL;
> -        qemu_thread_create(&thread, "nbd-connect",
> -                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
> -    }
> -
> -    conn->wait_co = qemu_coroutine_self();
> -
> -    qemu_mutex_unlock(&conn->mutex);
> -
> -    /*
> -     * We are going to wait for connect-thread finish, but
> -     * nbd_co_establish_connection_cancel() can interrupt.
> -     */
> -    qemu_coroutine_yield();
> -
> -    qemu_mutex_lock(&conn->mutex);
> -
> -    if (conn->running) {
> -        /*
> -         * Obviously, drained section wants to start. Report the attempt as
> -         * failed. Still connect thread is executing in background, and its
> -         * result may be used for next connection attempt.
> -         */
> -        error_setg(errp, "Connection attempt cancelled by other operation");
> -    } else {
> -        error_propagate(errp, conn->err);
> -        conn->err = NULL;
> -        sioc = g_steal_pointer(&conn->sioc);
> -    }
> -
> -    qemu_mutex_unlock(&conn->mutex);
> -
> -    return sioc;
> -}
> -
> -/*
> - * nbd_co_establish_connection_cancel
> - * Cancel nbd_co_establish_connection() asynchronously. Note, that it doesn't
> - * stop the thread itself neither close the socket. It just safely wakes
> - * nbd_co_establish_connection() sleeping in the yield().
> - */
> -static void nbd_co_establish_connection_cancel(NBDClientConnection *conn)
> -{
> -    qemu_mutex_lock(&conn->mutex);
> -
> -    if (conn->wait_co) {
> -        aio_co_schedule(NULL, conn->wait_co);
> -        conn->wait_co = NULL;
> -    }
> -
> -    qemu_mutex_unlock(&conn->mutex);
> -}
> -
>  static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>  {
>      int ret;
> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
> new file mode 100644
> index 0000000000..4e39a5b1af
> --- /dev/null
> +++ b/nbd/client-connection.c
> @@ -0,0 +1,212 @@
> +/*
> + * QEMU Block driver for  NBD
> + *
> + * Copyright (c) 2021 Virtuozzo International GmbH.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to deal
> + * in the Software without restriction, including without limitation the rights
> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + * copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + * THE SOFTWARE.
> + */
> +
> +#include "qemu/osdep.h"
> +
> +#include "block/nbd.h"
> +
> +#include "qapi/qapi-visit-sockets.h"
> +#include "qapi/clone-visitor.h"
> +
> +struct NBDClientConnection {
> +    /* Initialization constants */
> +    SocketAddress *saddr; /* address to connect to */
> +
> +    /*
> +     * Result of last attempt. Valid in FAIL and SUCCESS states.
> +     * If you want to steal error, don't forget to set pointer to NULL.
> +     */
> +    QIOChannelSocket *sioc;
> +    Error *err;

These two are also manipulated under the mutex.  Consider also updating
the comment: both these pointers are to be "stolen" by the caller, with
the former being valid when the connection succeeds and the latter
otherwise.
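
One possible shape for the updated comment, with the mutex moved above
the result fields, could be:

    QemuMutex mutex;
    /*
     * All further fields are protected by mutex.
     * sioc and err hold the result of the last attempt: on success the
     * user of the connection steals sioc, on failure err, resetting the
     * stolen pointer to NULL.
     */
    QIOChannelSocket *sioc;
    Error *err;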

Roman.

> +
> +    QemuMutex mutex;
> +    /* All further fields are protected by mutex */
> +    bool running; /* thread is running now */
> +    bool detached; /* thread is detached and should cleanup the state */
> +    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
> +};
> +
> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
> +{
> +    NBDClientConnection *conn = g_new(NBDClientConnection, 1);
> +
> +    *conn = (NBDClientConnection) {
> +        .saddr = QAPI_CLONE(SocketAddress, saddr),
> +    };
> +
> +    qemu_mutex_init(&conn->mutex);
> +
> +    return conn;
> +}
> +
> +static void nbd_client_connection_do_free(NBDClientConnection *conn)
> +{
> +    if (conn->sioc) {
> +        qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
> +        object_unref(OBJECT(conn->sioc));
> +    }
> +    error_free(conn->err);
> +    qapi_free_SocketAddress(conn->saddr);
> +    g_free(conn);
> +}
> +
> +static void *connect_thread_func(void *opaque)
> +{
> +    NBDClientConnection *conn = opaque;
> +    bool do_free;
> +    int ret;
> +
> +    conn->sioc = qio_channel_socket_new();
> +
> +    error_free(conn->err);
> +    conn->err = NULL;
> +    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
> +    if (ret < 0) {
> +        object_unref(OBJECT(conn->sioc));
> +        conn->sioc = NULL;
> +    }
> +
> +    qemu_mutex_lock(&conn->mutex);
> +
> +    assert(conn->running);
> +    conn->running = false;
> +    if (conn->wait_co) {
> +        aio_co_schedule(NULL, conn->wait_co);
> +        conn->wait_co = NULL;
> +    }
> +    do_free = conn->detached;
> +
> +    qemu_mutex_unlock(&conn->mutex);
> +
> +    if (do_free) {
> +        nbd_client_connection_do_free(conn);
> +    }
> +
> +    return NULL;
> +}
> +
> +void nbd_client_connection_release(NBDClientConnection *conn)
> +{
> +    bool do_free;
> +
> +    if (!conn) {
> +        return;
> +    }
> +
> +    qemu_mutex_lock(&conn->mutex);
> +    if (conn->running) {
> +        conn->detached = true;
> +    }
> +    do_free = !conn->running && !conn->detached;
> +    qemu_mutex_unlock(&conn->mutex);
> +
> +    if (do_free) {
> +        nbd_client_connection_do_free(conn);
> +    }
> +}
> +
> +/*
> + * Get a new connection in context of @conn:
> + *   if thread is running, wait for completion
> + *   if thread is already succeeded in background, and user didn't get the
> + *     result, just return it now
> + *   otherwise if thread is not running, start a thread and wait for completion
> + */
> +QIOChannelSocket *coroutine_fn
> +nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
> +{
> +    QIOChannelSocket *sioc = NULL;
> +    QemuThread thread;
> +
> +    qemu_mutex_lock(&conn->mutex);
> +
> +    /*
> +     * Don't call nbd_co_establish_connection() in several coroutines in
> +     * parallel. Only one call at once is supported.
> +     */
> +    assert(!conn->wait_co);
> +
> +    if (!conn->running) {
> +        if (conn->sioc) {
> +            /* Previous attempt finally succeeded in background */
> +            sioc = g_steal_pointer(&conn->sioc);
> +            qemu_mutex_unlock(&conn->mutex);
> +
> +            return sioc;
> +        }
> +
> +        conn->running = true;
> +        error_free(conn->err);
> +        conn->err = NULL;
> +        qemu_thread_create(&thread, "nbd-connect",
> +                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
> +    }
> +
> +    conn->wait_co = qemu_coroutine_self();
> +
> +    qemu_mutex_unlock(&conn->mutex);
> +
> +    /*
> +     * We are going to wait for connect-thread finish, but
> +     * nbd_co_establish_connection_cancel() can interrupt.
> +     */
> +    qemu_coroutine_yield();
> +
> +    qemu_mutex_lock(&conn->mutex);
> +
> +    if (conn->running) {
> +        /*
> +         * Obviously, drained section wants to start. Report the attempt as
> +         * failed. Still connect thread is executing in background, and its
> +         * result may be used for next connection attempt.
> +         */
> +        error_setg(errp, "Connection attempt cancelled by other operation");
> +    } else {
> +        error_propagate(errp, conn->err);
> +        conn->err = NULL;
> +        sioc = g_steal_pointer(&conn->sioc);
> +    }
> +
> +    qemu_mutex_unlock(&conn->mutex);
> +
> +    return sioc;
> +}
> +
> +/*
> + * nbd_co_establish_connection_cancel
> + * Cancel nbd_co_establish_connection() asynchronously. Note, that it doesn't
> + * stop the thread itself neither close the socket. It just safely wakes
> + * nbd_co_establish_connection() sleeping in the yield().
> + */
> +void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn)
> +{
> +    qemu_mutex_lock(&conn->mutex);
> +
> +    if (conn->wait_co) {
> +        aio_co_schedule(NULL, conn->wait_co);
> +        conn->wait_co = NULL;
> +    }
> +
> +    qemu_mutex_unlock(&conn->mutex);
> +}
> diff --git a/nbd/meson.build b/nbd/meson.build
> index 2baaa36948..b26d70565e 100644
> --- a/nbd/meson.build
> +++ b/nbd/meson.build
> @@ -1,5 +1,6 @@
>  block_ss.add(files(
>    'client.c',
> +  'client-connection.c',
>    'common.c',
>  ))
>  blockdev_ss.add(files(
> -- 
> 2.29.2
> 



* Re: [PATCH v3 15/33] nbd/client-connection: use QEMU_LOCK_GUARD
  2021-04-16  8:08 ` [PATCH v3 15/33] nbd/client-connection: use QEMU_LOCK_GUARD Vladimir Sementsov-Ogievskiy
@ 2021-04-28  6:08   ` Roman Kagan
  2021-04-28  8:17     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-04-28  6:08 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:53AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  nbd/client-connection.c | 94 ++++++++++++++++++-----------------------
>  1 file changed, 42 insertions(+), 52 deletions(-)
> 
> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
> index 4e39a5b1af..b45a0bd5f6 100644
> --- a/nbd/client-connection.c
> +++ b/nbd/client-connection.c
> @@ -87,17 +87,16 @@ static void *connect_thread_func(void *opaque)
>          conn->sioc = NULL;
>      }
>  
> -    qemu_mutex_lock(&conn->mutex);
> -
> -    assert(conn->running);
> -    conn->running = false;
> -    if (conn->wait_co) {
> -        aio_co_schedule(NULL, conn->wait_co);
> -        conn->wait_co = NULL;
> +    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
> +        assert(conn->running);
> +        conn->running = false;
> +        if (conn->wait_co) {
> +            aio_co_schedule(NULL, conn->wait_co);
> +            conn->wait_co = NULL;
> +        }
>      }
>      do_free = conn->detached;

->detached is now read outside the mutex: the "do_free = conn->detached;"
assignment ended up below the lock guard's scope.
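
One way to keep that read inside the critical section, for example:

    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
        assert(conn->running);
        conn->running = false;
        if (conn->wait_co) {
            aio_co_schedule(NULL, conn->wait_co);
            conn->wait_co = NULL;
        }
        do_free = conn->detached; /* read under the same lock */
    }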

>  
> -    qemu_mutex_unlock(&conn->mutex);
>  
>      if (do_free) {
>          nbd_client_connection_do_free(conn);
> @@ -136,61 +135,54 @@ void nbd_client_connection_release(NBDClientConnection *conn)
>  QIOChannelSocket *coroutine_fn
>  nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>  {
> -    QIOChannelSocket *sioc = NULL;
>      QemuThread thread;
>  
> -    qemu_mutex_lock(&conn->mutex);
> -
> -    /*
> -     * Don't call nbd_co_establish_connection() in several coroutines in
> -     * parallel. Only one call at once is supported.
> -     */
> -    assert(!conn->wait_co);
> -
> -    if (!conn->running) {
> -        if (conn->sioc) {
> -            /* Previous attempt finally succeeded in background */
> -            sioc = g_steal_pointer(&conn->sioc);
> -            qemu_mutex_unlock(&conn->mutex);
> -
> -            return sioc;
> +    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
> +        /*
> +         * Don't call nbd_co_establish_connection() in several coroutines in
> +         * parallel. Only one call at once is supported.
> +         */
> +        assert(!conn->wait_co);
> +
> +        if (!conn->running) {
> +            if (conn->sioc) {
> +                /* Previous attempt finally succeeded in background */
> +                return g_steal_pointer(&conn->sioc);
> +            }
> +
> +            conn->running = true;
> +            error_free(conn->err);
> +            conn->err = NULL;
> +            qemu_thread_create(&thread, "nbd-connect",
> +                               connect_thread_func, conn, QEMU_THREAD_DETACHED);
>          }
>  
> -        conn->running = true;
> -        error_free(conn->err);
> -        conn->err = NULL;
> -        qemu_thread_create(&thread, "nbd-connect",
> -                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
> +        conn->wait_co = qemu_coroutine_self();
>      }
>  
> -    conn->wait_co = qemu_coroutine_self();
> -
> -    qemu_mutex_unlock(&conn->mutex);
> -
>      /*
>       * We are going to wait for connect-thread finish, but
>       * nbd_co_establish_connection_cancel() can interrupt.
>       */
>      qemu_coroutine_yield();
>  
> -    qemu_mutex_lock(&conn->mutex);
> -
> -    if (conn->running) {
> -        /*
> -         * Obviously, drained section wants to start. Report the attempt as
> -         * failed. Still connect thread is executing in background, and its
> -         * result may be used for next connection attempt.
> -         */
> -        error_setg(errp, "Connection attempt cancelled by other operation");
> -    } else {
> -        error_propagate(errp, conn->err);
> -        conn->err = NULL;
> -        sioc = g_steal_pointer(&conn->sioc);
> +    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
> +        if (conn->running) {
> +            /*
> +             * Obviously, drained section wants to start. Report the attempt as
> +             * failed. Still connect thread is executing in background, and its
> +             * result may be used for next connection attempt.
> +             */
> +            error_setg(errp, "Connection attempt cancelled by other operation");
> +            return NULL;
> +        } else {
> +            error_propagate(errp, conn->err);
> +            conn->err = NULL;
> +            return g_steal_pointer(&conn->sioc);
> +        }
>      }
>  
> -    qemu_mutex_unlock(&conn->mutex);
> -
> -    return sioc;
> +    abort(); /* unreachable */
>  }
>  
>  /*
> @@ -201,12 +193,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>   */
>  void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn)
>  {
> -    qemu_mutex_lock(&conn->mutex);
> +    QEMU_LOCK_GUARD(&conn->mutex);
>  
>      if (conn->wait_co) {
>          aio_co_schedule(NULL, conn->wait_co);
>          conn->wait_co = NULL;
>      }
> -
> -    qemu_mutex_unlock(&conn->mutex);
>  }
> -- 
> 2.29.2
> 



* Re: [PATCH v3 08/33] block/nbd: drop thr->state
  2021-04-27 22:23   ` Roman Kagan
@ 2021-04-28  8:01     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-28  8:01 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

28.04.2021 01:23, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:46AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> We don't need all these states. The code refactored to use two boolean
>> variables looks simpler.
> 
> Indeed.
> 
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 125 ++++++++++++++--------------------------------------
>>   1 file changed, 34 insertions(+), 91 deletions(-)
>>
>> diff --git a/block/nbd.c b/block/nbd.c
>> index e1f39eda6c..2b26a033a4 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -66,24 +66,6 @@ typedef enum NBDClientState {
>>       NBD_CLIENT_QUIT
>>   } NBDClientState;
>>   
>> -typedef enum NBDConnectThreadState {
>> -    /* No thread, no pending results */
>> -    CONNECT_THREAD_NONE,
>> -
>> -    /* Thread is running, no results for now */
>> -    CONNECT_THREAD_RUNNING,
>> -
>> -    /*
>> -     * Thread is running, but requestor exited. Thread should close
>> -     * the new socket and free the connect state on exit.
>> -     */
>> -    CONNECT_THREAD_RUNNING_DETACHED,
>> -
>> -    /* Thread finished, results are stored in a state */
>> -    CONNECT_THREAD_FAIL,
>> -    CONNECT_THREAD_SUCCESS
>> -} NBDConnectThreadState;
>> -
>>   typedef struct NBDConnectThread {
>>       /* Initialization constants */
>>       SocketAddress *saddr; /* address to connect to */
>> @@ -97,7 +79,8 @@ typedef struct NBDConnectThread {
>>   
>>       QemuMutex mutex;
>>       /* All further fields are protected by mutex */
>> -    NBDConnectThreadState state; /* current state of the thread */
>> +    bool running; /* thread is running now */
>> +    bool detached; /* thread is detached and should cleanup the state */
>>       Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>>   } NBDConnectThread;
>>   
>> @@ -147,17 +130,17 @@ static void nbd_clear_bdrvstate(BlockDriverState *bs)
>>   {
>>       BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
>>       NBDConnectThread *thr = s->connect_thread;
>> -    bool thr_running;
>> +    bool do_free;
>>   
>>       qemu_mutex_lock(&thr->mutex);
>> -    thr_running = thr->state == CONNECT_THREAD_RUNNING;
>> -    if (thr_running) {
>> -        thr->state = CONNECT_THREAD_RUNNING_DETACHED;
>> +    if (thr->running) {
>> +        thr->detached = true;
>>       }
>> +    do_free = !thr->running && !thr->detached;
> 
> This is redundant.  You can unconditionally set ->detached and only
> depend on ->running for the rest of this function.

Still, I don't want to set ->detached unconditionally, so as not to break its meaning (even before freeing). And the fact that ->detached is set only once is worth an assertion. So, I think:

assert(!thr->detached);
if (thr->running) {
    thr->detached = true;
} else {
    do_free = true;
}

> 
>>       qemu_mutex_unlock(&thr->mutex);
>>   
>>       /* the runaway thread will clean it up itself */
>> -    if (!thr_running) {
>> +    if (do_free) {
>>           nbd_free_connect_thread(thr);
>>       }
>>   
>> @@ -373,7 +356,6 @@ static void nbd_init_connect_thread(BDRVNBDState *s)
>>   
>>       *s->connect_thread = (NBDConnectThread) {
>>           .saddr = QAPI_CLONE(SocketAddress, s->saddr),
>> -        .state = CONNECT_THREAD_NONE,
>>       };
>>   
>>       qemu_mutex_init(&s->connect_thread->mutex);
>> @@ -408,20 +390,13 @@ static void *connect_thread_func(void *opaque)
>>   
>>       qemu_mutex_lock(&thr->mutex);
>>   
>> -    switch (thr->state) {
>> -    case CONNECT_THREAD_RUNNING:
>> -        thr->state = ret < 0 ? CONNECT_THREAD_FAIL : CONNECT_THREAD_SUCCESS;
>> -        if (thr->wait_co) {
>> -            aio_co_schedule(NULL, thr->wait_co);
>> -            thr->wait_co = NULL;
>> -        }
>> -        break;
>> -    case CONNECT_THREAD_RUNNING_DETACHED:
>> -        do_free = true;
>> -        break;
>> -    default:
>> -        abort();
>> +    assert(thr->running);
>> +    thr->running = false;
>> +    if (thr->wait_co) {
>> +        aio_co_schedule(NULL, thr->wait_co);
>> +        thr->wait_co = NULL;
>>       }
>> +    do_free = thr->detached;
>>   
>>       qemu_mutex_unlock(&thr->mutex);
>>   
>> @@ -435,36 +410,24 @@ static void *connect_thread_func(void *opaque)
>>   static int coroutine_fn
>>   nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>>   {
>> -    int ret;
>>       QemuThread thread;
>>       BDRVNBDState *s = bs->opaque;
>>       NBDConnectThread *thr = s->connect_thread;
>>   
>> +    assert(!s->sioc);
>> +
>>       qemu_mutex_lock(&thr->mutex);
>>   
>> -    switch (thr->state) {
>> -    case CONNECT_THREAD_FAIL:
>> -    case CONNECT_THREAD_NONE:
>> +    if (!thr->running) {
>> +        if (thr->sioc) {
>> +            /* Previous attempt finally succeeded in background */
>> +            goto out;
>> +        }
>> +        thr->running = true;
>>           error_free(thr->err);
>>           thr->err = NULL;
>> -        thr->state = CONNECT_THREAD_RUNNING;
>>           qemu_thread_create(&thread, "nbd-connect",
>>                              connect_thread_func, thr, QEMU_THREAD_DETACHED);
>> -        break;
>> -    case CONNECT_THREAD_SUCCESS:
>> -        /* Previous attempt finally succeeded in background */
>> -        thr->state = CONNECT_THREAD_NONE;
>> -        s->sioc = thr->sioc;
>> -        thr->sioc = NULL;
>> -        yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
>> -                               nbd_yank, bs);
>> -        qemu_mutex_unlock(&thr->mutex);
>> -        return 0;
>> -    case CONNECT_THREAD_RUNNING:
>> -        /* Already running, will wait */
>> -        break;
>> -    default:
>> -        abort();
>>       }
>>   
>>       thr->wait_co = qemu_coroutine_self();
>> @@ -479,10 +442,15 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>>   
>>       qemu_mutex_lock(&thr->mutex);
>>   
>> -    switch (thr->state) {
>> -    case CONNECT_THREAD_SUCCESS:
>> -    case CONNECT_THREAD_FAIL:
>> -        thr->state = CONNECT_THREAD_NONE;
>> +out:
>> +    if (thr->running) {
>> +        /*
>> +         * Obviously, drained section wants to start. Report the attempt as
>> +         * failed. Still connect thread is executing in background, and its
>> +         * result may be used for next connection attempt.
>> +         */
> 
> I must admit this wasn't quite obvious to me when I started looking at
> this code.  While you're moving this comment, can you please consider
> rephrasing it?  How about this:
> 
>         /*
>          * The connection attempt was canceled and the coroutine resumed
>          * before the connection thread finished its job.  Report the
>          * attempt as failed, but leave the connection thread running,
>          * to reuse it for the next connection attempt.
>          */

Yes, agreed. Moreover, once the code is moved to a separate file, it shouldn't care _why_ it is cancelled, and the comment should not mention drained sections. It's cancelled because the user cancelled it.
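
Along those lines, once the code lives in its own file the comment could
read, for example:

    /*
     * The connection attempt was cancelled and the coroutine resumed
     * before the connection thread finished its job.  Report the
     * attempt as failed, but leave the connection thread running,
     * so its result may be reused for the next connection attempt.
     */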

> 
>> +        error_setg(errp, "Connection attempt cancelled by other operation");
>> +    } else {
>>           error_propagate(errp, thr->err);
>>           thr->err = NULL;
>>           s->sioc = thr->sioc;
>> @@ -491,33 +459,11 @@ nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>>               yank_register_function(BLOCKDEV_YANK_INSTANCE(bs->node_name),
>>                                      nbd_yank, bs);
>>           }
>> -        ret = (s->sioc ? 0 : -1);
>> -        break;
>> -    case CONNECT_THREAD_RUNNING:
>> -    case CONNECT_THREAD_RUNNING_DETACHED:
>> -        /*
>> -         * Obviously, drained section wants to start. Report the attempt as
>> -         * failed. Still connect thread is executing in background, and its
>> -         * result may be used for next connection attempt.
>> -         */
>> -        ret = -1;
>> -        error_setg(errp, "Connection attempt cancelled by other operation");
>> -        break;
>> -
>> -    case CONNECT_THREAD_NONE:
>> -        /*
>> -         * Impossible. We've seen this thread running. So it should be
>> -         * running or at least give some results.
>> -         */
>> -        abort();
>> -
>> -    default:
>> -        abort();
>>       }
>>   
>>       qemu_mutex_unlock(&thr->mutex);
>>   
>> -    return ret;
>> +    return s->sioc ? 0 : -1;
>>   }
>>   
>>   /*
>> @@ -532,12 +478,9 @@ static void nbd_co_establish_connection_cancel(BlockDriverState *bs)
>>   
>>       qemu_mutex_lock(&thr->mutex);
>>   
>> -    if (thr->state == CONNECT_THREAD_RUNNING) {
>> -        /* We can cancel only in running state, when bh is not yet scheduled */
>> -        if (thr->wait_co) {
>> -            aio_co_schedule(NULL, thr->wait_co);
>> -            thr->wait_co = NULL;
>> -        }
>> +    if (thr->wait_co) {
>> +        aio_co_schedule(NULL, thr->wait_co);
>> +        thr->wait_co = NULL;
> 
> Ah, so you get rid of this redundant check in this patch.  Fine by me;
> please disregard my comment on this matter to the previous patch, then.
> 
> Roman.
>>       }
>>   
>>       qemu_mutex_unlock(&thr->mutex);
>> -- 
>> 2.29.2
>>


-- 
Best regards,
Vladimir



* Re: [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release()
  2021-04-27 22:35   ` Roman Kagan
@ 2021-04-28  8:06     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-28  8:06 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

28.04.2021 01:35, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:51AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 43 ++++++++++++++++++++++++++-----------------
>>   1 file changed, 26 insertions(+), 17 deletions(-)
>>
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 21a4039359..8531d019b2 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -118,7 +118,7 @@ typedef struct BDRVNBDState {
>>       NBDClientConnection *conn;
>>   } BDRVNBDState;
>>   
>> -static void nbd_free_connect_thread(NBDClientConnection *conn);
>> +static void nbd_client_connection_release(NBDClientConnection *conn);
>>   static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
>>                                       Error **errp);
>>   static coroutine_fn QIOChannelSocket *
>> @@ -130,20 +130,9 @@ static void nbd_yank(void *opaque);
>>   static void nbd_clear_bdrvstate(BlockDriverState *bs)
>>   {
>>       BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
>> -    NBDClientConnection *conn = s->conn;
>> -    bool do_free;
>> -
>> -    qemu_mutex_lock(&conn->mutex);
>> -    if (conn->running) {
>> -        conn->detached = true;
>> -    }
>> -    do_free = !conn->running && !conn->detached;
>> -    qemu_mutex_unlock(&conn->mutex);
>>   
>> -    /* the runaway thread will clean it up itself */
>> -    if (do_free) {
>> -        nbd_free_connect_thread(conn);
>> -    }
>> +    nbd_client_connection_release(s->conn);
>> +    s->conn = NULL;
>>   
>>       yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
>>   
>> @@ -365,7 +354,7 @@ nbd_client_connection_new(const SocketAddress *saddr)
>>       return conn;
>>   }
>>   
>> -static void nbd_free_connect_thread(NBDClientConnection *conn)
>> +static void nbd_client_connection_do_free(NBDClientConnection *conn)
>>   {
>>       if (conn->sioc) {
>>           qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
>> @@ -379,8 +368,8 @@ static void nbd_free_connect_thread(NBDClientConnection *conn)
>>   static void *connect_thread_func(void *opaque)
>>   {
>>       NBDClientConnection *conn = opaque;
>> +    bool do_free;
>>       int ret;
>> -    bool do_free = false;
>>   
> 
> This hunk belongs in patch 8.
> 

Agreed

> 
>>       conn->sioc = qio_channel_socket_new();
>>   
>> @@ -405,12 +394,32 @@ static void *connect_thread_func(void *opaque)
>>       qemu_mutex_unlock(&conn->mutex);
>>   
>>       if (do_free) {
>> -        nbd_free_connect_thread(conn);
>> +        nbd_client_connection_do_free(conn);
>>       }
>>   
>>       return NULL;
>>   }
>>   
>> +static void nbd_client_connection_release(NBDClientConnection *conn)
>> +{
>> +    bool do_free;
>> +
>> +    if (!conn) {
>> +        return;
>> +    }
>> +
>> +    qemu_mutex_lock(&conn->mutex);
>> +    if (conn->running) {
>> +        conn->detached = true;
>> +    }
>> +    do_free = !conn->running && !conn->detached;
>> +    qemu_mutex_unlock(&conn->mutex);
>> +
>> +    if (do_free) {
>> +        nbd_client_connection_do_free(conn);
>> +    }
>> +}
>> +
>>   /*
>>    * Get a new connection in context of @conn:
>>    *   if thread is running, wait for completion
>> -- 
>> 2.29.2
>>


-- 
Best regards,
Vladimir



* Re: [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection
  2021-04-27 22:45   ` Roman Kagan
@ 2021-04-28  8:14     ` Vladimir Sementsov-Ogievskiy
  2021-06-09 15:49       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-28  8:14 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

28.04.2021 01:45, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:52AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> We now have a bs-independent connection API, which consists of four
>> functions:
>>
>>    nbd_client_connection_new()
>>    nbd_client_connection_release()
>>    nbd_co_establish_connection()
>>    nbd_co_establish_connection_cancel()
>>
>> Move them to a separate file together with the NBDClientConnection
>> structure, which becomes private to the new API.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/nbd.h     |  11 +++
>>   block/nbd.c             | 187 -----------------------------------
>>   nbd/client-connection.c | 212 ++++++++++++++++++++++++++++++++++++++++
>>   nbd/meson.build         |   1 +
>>   4 files changed, 224 insertions(+), 187 deletions(-)
>>   create mode 100644 nbd/client-connection.c
>>
>> diff --git a/include/block/nbd.h b/include/block/nbd.h
>> index 5f34d23bb0..57381be76f 100644
>> --- a/include/block/nbd.h
>> +++ b/include/block/nbd.h
>> @@ -406,4 +406,15 @@ const char *nbd_info_lookup(uint16_t info);
>>   const char *nbd_cmd_lookup(uint16_t info);
>>   const char *nbd_err_lookup(int err);
>>   
>> +/* nbd/client-connection.c */
>> +typedef struct NBDClientConnection NBDClientConnection;
>> +
>> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
>> +void nbd_client_connection_release(NBDClientConnection *conn);
>> +
>> +QIOChannelSocket *coroutine_fn
>> +nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
>> +
>> +void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
>> +
>>   #endif
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 8531d019b2..9bd68dcf10 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -66,24 +66,6 @@ typedef enum NBDClientState {
>>       NBD_CLIENT_QUIT
>>   } NBDClientState;
>>   
>> -typedef struct NBDClientConnection {
>> -    /* Initialization constants */
>> -    SocketAddress *saddr; /* address to connect to */
>> -
>> -    /*
>> -     * Result of last attempt. Valid in FAIL and SUCCESS states.
>> -     * If you want to steal error, don't forget to set pointer to NULL.
>> -     */
>> -    QIOChannelSocket *sioc;
>> -    Error *err;
>> -
>> -    QemuMutex mutex;
>> -    /* All further fields are protected by mutex */
>> -    bool running; /* thread is running now */
>> -    bool detached; /* thread is detached and should cleanup the state */
>> -    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>> -} NBDClientConnection;
>> -
>>   typedef struct BDRVNBDState {
>>       QIOChannelSocket *sioc; /* The master data channel */
>>       QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
>> @@ -118,12 +100,8 @@ typedef struct BDRVNBDState {
>>       NBDClientConnection *conn;
>>   } BDRVNBDState;
>>   
>> -static void nbd_client_connection_release(NBDClientConnection *conn);
>>   static int nbd_establish_connection(BlockDriverState *bs, SocketAddress *saddr,
>>                                       Error **errp);
>> -static coroutine_fn QIOChannelSocket *
>> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
>> -static void nbd_co_establish_connection_cancel(NBDClientConnection *conn);
>>   static int nbd_client_handshake(BlockDriverState *bs, Error **errp);
>>   static void nbd_yank(void *opaque);
>>   
>> @@ -340,171 +318,6 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
>>       return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
>>   }
>>   
>> -static NBDClientConnection *
>> -nbd_client_connection_new(const SocketAddress *saddr)
>> -{
>> -    NBDClientConnection *conn = g_new(NBDClientConnection, 1);
>> -
>> -    *conn = (NBDClientConnection) {
>> -        .saddr = QAPI_CLONE(SocketAddress, saddr),
>> -    };
>> -
>> -    qemu_mutex_init(&conn->mutex);
>> -
>> -    return conn;
>> -}
>> -
>> -static void nbd_client_connection_do_free(NBDClientConnection *conn)
>> -{
>> -    if (conn->sioc) {
>> -        qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
>> -        object_unref(OBJECT(conn->sioc));
>> -    }
>> -    error_free(conn->err);
>> -    qapi_free_SocketAddress(conn->saddr);
>> -    g_free(conn);
>> -}
>> -
>> -static void *connect_thread_func(void *opaque)
>> -{
>> -    NBDClientConnection *conn = opaque;
>> -    bool do_free;
>> -    int ret;
>> -
>> -    conn->sioc = qio_channel_socket_new();
>> -
>> -    error_free(conn->err);
>> -    conn->err = NULL;
>> -    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
>> -    if (ret < 0) {
>> -        object_unref(OBJECT(conn->sioc));
>> -        conn->sioc = NULL;
>> -    }
>> -
>> -    qemu_mutex_lock(&conn->mutex);
>> -
>> -    assert(conn->running);
>> -    conn->running = false;
>> -    if (conn->wait_co) {
>> -        aio_co_schedule(NULL, conn->wait_co);
>> -        conn->wait_co = NULL;
>> -    }
>> -    do_free = conn->detached;
>> -
>> -    qemu_mutex_unlock(&conn->mutex);
>> -
>> -    if (do_free) {
>> -        nbd_client_connection_do_free(conn);
>> -    }
>> -
>> -    return NULL;
>> -}
>> -
>> -static void nbd_client_connection_release(NBDClientConnection *conn)
>> -{
>> -    bool do_free;
>> -
>> -    if (!conn) {
>> -        return;
>> -    }
>> -
>> -    qemu_mutex_lock(&conn->mutex);
>> -    if (conn->running) {
>> -        conn->detached = true;
>> -    }
>> -    do_free = !conn->running && !conn->detached;
>> -    qemu_mutex_unlock(&conn->mutex);
>> -
>> -    if (do_free) {
>> -        nbd_client_connection_do_free(conn);
>> -    }
>> -}
>> -
>> -/*
>> - * Get a new connection in context of @conn:
>> - *   if thread is running, wait for completion
>> - *   if thread is already succeeded in background, and user didn't get the
>> - *     result, just return it now
>> - *   otherwise if thread is not running, start a thread and wait for completion
>> - */
>> -static coroutine_fn QIOChannelSocket *
>> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>> -{
>> -    QIOChannelSocket *sioc = NULL;
>> -    QemuThread thread;
>> -
>> -    qemu_mutex_lock(&conn->mutex);
>> -
>> -    /*
>> -     * Don't call nbd_co_establish_connection() in several coroutines in
>> -     * parallel. Only one call at once is supported.
>> -     */
>> -    assert(!conn->wait_co);
>> -
>> -    if (!conn->running) {
>> -        if (conn->sioc) {
>> -            /* Previous attempt finally succeeded in background */
>> -            sioc = g_steal_pointer(&conn->sioc);
>> -            qemu_mutex_unlock(&conn->mutex);
>> -
>> -            return sioc;
>> -        }
>> -
>> -        conn->running = true;
>> -        error_free(conn->err);
>> -        conn->err = NULL;
>> -        qemu_thread_create(&thread, "nbd-connect",
>> -                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
>> -    }
>> -
>> -    conn->wait_co = qemu_coroutine_self();
>> -
>> -    qemu_mutex_unlock(&conn->mutex);
>> -
>> -    /*
>> -     * We are going to wait for connect-thread finish, but
>> -     * nbd_co_establish_connection_cancel() can interrupt.
>> -     */
>> -    qemu_coroutine_yield();
>> -
>> -    qemu_mutex_lock(&conn->mutex);
>> -
>> -    if (conn->running) {
>> -        /*
>> -         * Obviously, drained section wants to start. Report the attempt as
>> -         * failed. Still connect thread is executing in background, and its
>> -         * result may be used for next connection attempt.
>> -         */
>> -        error_setg(errp, "Connection attempt cancelled by other operation");
>> -    } else {
>> -        error_propagate(errp, conn->err);
>> -        conn->err = NULL;
>> -        sioc = g_steal_pointer(&conn->sioc);
>> -    }
>> -
>> -    qemu_mutex_unlock(&conn->mutex);
>> -
>> -    return sioc;
>> -}
>> -
>> -/*
>> - * nbd_co_establish_connection_cancel
>> - * Cancel nbd_co_establish_connection() asynchronously. Note, that it doesn't
>> - * stop the thread itself neither close the socket. It just safely wakes
>> - * nbd_co_establish_connection() sleeping in the yield().
>> - */
>> -static void nbd_co_establish_connection_cancel(NBDClientConnection *conn)
>> -{
>> -    qemu_mutex_lock(&conn->mutex);
>> -
>> -    if (conn->wait_co) {
>> -        aio_co_schedule(NULL, conn->wait_co);
>> -        conn->wait_co = NULL;
>> -    }
>> -
>> -    qemu_mutex_unlock(&conn->mutex);
>> -}
>> -
>>   static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>>   {
>>       int ret;
>> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
>> new file mode 100644
>> index 0000000000..4e39a5b1af
>> --- /dev/null
>> +++ b/nbd/client-connection.c
>> @@ -0,0 +1,212 @@
>> +/*
>> + * QEMU Block driver for  NBD
>> + *
>> + * Copyright (c) 2021 Virtuozzo International GmbH.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a copy
>> + * of this software and associated documentation files (the "Software"), to deal
>> + * in the Software without restriction, including without limitation the rights
>> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
>> + * copies of the Software, and to permit persons to whom the Software is
>> + * furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
>> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
>> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
>> + * THE SOFTWARE.
>> + */
>> +
>> +#include "qemu/osdep.h"
>> +
>> +#include "block/nbd.h"
>> +
>> +#include "qapi/qapi-visit-sockets.h"
>> +#include "qapi/clone-visitor.h"
>> +
>> +struct NBDClientConnection {
>> +    /* Initialization constants */
>> +    SocketAddress *saddr; /* address to connect to */
>> +
>> +    /*
>> +     * Result of last attempt. Valid in FAIL and SUCCESS states.
>> +     * If you want to steal error, don't forget to set pointer to NULL.
>> +     */
>> +    QIOChannelSocket *sioc;
>> +    Error *err;
> 
> These two are also manipulated under the mutex.  Consider also updating
> the comment: both these pointers are to be "stolen" by the caller, with
> the former being valid when the connection succeeds and the latter
> otherwise.
> 

Hmm. I should move the mutex and the "All further" comment above these two fields.

OK, I'll think about updating the comment (probably as an additional patch, to keep this one a simple code movement). I don't like documenting that the fields are "stolen by the caller": to me that sounds as if the caller were a user of the interface, and the caller of nbd_co_establish_connection() doesn't steal anything, since the structure is private now.
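
For reference, a minimal sketch of the reordering proposed here (fields and comments taken from the patch itself; the exact wording of the updated comment is still to be decided):

    struct NBDClientConnection {
        /* Initialization constants */
        SocketAddress *saddr; /* address to connect to */

        QemuMutex mutex;

        /*
         * All further fields are protected by mutex. sioc and err hold the
         * result of the last attempt; whoever takes one must reset the
         * pointer to NULL.
         */
        QIOChannelSocket *sioc;
        Error *err;
        bool running;  /* thread is running now */
        bool detached; /* thread is detached and should cleanup the state */
        Coroutine *wait_co; /* nbd_co_establish_connection() waits in yield() */
    };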

> 
>> +
>> +    QemuMutex mutex;
>> +    /* All further fields are protected by mutex */
>> +    bool running; /* thread is running now */
>> +    bool detached; /* thread is detached and should cleanup the state */
>> +    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>> +};
>> +
>> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
>> +{
>> +    NBDClientConnection *conn = g_new(NBDClientConnection, 1);
>> +
>> +    *conn = (NBDClientConnection) {
>> +        .saddr = QAPI_CLONE(SocketAddress, saddr),
>> +    };
>> +
>> +    qemu_mutex_init(&conn->mutex);
>> +
>> +    return conn;
>> +}
>> +
>> +static void nbd_client_connection_do_free(NBDClientConnection *conn)
>> +{
>> +    if (conn->sioc) {
>> +        qio_channel_close(QIO_CHANNEL(conn->sioc), NULL);
>> +        object_unref(OBJECT(conn->sioc));
>> +    }
>> +    error_free(conn->err);
>> +    qapi_free_SocketAddress(conn->saddr);
>> +    g_free(conn);
>> +}
>> +
>> +static void *connect_thread_func(void *opaque)
>> +{
>> +    NBDClientConnection *conn = opaque;
>> +    bool do_free;
>> +    int ret;
>> +
>> +    conn->sioc = qio_channel_socket_new();
>> +
>> +    error_free(conn->err);
>> +    conn->err = NULL;
>> +    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
>> +    if (ret < 0) {
>> +        object_unref(OBJECT(conn->sioc));
>> +        conn->sioc = NULL;
>> +    }
>> +
>> +    qemu_mutex_lock(&conn->mutex);
>> +
>> +    assert(conn->running);
>> +    conn->running = false;
>> +    if (conn->wait_co) {
>> +        aio_co_schedule(NULL, conn->wait_co);
>> +        conn->wait_co = NULL;
>> +    }
>> +    do_free = conn->detached;
>> +
>> +    qemu_mutex_unlock(&conn->mutex);
>> +
>> +    if (do_free) {
>> +        nbd_client_connection_do_free(conn);
>> +    }
>> +
>> +    return NULL;
>> +}
>> +
>> +void nbd_client_connection_release(NBDClientConnection *conn)
>> +{
>> +    bool do_free;
>> +
>> +    if (!conn) {
>> +        return;
>> +    }
>> +
>> +    qemu_mutex_lock(&conn->mutex);
>> +    if (conn->running) {
>> +        conn->detached = true;
>> +    }
>> +    do_free = !conn->running && !conn->detached;
>> +    qemu_mutex_unlock(&conn->mutex);
>> +
>> +    if (do_free) {
>> +        nbd_client_connection_do_free(conn);
>> +    }
>> +}
>> +
>> +/*
>> + * Get a new connection in the context of @conn:
>> + *   if the thread is running, wait for completion
>> + *   if the thread has already succeeded in the background, and the user
>> + *     didn't get the result yet, just return it now
>> + *   otherwise, if the thread is not running, start a thread and wait for
>> + *     completion
>> + */
>> +QIOChannelSocket *coroutine_fn
>> +nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>> +{
>> +    QIOChannelSocket *sioc = NULL;
>> +    QemuThread thread;
>> +
>> +    qemu_mutex_lock(&conn->mutex);
>> +
>> +    /*
>> +     * Don't call nbd_co_establish_connection() in several coroutines in
>> +     * parallel. Only one call at a time is supported.
>> +     */
>> +    assert(!conn->wait_co);
>> +
>> +    if (!conn->running) {
>> +        if (conn->sioc) {
>> +            /* Previous attempt finally succeeded in background */
>> +            sioc = g_steal_pointer(&conn->sioc);
>> +            qemu_mutex_unlock(&conn->mutex);
>> +
>> +            return sioc;
>> +        }
>> +
>> +        conn->running = true;
>> +        error_free(conn->err);
>> +        conn->err = NULL;
>> +        qemu_thread_create(&thread, "nbd-connect",
>> +                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
>> +    }
>> +
>> +    conn->wait_co = qemu_coroutine_self();
>> +
>> +    qemu_mutex_unlock(&conn->mutex);
>> +
>> +    /*
>> +     * We are going to wait for the connect thread to finish, but
>> +     * nbd_co_establish_connection_cancel() can interrupt.
>> +     */
>> +    qemu_coroutine_yield();
>> +
>> +    qemu_mutex_lock(&conn->mutex);
>> +
>> +    if (conn->running) {
>> +        /*
>> +         * Apparently a drained section wants to start. Report the attempt as
>> +         * failed. Still, the connect thread keeps executing in background,
>> +         * and its result may be used for the next connection attempt.
>> +         */
>> +        error_setg(errp, "Connection attempt cancelled by other operation");
>> +    } else {
>> +        error_propagate(errp, conn->err);
>> +        conn->err = NULL;
>> +        sioc = g_steal_pointer(&conn->sioc);
>> +    }
>> +
>> +    qemu_mutex_unlock(&conn->mutex);
>> +
>> +    return sioc;
>> +}
>> +
>> +/*
>> + * nbd_co_establish_connection_cancel
>> + * Cancel nbd_co_establish_connection() asynchronously. Note that it neither
>> + * stops the thread itself nor closes the socket. It just safely wakes
>> + * nbd_co_establish_connection() sleeping in the yield().
>> + */
>> +void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn)
>> +{
>> +    qemu_mutex_lock(&conn->mutex);
>> +
>> +    if (conn->wait_co) {
>> +        aio_co_schedule(NULL, conn->wait_co);
>> +        conn->wait_co = NULL;
>> +    }
>> +
>> +    qemu_mutex_unlock(&conn->mutex);
>> +}
>> diff --git a/nbd/meson.build b/nbd/meson.build
>> index 2baaa36948..b26d70565e 100644
>> --- a/nbd/meson.build
>> +++ b/nbd/meson.build
>> @@ -1,5 +1,6 @@
>>   block_ss.add(files(
>>     'client.c',
>> +  'client-connection.c',
>>     'common.c',
>>   ))
>>   blockdev_ss.add(files(
>> -- 
>> 2.29.2
>>


-- 
Best regards,
Vladimir



* Re: [PATCH v3 15/33] nbd/client-connection: use QEMU_LOCK_GUARD
  2021-04-28  6:08   ` Roman Kagan
@ 2021-04-28  8:17     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-04-28  8:17 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

28.04.2021 09:08, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:53AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   nbd/client-connection.c | 94 ++++++++++++++++++-----------------------
>>   1 file changed, 42 insertions(+), 52 deletions(-)
>>
>> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
>> index 4e39a5b1af..b45a0bd5f6 100644
>> --- a/nbd/client-connection.c
>> +++ b/nbd/client-connection.c
>> @@ -87,17 +87,16 @@ static void *connect_thread_func(void *opaque)
>>           conn->sioc = NULL;
>>       }
>>   
>> -    qemu_mutex_lock(&conn->mutex);
>> -
>> -    assert(conn->running);
>> -    conn->running = false;
>> -    if (conn->wait_co) {
>> -        aio_co_schedule(NULL, conn->wait_co);
>> -        conn->wait_co = NULL;
>> +    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>> +        assert(conn->running);
>> +        conn->running = false;
>> +        if (conn->wait_co) {
>> +            aio_co_schedule(NULL, conn->wait_co);
>> +            conn->wait_co = NULL;
>> +        }
>>       }
>>       do_free = conn->detached;
> 
> ->detached is now accessed outside the mutex

Oops. Will fix.
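
For the record, the fix that shows up later in this series (see the hunk quoted in the patch 16 review below) simply moves the read under the guard:

    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
        assert(conn->running);
        conn->running = false;
        if (conn->wait_co) {
            aio_co_schedule(NULL, conn->wait_co);
            conn->wait_co = NULL;
        }
        do_free = conn->detached; /* now read while holding the mutex */
    }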

> 
>>   
>> -    qemu_mutex_unlock(&conn->mutex);
>>   
>>       if (do_free) {
>>           nbd_client_connection_do_free(conn);
>> @@ -136,61 +135,54 @@ void nbd_client_connection_release(NBDClientConnection *conn)
>>   QIOChannelSocket *coroutine_fn
>>   nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>>   {
>> -    QIOChannelSocket *sioc = NULL;
>>       QemuThread thread;
>>   
>> -    qemu_mutex_lock(&conn->mutex);
>> -
>> -    /*
>> -     * Don't call nbd_co_establish_connection() in several coroutines in
>> -     * parallel. Only one call at a time is supported.
>> -     */
>> -    assert(!conn->wait_co);
>> -
>> -    if (!conn->running) {
>> -        if (conn->sioc) {
>> -            /* Previous attempt finally succeeded in background */
>> -            sioc = g_steal_pointer(&conn->sioc);
>> -            qemu_mutex_unlock(&conn->mutex);
>> -
>> -            return sioc;
>> +    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>> +        /*
>> +         * Don't call nbd_co_establish_connection() in several coroutines in
>> +         * parallel. Only one call at a time is supported.
>> +         */
>> +        assert(!conn->wait_co);
>> +
>> +        if (!conn->running) {
>> +            if (conn->sioc) {
>> +                /* Previous attempt finally succeeded in background */
>> +                return g_steal_pointer(&conn->sioc);
>> +            }
>> +
>> +            conn->running = true;
>> +            error_free(conn->err);
>> +            conn->err = NULL;
>> +            qemu_thread_create(&thread, "nbd-connect",
>> +                               connect_thread_func, conn, QEMU_THREAD_DETACHED);
>>           }
>>   
>> -        conn->running = true;
>> -        error_free(conn->err);
>> -        conn->err = NULL;
>> -        qemu_thread_create(&thread, "nbd-connect",
>> -                           connect_thread_func, conn, QEMU_THREAD_DETACHED);
>> +        conn->wait_co = qemu_coroutine_self();
>>       }
>>   
>> -    conn->wait_co = qemu_coroutine_self();
>> -
>> -    qemu_mutex_unlock(&conn->mutex);
>> -
>>       /*
>>        * We are going to wait for the connect thread to finish, but
>>        * nbd_co_establish_connection_cancel() can interrupt.
>>        */
>>       qemu_coroutine_yield();
>>   
>> -    qemu_mutex_lock(&conn->mutex);
>> -
>> -    if (conn->running) {
>> -        /*
>> -         * Apparently a drained section wants to start. Report the attempt as
>> -         * failed. Still, the connect thread keeps executing in background,
>> -         * and its result may be used for the next connection attempt.
>> -         */
>> -        error_setg(errp, "Connection attempt cancelled by other operation");
>> -    } else {
>> -        error_propagate(errp, conn->err);
>> -        conn->err = NULL;
>> -        sioc = g_steal_pointer(&conn->sioc);
>> +    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>> +        if (conn->running) {
>> +            /*
>> +             * Apparently a drained section wants to start. Report the attempt as
>> +             * failed. Still, the connect thread keeps executing in background,
>> +             * and its result may be used for the next connection attempt.
>> +             */
>> +            error_setg(errp, "Connection attempt cancelled by other operation");
>> +            return NULL;
>> +        } else {
>> +            error_propagate(errp, conn->err);
>> +            conn->err = NULL;
>> +            return g_steal_pointer(&conn->sioc);
>> +        }
>>       }
>>   
>> -    qemu_mutex_unlock(&conn->mutex);
>> -
>> -    return sioc;
>> +    abort(); /* unreachable */
>>   }
>>   
>>   /*
>> @@ -201,12 +193,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>>    */
>>   void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn)
>>   {
>> -    qemu_mutex_lock(&conn->mutex);
>> +    QEMU_LOCK_GUARD(&conn->mutex);
>>   
>>       if (conn->wait_co) {
>>           aio_co_schedule(NULL, conn->wait_co);
>>           conn->wait_co = NULL;
>>       }
>> -
>> -    qemu_mutex_unlock(&conn->mutex);
>>   }
>> -- 
>> 2.29.2
>>


-- 
Best regards,
Vladimir



* Re: [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation
  2021-04-16  8:08 ` [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation Vladimir Sementsov-Ogievskiy
@ 2021-05-11 10:45   ` Roman Kagan
  2021-05-12  6:42     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-05-11 10:45 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:54AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Add arguments and logic to support NBD negotiation in the same thread
> after a successful connection.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h     |   9 +++-
>  block/nbd.c             |   4 +-
>  nbd/client-connection.c | 105 ++++++++++++++++++++++++++++++++++++++--
>  3 files changed, 109 insertions(+), 9 deletions(-)
> 
> diff --git a/include/block/nbd.h b/include/block/nbd.h
> index 57381be76f..5d86e6a393 100644
> --- a/include/block/nbd.h
> +++ b/include/block/nbd.h
> @@ -409,11 +409,16 @@ const char *nbd_err_lookup(int err);
>  /* nbd/client-connection.c */
>  typedef struct NBDClientConnection NBDClientConnection;
>  
> -NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
> +                                               bool do_negotiation,
> +                                               const char *export_name,
> +                                               const char *x_dirty_bitmap,
> +                                               QCryptoTLSCreds *tlscreds);
>  void nbd_client_connection_release(NBDClientConnection *conn);
>  
>  QIOChannelSocket *coroutine_fn
> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
> +nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
> +                            QIOChannel **ioc, Error **errp);
>  
>  void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
>  
> diff --git a/block/nbd.c b/block/nbd.c
> index 9bd68dcf10..5e63caaf4b 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -361,7 +361,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>          s->ioc = NULL;
>      }
>  
> -    s->sioc = nbd_co_establish_connection(s->conn, NULL);
> +    s->sioc = nbd_co_establish_connection(s->conn, NULL, NULL, NULL);
>      if (!s->sioc) {
>          ret = -ECONNREFUSED;
>          goto out;
> @@ -2033,7 +2033,7 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>          goto fail;
>      }
>  
> -    s->conn = nbd_client_connection_new(s->saddr);
> +    s->conn = nbd_client_connection_new(s->saddr, false, NULL, NULL, NULL);
>  
>      /*
>       * establish TCP connection, return error if it fails
> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
> index b45a0bd5f6..ae4a77f826 100644
> --- a/nbd/client-connection.c
> +++ b/nbd/client-connection.c
> @@ -30,14 +30,19 @@
>  #include "qapi/clone-visitor.h"
>  
>  struct NBDClientConnection {
> -    /* Initialization constants */
> +    /* Initialization constants, never change */
>      SocketAddress *saddr; /* address to connect to */
> +    QCryptoTLSCreds *tlscreds;
> +    NBDExportInfo initial_info;
> +    bool do_negotiation;
>  
>      /*
>       * Result of last attempt. Valid in FAIL and SUCCESS states.
>       * If you want to steal error, don't forget to set pointer to NULL.
>       */
> +    NBDExportInfo updated_info;
>      QIOChannelSocket *sioc;
> +    QIOChannel *ioc;
>      Error *err;
>  
>      QemuMutex mutex;
> @@ -47,12 +52,25 @@ struct NBDClientConnection {
>      Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>  };
>  
> -NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
> +                                               bool do_negotiation,
> +                                               const char *export_name,
> +                                               const char *x_dirty_bitmap,
> +                                               QCryptoTLSCreds *tlscreds)
>  {
>      NBDClientConnection *conn = g_new(NBDClientConnection, 1);
>  
> +    object_ref(OBJECT(tlscreds));
>      *conn = (NBDClientConnection) {
>          .saddr = QAPI_CLONE(SocketAddress, saddr),
> +        .tlscreds = tlscreds,
> +        .do_negotiation = do_negotiation,
> +
> +        .initial_info.request_sizes = true,
> +        .initial_info.structured_reply = true,
> +        .initial_info.base_allocation = true,
> +        .initial_info.x_dirty_bitmap = g_strdup(x_dirty_bitmap),
> +        .initial_info.name = g_strdup(export_name ?: "")
>      };
>  
>      qemu_mutex_init(&conn->mutex);
> @@ -68,9 +86,59 @@ static void nbd_client_connection_do_free(NBDClientConnection *conn)
>      }
>      error_free(conn->err);
>      qapi_free_SocketAddress(conn->saddr);
> +    object_unref(OBJECT(conn->tlscreds));
> +    g_free(conn->initial_info.x_dirty_bitmap);
> +    g_free(conn->initial_info.name);
>      g_free(conn);
>  }
>  
> +/*
> + * Connect to @addr and do NBD negotiation if @info is not null. If @tlscreds
> + * given @outioc is provided. @outioc is provided only on success.  The call may

s/given/are given/
s/provided/returned/g

> + * be cancelled in parallel by simply qio_channel_shutdown(sioc).

I assume by "in parallel" you mean "from another thread"; I'd suggest
spelling this out.  I'm also wondering how safe it really is.  In general
sockets should be fine with concurrent send()/recv() and shutdown(): the
sender/receiver will be woken up with an error.  Dunno whether that's true
for an arbitrary qio_channel.  Also it may be worth documenting that the
code path that cancels must leave all the cleanup up to the negotiation
code; otherwise it risks conflicting.
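
To illustrate the pattern being discussed: the cancelling side is expected to do nothing but shut the channel down, leaving all cleanup to the connecting side (a sketch of the intent, not a quote from the series):

    /* connecting thread: may block in connect or negotiation on sioc */
    ret = qio_channel_socket_connect_sync(sioc, addr, errp);

    /*
     * canceller, running in another thread: wake the blocked side up with
     * an error; do not close or unref sioc here, the connecting side owns
     * it and will clean it up
     */
    qio_channel_shutdown(QIO_CHANNEL(sioc), QIO_CHANNEL_SHUTDOWN_BOTH, NULL);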

> + */
> +static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
> +                       NBDExportInfo *info, QCryptoTLSCreds *tlscreds,
> +                       QIOChannel **outioc, Error **errp)
> +{
> +    int ret;
> +
> +    if (outioc) {
> +        *outioc = NULL;
> +    }
> +
> +    ret = qio_channel_socket_connect_sync(sioc, addr, errp);
> +    if (ret < 0) {
> +        return ret;
> +    }
> +
> +    if (!info) {
> +        return 0;
> +    }
> +
> +    ret = nbd_receive_negotiate(NULL, QIO_CHANNEL(sioc), tlscreds,
> +                                tlscreds ? addr->u.inet.host : NULL,
> +                                outioc, info, errp);
> +    if (ret < 0) {
> +        /*
> +         * nbd_receive_negotiate() may setup tls ioc and return it even on
> +         * failure path. In this case we should use it instead of original
> +         * channel.
> +         */
> +        if (outioc && *outioc) {
> +            qio_channel_close(QIO_CHANNEL(*outioc), NULL);
> +            object_unref(OBJECT(*outioc));
> +            *outioc = NULL;
> +        } else {
> +            qio_channel_close(QIO_CHANNEL(sioc), NULL);
> +        }
> +
> +        return ret;
> +    }
> +
> +    return 0;
> +}
> +
>  static void *connect_thread_func(void *opaque)
>  {
>      NBDClientConnection *conn = opaque;
> @@ -81,12 +149,19 @@ static void *connect_thread_func(void *opaque)
>  
>      error_free(conn->err);
>      conn->err = NULL;
> -    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
> +    conn->updated_info = conn->initial_info;
> +
> +    ret = nbd_connect(conn->sioc, conn->saddr,
> +                      conn->do_negotiation ? &conn->updated_info : NULL,
> +                      conn->tlscreds, &conn->ioc, &conn->err);
>      if (ret < 0) {
>          object_unref(OBJECT(conn->sioc));
>          conn->sioc = NULL;
>      }
>  
> +    conn->updated_info.x_dirty_bitmap = NULL;
> +    conn->updated_info.name = NULL;
> +
>      WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>          assert(conn->running);
>          conn->running = false;
> @@ -94,8 +169,8 @@ static void *connect_thread_func(void *opaque)
>              aio_co_schedule(NULL, conn->wait_co);
>              conn->wait_co = NULL;
>          }
> +        do_free = conn->detached;
>      }
> -    do_free = conn->detached;

This looks like the response to my earlier comment ;)  This hunk just
needs to be squashed into the previous patch.

>  
>  
>      if (do_free) {
> @@ -131,12 +206,24 @@ void nbd_client_connection_release(NBDClientConnection *conn)
>>   *   if the thread has already succeeded in the background, and the user
>>   *     didn't get the result yet, just return it now
>>   *   otherwise, if the thread is not running, start a thread and wait for
>>   *     completion
> + *
> + * If @info is not NULL, also do nbd-negotiation after successful connection.
> + * In this case info is used only as out parameter, and is fully initialized by
> + * nbd_co_establish_connection(). "IN" fields of info as well as related only to
> + * nbd_receive_export_list() would be zero (see description of NBDExportInfo in
> + * include/block/nbd.h).
>   */
>  QIOChannelSocket *coroutine_fn
> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
> +nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
> +                            QIOChannel **ioc, Error **errp)
>  {
>      QemuThread thread;
>  
> +    if (conn->do_negotiation) {
> +        assert(info);
> +        assert(ioc);
> +    }
> +
>      WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>          /*
>           * Don't call nbd_co_establish_connection() in several coroutines in
> @@ -147,6 +234,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>          if (!conn->running) {
>              if (conn->sioc) {
>                  /* Previous attempt finally succeeded in background */
> +                if (conn->do_negotiation) {
> +                    *ioc = g_steal_pointer(&conn->ioc);
> +                    memcpy(info, &conn->updated_info, sizeof(*info));
> +                }
>                  return g_steal_pointer(&conn->sioc);
>              }
>  
> @@ -178,6 +269,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>          } else {
>              error_propagate(errp, conn->err);
>              conn->err = NULL;
> +            if (conn->sioc && conn->do_negotiation) {
> +                *ioc = g_steal_pointer(&conn->ioc);
> +                memcpy(info, &conn->updated_info, sizeof(*info));
> +            }
>              return g_steal_pointer(&conn->sioc);
>          }
>      }
> -- 
> 2.29.2
> 



* Re: [PATCH v3 17/33] nbd/client-connection: implement connection retry
  2021-04-16  8:08 ` [PATCH v3 17/33] nbd/client-connection: implement connection retry Vladimir Sementsov-Ogievskiy
@ 2021-05-11 20:54   ` Roman Kagan
  2021-06-08 10:24     ` Vladimir Sementsov-Ogievskiy
  2021-06-03 16:17   ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-05-11 20:54 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:55AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Add an option for the thread to retry the connection until it succeeds.
> We'll use nbd/client-connection both for reconnect and for the initial
> connection in nbd_open(), so we need the ability to use the same
> NBDClientConnection instance to connect once in nbd_open() and then use
> retry semantics for reconnect.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h     |  2 ++
>  nbd/client-connection.c | 55 +++++++++++++++++++++++++++++------------
>  2 files changed, 41 insertions(+), 16 deletions(-)
> 
> diff --git a/include/block/nbd.h b/include/block/nbd.h
> index 5d86e6a393..5bb54d831c 100644
> --- a/include/block/nbd.h
> +++ b/include/block/nbd.h
> @@ -409,6 +409,8 @@ const char *nbd_err_lookup(int err);
>  /* nbd/client-connection.c */
>  typedef struct NBDClientConnection NBDClientConnection;
>  
> +void nbd_client_connection_enable_retry(NBDClientConnection *conn);
> +
>  NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>                                                 bool do_negotiation,
>                                                 const char *export_name,
> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
> index ae4a77f826..002bd91f42 100644
> --- a/nbd/client-connection.c
> +++ b/nbd/client-connection.c
> @@ -36,6 +36,8 @@ struct NBDClientConnection {
>      NBDExportInfo initial_info;
>      bool do_negotiation;
>  
> +    bool do_retry;
> +
>      /*
>       * Result of last attempt. Valid in FAIL and SUCCESS states.
>       * If you want to steal error, don't forget to set pointer to NULL.
> @@ -52,6 +54,15 @@ struct NBDClientConnection {
>      Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>  };
>  
> +/*
> + * The function isn't protected by any mutex, so call it when thread is not
> + * running.
> + */
> +void nbd_client_connection_enable_retry(NBDClientConnection *conn)
> +{
> +    conn->do_retry = true;
> +}
> +
>  NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>                                                 bool do_negotiation,
>                                                 const char *export_name,
> @@ -144,24 +155,37 @@ static void *connect_thread_func(void *opaque)
>      NBDClientConnection *conn = opaque;
>      bool do_free;
>      int ret;
> +    uint64_t timeout = 1;
> +    uint64_t max_timeout = 16;
> +
> +    while (true) {
> +        conn->sioc = qio_channel_socket_new();
> +
> +        error_free(conn->err);
> +        conn->err = NULL;
> +        conn->updated_info = conn->initial_info;
> +
> +        ret = nbd_connect(conn->sioc, conn->saddr,
> +                          conn->do_negotiation ? &conn->updated_info : NULL,
> +                          conn->tlscreds, &conn->ioc, &conn->err);
> +        conn->updated_info.x_dirty_bitmap = NULL;
> +        conn->updated_info.name = NULL;
> +
> +        if (ret < 0) {
> +            object_unref(OBJECT(conn->sioc));
> +            conn->sioc = NULL;
> +            if (conn->do_retry) {
> +                sleep(timeout);
> +                if (timeout < max_timeout) {
> +                    timeout *= 2;
> +                }
> +                continue;
> +            }
> +        }

How is it supposed to get canceled?

Roman.

> -    conn->sioc = qio_channel_socket_new();
> -
> -    error_free(conn->err);
> -    conn->err = NULL;
> -    conn->updated_info = conn->initial_info;
> -
> -    ret = nbd_connect(conn->sioc, conn->saddr,
> -                      conn->do_negotiation ? &conn->updated_info : NULL,
> -                      conn->tlscreds, &conn->ioc, &conn->err);
> -    if (ret < 0) {
> -        object_unref(OBJECT(conn->sioc));
> -        conn->sioc = NULL;
> +        break;
>      }
>  
> -    conn->updated_info.x_dirty_bitmap = NULL;
> -    conn->updated_info.name = NULL;
> -
>      WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>          assert(conn->running);
>          conn->running = false;
> @@ -172,7 +196,6 @@ static void *connect_thread_func(void *opaque)
>          do_free = conn->detached;
>      }
>  
> -
>      if (do_free) {
>          nbd_client_connection_do_free(conn);
>      }
> -- 
> 2.29.2
> 



* Re: [PATCH v3 18/33] nbd/client-connection: shutdown connection on release
  2021-04-16  8:08 ` [PATCH v3 18/33] nbd/client-connection: shutdown connection on release Vladimir Sementsov-Ogievskiy
@ 2021-05-11 21:08   ` Roman Kagan
  2021-05-12  6:39     ` Vladimir Sementsov-Ogievskiy
  2021-06-03 16:20   ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-05-11 21:08 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:56AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Now that the thread can do negotiation and retry, it may run for a
> relatively long time. We need a mechanism to stop it when the user is no
> longer interested in the result. So, on nbd_client_connection_release(),
> let's shut down the socket, and do not retry the connection if the
> thread is detached.

This kinda answers my question on the previous patch about reconnect
cancellation.

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  nbd/client-connection.c | 36 ++++++++++++++++++++++++++----------
>  1 file changed, 26 insertions(+), 10 deletions(-)
> 
> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
> index 002bd91f42..54f73c6c24 100644
> --- a/nbd/client-connection.c
> +++ b/nbd/client-connection.c
> @@ -158,9 +158,13 @@ static void *connect_thread_func(void *opaque)
>      uint64_t timeout = 1;
>      uint64_t max_timeout = 16;
>  
> -    while (true) {
> +    qemu_mutex_lock(&conn->mutex);
> +    while (!conn->detached) {
> +        assert(!conn->sioc);
>          conn->sioc = qio_channel_socket_new();
>  
> +        qemu_mutex_unlock(&conn->mutex);
> +
>          error_free(conn->err);
>          conn->err = NULL;
>          conn->updated_info = conn->initial_info;
> @@ -171,14 +175,20 @@ static void *connect_thread_func(void *opaque)
>          conn->updated_info.x_dirty_bitmap = NULL;
>          conn->updated_info.name = NULL;
>  
> +        qemu_mutex_lock(&conn->mutex);
> +
>          if (ret < 0) {
>              object_unref(OBJECT(conn->sioc));
>              conn->sioc = NULL;
> -            if (conn->do_retry) {
> +            if (conn->do_retry && !conn->detached) {
> +                qemu_mutex_unlock(&conn->mutex);
> +
>                  sleep(timeout);
>                  if (timeout < max_timeout) {
>                      timeout *= 2;
>                  }
> +
> +                qemu_mutex_lock(&conn->mutex);
>                  continue;
>              }
>          }
> @@ -186,15 +196,17 @@ static void *connect_thread_func(void *opaque)
>          break;
>      }
>  
> -    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
> -        assert(conn->running);
> -        conn->running = false;
> -        if (conn->wait_co) {
> -            aio_co_schedule(NULL, conn->wait_co);
> -            conn->wait_co = NULL;
> -        }
> -        do_free = conn->detached;
> +    /* mutex is locked */
> +
> +    assert(conn->running);
> +    conn->running = false;
> +    if (conn->wait_co) {
> +        aio_co_schedule(NULL, conn->wait_co);
> +        conn->wait_co = NULL;
>      }
> +    do_free = conn->detached;
> +
> +    qemu_mutex_unlock(&conn->mutex);

This basically reverts some hunks from patch 15 "nbd/client-connection:
use QEMU_LOCK_GUARD".  How about dropping them there (they weren't an
obvious improvement even then).

Roman.

>  
>      if (do_free) {
>          nbd_client_connection_do_free(conn);
> @@ -215,6 +227,10 @@ void nbd_client_connection_release(NBDClientConnection *conn)
>      if (conn->running) {
>          conn->detached = true;
>      }
> +    if (conn->sioc) {
> +        qio_channel_shutdown(QIO_CHANNEL(conn->sioc),
> +                             QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
> +    }
>      do_free = !conn->running && !conn->detached;
>      qemu_mutex_unlock(&conn->mutex);
>  
> -- 
> 2.29.2
> 



* Re: [PATCH v3 18/33] nbd/client-connection: shutdown connection on release
  2021-05-11 21:08   ` Roman Kagan
@ 2021-05-12  6:39     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-05-12  6:39 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

12.05.2021 00:08, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:56AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Now that the thread can do negotiation and retry, it may run for a
>> relatively long time. We need a mechanism to stop it when the user is no
>> longer interested in the result. So, on nbd_client_connection_release(),
>> let's shut down the socket, and do not retry the connection if the
>> thread is detached.
> 
> This kinda answers my question to the previous patch about reconnect
> cancellation.
> 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   nbd/client-connection.c | 36 ++++++++++++++++++++++++++----------
>>   1 file changed, 26 insertions(+), 10 deletions(-)
>>
>> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
>> index 002bd91f42..54f73c6c24 100644
>> --- a/nbd/client-connection.c
>> +++ b/nbd/client-connection.c
>> @@ -158,9 +158,13 @@ static void *connect_thread_func(void *opaque)
>>       uint64_t timeout = 1;
>>       uint64_t max_timeout = 16;
>>   
>> -    while (true) {
>> +    qemu_mutex_lock(&conn->mutex);
>> +    while (!conn->detached) {
>> +        assert(!conn->sioc);
>>           conn->sioc = qio_channel_socket_new();
>>   
>> +        qemu_mutex_unlock(&conn->mutex);
>> +
>>           error_free(conn->err);
>>           conn->err = NULL;
>>           conn->updated_info = conn->initial_info;
>> @@ -171,14 +175,20 @@ static void *connect_thread_func(void *opaque)
>>           conn->updated_info.x_dirty_bitmap = NULL;
>>           conn->updated_info.name = NULL;
>>   
>> +        qemu_mutex_lock(&conn->mutex);
>> +
>>           if (ret < 0) {
>>               object_unref(OBJECT(conn->sioc));
>>               conn->sioc = NULL;
>> -            if (conn->do_retry) {
>> +            if (conn->do_retry && !conn->detached) {
>> +                qemu_mutex_unlock(&conn->mutex);
>> +
>>                   sleep(timeout);
>>                   if (timeout < max_timeout) {
>>                       timeout *= 2;
>>                   }
>> +
>> +                qemu_mutex_lock(&conn->mutex);
>>                   continue;
>>               }
>>           }
>> @@ -186,15 +196,17 @@ static void *connect_thread_func(void *opaque)
>>           break;
>>       }
>>   
>> -    WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>> -        assert(conn->running);
>> -        conn->running = false;
>> -        if (conn->wait_co) {
>> -            aio_co_schedule(NULL, conn->wait_co);
>> -            conn->wait_co = NULL;
>> -        }
>> -        do_free = conn->detached;
>> +    /* mutex is locked */
>> +
>> +    assert(conn->running);
>> +    conn->running = false;
>> +    if (conn->wait_co) {
>> +        aio_co_schedule(NULL, conn->wait_co);
>> +        conn->wait_co = NULL;
>>       }
>> +    do_free = conn->detached;
>> +
>> +    qemu_mutex_unlock(&conn->mutex);
> 
> This basically reverts some hunks from patch 15 "nbd/client-connection:
> use QEMU_LOCK_GUARD".  How about dropping them there (they weren't an
> obvious improvement even then).
> 

OK, will do

> 
>>   
>>       if (do_free) {
>>           nbd_client_connection_do_free(conn);
>> @@ -215,6 +227,10 @@ void nbd_client_connection_release(NBDClientConnection *conn)
>>       if (conn->running) {
>>           conn->detached = true;
>>       }
>> +    if (conn->sioc) {
>> +        qio_channel_shutdown(QIO_CHANNEL(conn->sioc),
>> +                             QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
>> +    }
>>       do_free = !conn->running && !conn->detached;
>>       qemu_mutex_unlock(&conn->mutex);
>>   
>> -- 
>> 2.29.2
>>


-- 
Best regards,
Vladimir



* Re: [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation
  2021-05-11 10:45   ` Roman Kagan
@ 2021-05-12  6:42     ` Vladimir Sementsov-Ogievskiy
  2021-06-08 19:23       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-05-12  6:42 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

11.05.2021 13:45, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:54AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Add arguments and logic to support NBD negotiation in the same thread
>> after a successful connection.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/nbd.h     |   9 +++-
>>   block/nbd.c             |   4 +-
>>   nbd/client-connection.c | 105 ++++++++++++++++++++++++++++++++++++++--
>>   3 files changed, 109 insertions(+), 9 deletions(-)
>>
>> diff --git a/include/block/nbd.h b/include/block/nbd.h
>> index 57381be76f..5d86e6a393 100644
>> --- a/include/block/nbd.h
>> +++ b/include/block/nbd.h
>> @@ -409,11 +409,16 @@ const char *nbd_err_lookup(int err);
>>   /* nbd/client-connection.c */
>>   typedef struct NBDClientConnection NBDClientConnection;
>>   
>> -NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
>> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>> +                                               bool do_negotiation,
>> +                                               const char *export_name,
>> +                                               const char *x_dirty_bitmap,
>> +                                               QCryptoTLSCreds *tlscreds);
>>   void nbd_client_connection_release(NBDClientConnection *conn);
>>   
>>   QIOChannelSocket *coroutine_fn
>> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
>> +nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
>> +                            QIOChannel **ioc, Error **errp);
>>   
>>   void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
>>   
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 9bd68dcf10..5e63caaf4b 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -361,7 +361,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>>           s->ioc = NULL;
>>       }
>>   
>> -    s->sioc = nbd_co_establish_connection(s->conn, NULL);
>> +    s->sioc = nbd_co_establish_connection(s->conn, NULL, NULL, NULL);
>>       if (!s->sioc) {
>>           ret = -ECONNREFUSED;
>>           goto out;
>> @@ -2033,7 +2033,7 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>>           goto fail;
>>       }
>>   
>> -    s->conn = nbd_client_connection_new(s->saddr);
>> +    s->conn = nbd_client_connection_new(s->saddr, false, NULL, NULL, NULL);
>>   
>>       /*
>>        * establish TCP connection, return error if it fails
>> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
>> index b45a0bd5f6..ae4a77f826 100644
>> --- a/nbd/client-connection.c
>> +++ b/nbd/client-connection.c
>> @@ -30,14 +30,19 @@
>>   #include "qapi/clone-visitor.h"
>>   
>>   struct NBDClientConnection {
>> -    /* Initialization constants */
>> +    /* Initialization constants, never change */
>>       SocketAddress *saddr; /* address to connect to */
>> +    QCryptoTLSCreds *tlscreds;
>> +    NBDExportInfo initial_info;
>> +    bool do_negotiation;
>>   
>>       /*
>>        * Result of last attempt. Valid in FAIL and SUCCESS states.
>>        * If you want to steal error, don't forget to set pointer to NULL.
>>        */
>> +    NBDExportInfo updated_info;
>>       QIOChannelSocket *sioc;
>> +    QIOChannel *ioc;
>>       Error *err;
>>   
>>       QemuMutex mutex;
>> @@ -47,12 +52,25 @@ struct NBDClientConnection {
>>       Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>>   };
>>   
>> -NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
>> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>> +                                               bool do_negotiation,
>> +                                               const char *export_name,
>> +                                               const char *x_dirty_bitmap,
>> +                                               QCryptoTLSCreds *tlscreds)
>>   {
>>       NBDClientConnection *conn = g_new(NBDClientConnection, 1);
>>   
>> +    object_ref(OBJECT(tlscreds));
>>       *conn = (NBDClientConnection) {
>>           .saddr = QAPI_CLONE(SocketAddress, saddr),
>> +        .tlscreds = tlscreds,
>> +        .do_negotiation = do_negotiation,
>> +
>> +        .initial_info.request_sizes = true,
>> +        .initial_info.structured_reply = true,
>> +        .initial_info.base_allocation = true,
>> +        .initial_info.x_dirty_bitmap = g_strdup(x_dirty_bitmap),
>> +        .initial_info.name = g_strdup(export_name ?: "")
>>       };
>>   
>>       qemu_mutex_init(&conn->mutex);
>> @@ -68,9 +86,59 @@ static void nbd_client_connection_do_free(NBDClientConnection *conn)
>>       }
>>       error_free(conn->err);
>>       qapi_free_SocketAddress(conn->saddr);
>> +    object_unref(OBJECT(conn->tlscreds));
>> +    g_free(conn->initial_info.x_dirty_bitmap);
>> +    g_free(conn->initial_info.name);
>>       g_free(conn);
>>   }
>>   
>> +/*
>> + * Connect to @addr and do NBD negotiation if @info is not null. If @tlscreds
>> + * given @outioc is provided. @outioc is provided only on success.  The call may
> 
> s/given/are given/
> s/provided/returned/g
> 
>> + * be cancelled in parallel by simply qio_channel_shutdown(sioc).
> 
> I assume by "in parallel" you mean "from another thread"; I'd suggest
> spelling this out.  I'm also wondering how safe it really is.  In general
> sockets should be fine with concurrent send()/recv() and shutdown(): the
> sender/receiver will be woken up with an error.  Dunno whether that's true
> for an arbitrary qio_channel.

Hmm, good catch. I'll look into it.

>  Also it may be worth documenting that the
> code path that cancels must leave all the cleanup up to the negotiation
> code; otherwise it risks conflicting.
> 
>> + */
>> +static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
>> +                       NBDExportInfo *info, QCryptoTLSCreds *tlscreds,
>> +                       QIOChannel **outioc, Error **errp)
>> +{
>> +    int ret;
>> +
>> +    if (outioc) {
>> +        *outioc = NULL;
>> +    }
>> +
>> +    ret = qio_channel_socket_connect_sync(sioc, addr, errp);
>> +    if (ret < 0) {
>> +        return ret;
>> +    }
>> +
>> +    if (!info) {
>> +        return 0;
>> +    }
>> +
>> +    ret = nbd_receive_negotiate(NULL, QIO_CHANNEL(sioc), tlscreds,
>> +                                tlscreds ? addr->u.inet.host : NULL,
>> +                                outioc, info, errp);
>> +    if (ret < 0) {
>> +        /*
>> +         * nbd_receive_negotiate() may setup tls ioc and return it even on
>> +         * failure path. In this case we should use it instead of original
>> +         * channel.
>> +         */
>> +        if (outioc && *outioc) {
>> +            qio_channel_close(QIO_CHANNEL(*outioc), NULL);
>> +            object_unref(OBJECT(*outioc));
>> +            *outioc = NULL;
>> +        } else {
>> +            qio_channel_close(QIO_CHANNEL(sioc), NULL);
>> +        }
>> +
>> +        return ret;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>>   static void *connect_thread_func(void *opaque)
>>   {
>>       NBDClientConnection *conn = opaque;
>> @@ -81,12 +149,19 @@ static void *connect_thread_func(void *opaque)
>>   
>>       error_free(conn->err);
>>       conn->err = NULL;
>> -    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
>> +    conn->updated_info = conn->initial_info;
>> +
>> +    ret = nbd_connect(conn->sioc, conn->saddr,
>> +                      conn->do_negotiation ? &conn->updated_info : NULL,
>> +                      conn->tlscreds, &conn->ioc, &conn->err);
>>       if (ret < 0) {
>>           object_unref(OBJECT(conn->sioc));
>>           conn->sioc = NULL;
>>       }
>>   
>> +    conn->updated_info.x_dirty_bitmap = NULL;
>> +    conn->updated_info.name = NULL;
>> +
>>       WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>>           assert(conn->running);
>>           conn->running = false;
>> @@ -94,8 +169,8 @@ static void *connect_thread_func(void *opaque)
>>               aio_co_schedule(NULL, conn->wait_co);
>>               conn->wait_co = NULL;
>>           }
>> +        do_free = conn->detached;
>>       }
>> -    do_free = conn->detached;
> 
> This looks like the response to my earlier comment ;)  This hunk just
> needs to be squashed into the previous patch.
> 
>>   
>>   
>>       if (do_free) {
>> @@ -131,12 +206,24 @@ void nbd_client_connection_release(NBDClientConnection *conn)
>>    *   if the thread has already succeeded in the background, and the user
>>    *     didn't get the result yet, just return it now
>>    *   otherwise, if the thread is not running, start a thread and wait for
>>    *     completion
>> + *
>> + * If @info is not NULL, also do nbd-negotiation after successful connection.
>> + * In this case info is used only as out parameter, and is fully initialized by
>> + * nbd_co_establish_connection(). "IN" fields of info as well as related only to
>> + * nbd_receive_export_list() would be zero (see description of NBDExportInfo in
>> + * include/block/nbd.h).
>>    */
>>   QIOChannelSocket *coroutine_fn
>> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>> +nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
>> +                            QIOChannel **ioc, Error **errp)
>>   {
>>       QemuThread thread;
>>   
>> +    if (conn->do_negotiation) {
>> +        assert(info);
>> +        assert(ioc);
>> +    }
>> +
>>       WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>>           /*
>>            * Don't call nbd_co_establish_connection() in several coroutines in
>> @@ -147,6 +234,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>>           if (!conn->running) {
>>               if (conn->sioc) {
>>                   /* Previous attempt finally succeeded in background */
>> +                if (conn->do_negotiation) {
>> +                    *ioc = g_steal_pointer(&conn->ioc);
>> +                    memcpy(info, &conn->updated_info, sizeof(*info));
>> +                }
>>                   return g_steal_pointer(&conn->sioc);
>>               }
>>   
>> @@ -178,6 +269,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>>           } else {
>>               error_propagate(errp, conn->err);
>>               conn->err = NULL;
>> +            if (conn->sioc && conn->do_negotiation) {
>> +                *ioc = g_steal_pointer(&conn->ioc);
>> +                memcpy(info, &conn->updated_info, sizeof(*info));
>> +            }
>>               return g_steal_pointer(&conn->sioc);
>>           }
>>       }
>> -- 
>> 2.29.2
>>


-- 
Best regards,
Vladimir



* Re: [PATCH v3 00/33] block/nbd: rework client connection
  2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
                   ` (32 preceding siblings ...)
  2021-04-16  8:09 ` [PATCH v3 33/33] block/nbd: drop connection_co Vladimir Sementsov-Ogievskiy
@ 2021-05-12  6:54 ` Paolo Bonzini
  33 siblings, 0 replies; 121+ messages in thread
From: Paolo Bonzini @ 2021-05-12  6:54 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, qemu-devel, mreitz, rvkagan, den

On 16/04/21 10:08, Vladimir Sementsov-Ogievskiy wrote:
> The series substitutes "[PATCH v2 00/10] block/nbd: move connection code to separate file"
> Supersedes: <20210408140827.332915-1-vsementsov@virtuozzo.com>
> so it's called v3
> 
> block/nbd.c is overcomplicated. These series is a big refactoring, which
> finally drops all the complications around drained sections and context
> switching, including abuse of bs->in_flight counter.
> 
> Also, at the end of the series we don't cancel reconnect on drained
> sections (and don't cancel requests waiting for reconnect on drained
> section begin), which fixes a problem reported by Roman.
> 
> The series is also available at tag up-nbd-client-connection-v3 in
> git https://src.openvz.org/scm/~vsementsov/qemu.git

I have independently done some rework of the connection state machine, 
mostly in order to use the QemuCoSleep API instead of aio_co_wake.  In 
general it seems to be independent of this work.  I'll review this series.
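
For context, a rough sketch of what the QemuCoSleep API provides (the API was in flux at the time, so treat the exact signatures as approximate):

    QemuCoSleep sleep_state = {};

    /* in the coroutine: sleep up to one second, unless woken earlier */
    qemu_co_sleep_ns_wakeable(&sleep_state, QEMU_CLOCK_REALTIME,
                              NANOSECONDS_PER_SECOND);

    /* elsewhere in the same AioContext: cut the sleep short */
    qemu_co_sleep_wake(&sleep_state);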

Paolo

> v3:
> Changes in first part of the series (main thing is not using refcnt, but instead (modified) Roman's patch):
> 
> 01-04: new
> 05: add Roman's r-b
> 06: new
> 07: now, new aio_co_schedule(NULL, thr->wait_co) is used
> 08: reworked, we now need also bool detached, as we don't have refcnt
> 09,10: add Roman's r-b
> 11: rebased, don't modify nbd_free_connect_thread() name at this point
> 12: add Roman's r-b
> 13: new
> 14: rebased
> 
> Other patches are new.
> 
> Roman Kagan (2):
>    block/nbd: fix channel object leak
>    block/nbd: ensure ->connection_thread is always valid
> 
> Vladimir Sementsov-Ogievskiy (31):
>    block/nbd: fix how state is cleared on nbd_open() failure paths
>    block/nbd: nbd_client_handshake(): fix leak of s->ioc
>    block/nbd: BDRVNBDState: drop unused connect_err and connect_status
>    util/async: aio_co_schedule(): support reschedule in same ctx
>    block/nbd: simplify waking of nbd_co_establish_connection()
>    block/nbd: drop thr->state
>    block/nbd: bs-independent interface for nbd_co_establish_connection()
>    block/nbd: make nbd_co_establish_connection_cancel() bs-independent
>    block/nbd: rename NBDConnectThread to NBDClientConnection
>    block/nbd: introduce nbd_client_connection_new()
>    block/nbd: introduce nbd_client_connection_release()
>    nbd: move connection code from block/nbd to nbd/client-connection
>    nbd/client-connection: use QEMU_LOCK_GUARD
>    nbd/client-connection: add possibility of negotiation
>    nbd/client-connection: implement connection retry
>    nbd/client-connection: shutdown connection on release
>    block/nbd: split nbd_handle_updated_info out of nbd_client_handshake()
>    block/nbd: use negotiation of NBDClientConnection
>    qemu-socket: pass monitor link to socket_get_fd directly
>    block/nbd: pass monitor directly to connection thread
>    block/nbd: nbd_teardown_connection() don't touch s->sioc
>    block/nbd: drop BDRVNBDState::sioc
>    nbd/client-connection: return only one io channel
>    block-coroutine-wrapper: allow non bdrv_ prefix
>    block/nbd: split nbd_co_do_establish_connection out of
>      nbd_reconnect_attempt
>    nbd/client-connection: do qio_channel_set_delay(false)
>    nbd/client-connection: add option for non-blocking connection attempt
>    block/nbd: reuse nbd_co_do_establish_connection() in nbd_open()
>    block/nbd: add nbd_clinent_connected() helper
>    block/nbd: safer transition to receiving request
>    block/nbd: drop connection_co
> 
>   block/coroutines.h                 |   6 +
>   include/block/aio.h                |   2 +-
>   include/block/nbd.h                |  19 +
>   include/io/channel-socket.h        |  20 +
>   include/qemu/sockets.h             |   2 +-
>   block/nbd.c                        | 908 +++++++----------------------
>   io/channel-socket.c                |  17 +-
>   nbd/client-connection.c            | 364 ++++++++++++
>   nbd/client.c                       |   2 -
>   tests/unit/test-util-sockets.c     |  16 +-
>   util/async.c                       |   8 +
>   util/qemu-sockets.c                |  10 +-
>   nbd/meson.build                    |   1 +
>   scripts/block-coroutine-wrapper.py |   7 +-
>   14 files changed, 666 insertions(+), 716 deletions(-)
>   create mode 100644 nbd/client-connection.c
> 




* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-04-16  8:08 ` [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx Vladimir Sementsov-Ogievskiy
  2021-04-23 10:09   ` Roman Kagan
@ 2021-05-12  6:56   ` Paolo Bonzini
  2021-05-12  7:15     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 121+ messages in thread
From: Paolo Bonzini @ 2021-05-12  6:56 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, Fam Zheng, qemu-devel, mreitz, rvkagan, Stefan Hajnoczi, den

On 16/04/21 10:08, Vladimir Sementsov-Ogievskiy wrote:
> With the following patch we want to wake a coroutine from a thread, and
> that doesn't work with aio_co_wake:
> Assume we have no iothreads.
> Assume we have a coroutine A, which waits in the yield point for an
> external aio_co_wake(), and no progress can be made until that happens.
> The main thread is in a blocking aio_poll() (for example, in blk_read()).
>
> Now, in a separate thread we do aio_co_wake(). It calls aio_co_enter(),
> which goes through the last "else" branch and does aio_context_acquire(ctx).

I don't understand.  Why doesn't aio_co_enter go through the ctx != 
qemu_get_current_aio_context() branch and just do aio_co_schedule?  That 
was at least the idea behind aio_co_wake and aio_co_enter.

Paolo




* Re: [PATCH v3 31/33] block/nbd: add nbd_clinent_connected() helper
  2021-04-16  8:09 ` [PATCH v3 31/33] block/nbd: add nbd_clinent_connected() helper Vladimir Sementsov-Ogievskiy
@ 2021-05-12  7:06   ` Paolo Bonzini
  2021-05-12  7:19     ` Vladimir Sementsov-Ogievskiy
  2021-06-03 21:08   ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Paolo Bonzini @ 2021-05-12  7:06 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, qemu-devel, mreitz, rvkagan, den

On 16/04/21 10:09, Vladimir Sementsov-Ogievskiy wrote:
> We already have two similar helpers for other state. Let's add another
> one for convenience.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy<vsementsov@virtuozzo.com>

The whole usage of load-acquire for the connected state is completely 
meaningless, but that's not your fault. :)  For now let's keep it as is, 
and we'll clean it up later.
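
For reference, the helper in question presumably boils down to something like this (a sketch; the actual patch may differ in details):

    static inline bool nbd_client_connected(BDRVNBDState *s)
    {
        return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTED;
    }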

Paolo




* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-05-12  6:56   ` Paolo Bonzini
@ 2021-05-12  7:15     ` Vladimir Sementsov-Ogievskiy
  2021-05-13 21:04       ` Paolo Bonzini
  0 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-05-12  7:15 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-block
  Cc: qemu-devel, eblake, mreitz, kwolf, rvkagan, den, Stefan Hajnoczi,
	Fam Zheng

12.05.2021 09:56, Paolo Bonzini wrote:
> On 16/04/21 10:08, Vladimir Sementsov-Ogievskiy wrote:
>> With the following patch we want to wake a coroutine from a thread, and
>> that doesn't work with aio_co_wake:
>> Assume we have no iothreads.
>> Assume we have a coroutine A, which waits in the yield point for an
>> external aio_co_wake(), and no progress can be made until that happens.
>> The main thread is in a blocking aio_poll() (for example, in blk_read()).
>>
>> Now, in a separate thread we do aio_co_wake(). It calls aio_co_enter(),
>> which goes through the last "else" branch and does aio_context_acquire(ctx).
> 
> I don't understand.  Why doesn't aio_co_enter go through the ctx != qemu_get_current_aio_context() branch and just do aio_co_schedule?  That was at least the idea behind aio_co_wake and aio_co_enter.
> 

Because ctx is exactly qemu_get_current_aio_context(): we are not in an iothread but in the nbd connection thread, so qemu_get_current_aio_context() returns qemu_aio_context.
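
For readers following along, a rough paraphrase of aio_co_enter() as of this series (reconstructed, so details may differ from the actual util/async.c):

    void aio_co_enter(AioContext *ctx, Coroutine *co)
    {
        if (ctx != qemu_get_current_aio_context()) {
            aio_co_schedule(ctx, co); /* the safe cross-context path */
            return;
        }

        if (qemu_in_coroutine()) {
            /* queue the wakeup on the currently running coroutine */
            Coroutine *self = qemu_coroutine_self();
            QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, co, co_queue_next);
        } else {
            /*
             * The branch taken from the nbd connect thread: the thread has
             * no AioContext of its own, qemu_get_current_aio_context()
             * falls back to qemu_aio_context, so ctx compares equal above,
             * and aio_context_acquire() blocks while the main thread sits
             * in a blocking aio_poll() holding that context.
             */
            aio_context_acquire(ctx);
            qemu_coroutine_enter(co);
            aio_context_release(ctx);
        }
    }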


-- 
Best regards,
Vladimir



* Re: [PATCH v3 31/33] block/nbd: add nbd_clinent_connected() helper
  2021-05-12  7:06   ` Paolo Bonzini
@ 2021-05-12  7:19     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-05-12  7:19 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-block; +Cc: qemu-devel, eblake, mreitz, kwolf, rvkagan, den

12.05.2021 10:06, Paolo Bonzini wrote:
> On 16/04/21 10:09, Vladimir Sementsov-Ogievskiy wrote:
>> We already have two similar helpers for other state. Let's add another
>> one for convenience.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 
> The whole usage of load-acquire for the connected state is completely meaningless, but that's not your fault. :)  For now let's keep it as is, and we'll clean it up later.
> 

It would be nice, thanks!


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake()
  2021-04-16  8:08 ` [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake() Vladimir Sementsov-Ogievskiy
@ 2021-05-12  8:40   ` Roman Kagan
  2021-06-03 16:29   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Roman Kagan @ 2021-05-12  8:40 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: kwolf, qemu-block, qemu-devel, mreitz, den

On Fri, Apr 16, 2021 at 11:08:57AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> To be reused in the following patch.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 99 ++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 57 insertions(+), 42 deletions(-)

Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly
  2021-04-19  9:34   ` Daniel P. Berrangé
  2021-04-19 10:09     ` Vladimir Sementsov-Ogievskiy
@ 2021-05-12  9:40     ` Roman Kagan
  2021-05-12  9:59       ` Daniel P. Berrangé
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-05-12  9:40 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: kwolf, Vladimir Sementsov-Ogievskiy, qemu-block, qemu-devel,
	mreitz, Gerd Hoffmann, den

On Mon, Apr 19, 2021 at 10:34:49AM +0100, Daniel P. Berrangé wrote:
> On Fri, Apr 16, 2021 at 11:08:59AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > Detecting the monitor by the current coroutine works badly when we are
> > not in coroutine context. And that's exactly the case in the nbd reconnect
> > code, where qio_channel_socket_connect_sync() is called from a thread.
> > 
> > Add a possibility to pass the monitor by hand, to be used in the following
> > commit.
> > 
> > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > ---
> >  include/io/channel-socket.h    | 20 ++++++++++++++++++++
> >  include/qemu/sockets.h         |  2 +-
> >  io/channel-socket.c            | 17 +++++++++++++----
> >  tests/unit/test-util-sockets.c | 16 ++++++++--------
> >  util/qemu-sockets.c            | 10 +++++-----
> >  5 files changed, 47 insertions(+), 18 deletions(-)
> > 
> > diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> > index e747e63514..6d0915420d 100644
> > --- a/include/io/channel-socket.h
> > +++ b/include/io/channel-socket.h
> > @@ -78,6 +78,23 @@ qio_channel_socket_new_fd(int fd,
> >                            Error **errp);
> >  
> >  
> > +/**
> > + * qio_channel_socket_connect_sync_mon:
> > + * @ioc: the socket channel object
> > + * @addr: the address to connect to
> > + * @mon: current monitor. If NULL, it will be detected by
> > + *       current coroutine.
> > + * @errp: pointer to a NULL-initialized error object
> > + *
> > + * Attempt to connect to the address @addr. This method
> > + * will run in the foreground so the caller will not regain
> > + * execution control until the connection is established or
> > + * an error occurs.
> > + */
> > +int qio_channel_socket_connect_sync_mon(QIOChannelSocket *ioc,
> > +                                        SocketAddress *addr,
> > +                                        Monitor *mon,
> > +                                        Error **errp);
> 
> I don't really like exposing the concept of the QEMU monitor in
> the IO layer APIs. IMHO these ought to remain completely separate
> subsystems from the API pov,

Agreed. 

> and we ought to fix this problem by
> making monitor_cur() work better in the scenario required.

Would it make sense instead to resolve the fdstr into an actual file
descriptor number in the context where monitor_cur() works and makes
sense, prior to passing it to the connection thread?

Roman.


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly
  2021-05-12  9:40     ` Roman Kagan
@ 2021-05-12  9:59       ` Daniel P. Berrangé
  2021-05-13 11:02         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Daniel P. Berrangé @ 2021-05-12  9:59 UTC (permalink / raw)
  To: Roman Kagan, Vladimir Sementsov-Ogievskiy, qemu-block,
	qemu-devel, eblake, mreitz, kwolf, den, Gerd Hoffmann

On Wed, May 12, 2021 at 12:40:03PM +0300, Roman Kagan wrote:
> On Mon, Apr 19, 2021 at 10:34:49AM +0100, Daniel P. Berrangé wrote:
> > On Fri, Apr 16, 2021 at 11:08:59AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > Detecting the monitor by the current coroutine works badly when we are
> > > not in coroutine context. And that's exactly the case in the nbd reconnect
> > > code, where qio_channel_socket_connect_sync() is called from a thread.
> > > 
> > > Add a possibility to pass the monitor by hand, to be used in the following
> > > commit.
> > > 
> > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > > ---
> > >  include/io/channel-socket.h    | 20 ++++++++++++++++++++
> > >  include/qemu/sockets.h         |  2 +-
> > >  io/channel-socket.c            | 17 +++++++++++++----
> > >  tests/unit/test-util-sockets.c | 16 ++++++++--------
> > >  util/qemu-sockets.c            | 10 +++++-----
> > >  5 files changed, 47 insertions(+), 18 deletions(-)
> > > 
> > > diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> > > index e747e63514..6d0915420d 100644
> > > --- a/include/io/channel-socket.h
> > > +++ b/include/io/channel-socket.h
> > > @@ -78,6 +78,23 @@ qio_channel_socket_new_fd(int fd,
> > >                            Error **errp);
> > >  
> > >  
> > > +/**
> > > + * qio_channel_socket_connect_sync_mon:
> > > + * @ioc: the socket channel object
> > > + * @addr: the address to connect to
> > > + * @mon: current monitor. If NULL, it will be detected by
> > > + *       current coroutine.
> > > + * @errp: pointer to a NULL-initialized error object
> > > + *
> > > + * Attempt to connect to the address @addr. This method
> > > + * will run in the foreground so the caller will not regain
> > > + * execution control until the connection is established or
> > > + * an error occurs.
> > > + */
> > > +int qio_channel_socket_connect_sync_mon(QIOChannelSocket *ioc,
> > > +                                        SocketAddress *addr,
> > > +                                        Monitor *mon,
> > > +                                        Error **errp);
> > 
> > I don't really like exposing the concept of the QEMU monitor in
> > the IO layer APIs. IMHO these ought to remain completely separate
> > subsystems from the API pov,
> 
> Agreed. 
> 
> > and we ought to fix this problem by
> > making monitor_cur() work better in the scenario required.
> 
> Would it make sense instead to resolve the fdstr into actual file
> descriptor number in the context where monitor_cur() works and makes
> sense, prior to passing it to the connection thread?

Yes, arguably the root problem is caused by the util/qemu-sockets.c
code directly referring to the current monitor. This has resulted in
the slightly strange scenario where we have two distinct semantics
for the 'fd' SocketAddressType

 @fd: decimal is for file descriptor number, otherwise a file descriptor name.
      Named file descriptors are permitted in monitor commands, in combination
      with the 'getfd' command. Decimal file descriptors are permitted at
      startup or other contexts where no monitor context is active.

Now these distinct semantics kinda make sense from the POV of the
management application, but we've let the dual semantics propagate
all the way down our I/O stack.

Following your idea, we could have a 'socket_address_resolve_monitor_fd'
method which takes a 'SocketAddress' and returns a new 'SocketAddress'
with the real FD filled in.  We then call this method in any codepath
which is getting a 'SocketAddress' struct from the monitor.

The util/qemu-sockets.c code then only has to think about real FDs,
and the monitor_get_fd() call only occurs right at the top level.
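
For illustration, a minimal sketch of the shape such a helper could take
(the function name is from the proposal above; the body is an assumption,
not actual QEMU code):

SocketAddress *socket_address_resolve_monitor_fd(const SocketAddress *addr,
                                                 Error **errp)
{
    SocketAddress *ret;
    int fd;

    if (addr->type != SOCKET_ADDRESS_TYPE_FD) {
        return QAPI_CLONE(SocketAddress, addr); /* nothing to resolve */
    }

    /* resolve the named fd while monitor_cur() is still meaningful */
    fd = monitor_fd_param(monitor_cur(), addr->u.fd.str, errp);
    if (fd < 0) {
        return NULL;
    }

    ret = g_new0(SocketAddress, 1);
    ret->type = SOCKET_ADDRESS_TYPE_FD;
    ret->u.fd.str = g_strdup_printf("%d", fd); /* plain decimal fd now */
    return ret;
}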

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly
  2021-05-12  9:59       ` Daniel P. Berrangé
@ 2021-05-13 11:02         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-05-13 11:02 UTC (permalink / raw)
  To: Daniel P. Berrangé,
	Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den,
	Gerd Hoffmann

12.05.2021 12:59, Daniel P. Berrangé wrote:
> On Wed, May 12, 2021 at 12:40:03PM +0300, Roman Kagan wrote:
>> On Mon, Apr 19, 2021 at 10:34:49AM +0100, Daniel P. Berrangé wrote:
>>> On Fri, Apr 16, 2021 at 11:08:59AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>>>> Detecting the monitor by the current coroutine works badly when we are
>>>> not in coroutine context. And that's exactly the case in the nbd reconnect
>>>> code, where qio_channel_socket_connect_sync() is called from a thread.
>>>>
>>>> Add a possibility to pass the monitor by hand, to be used in the following
>>>> commit.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>   include/io/channel-socket.h    | 20 ++++++++++++++++++++
>>>>   include/qemu/sockets.h         |  2 +-
>>>>   io/channel-socket.c            | 17 +++++++++++++----
>>>>   tests/unit/test-util-sockets.c | 16 ++++++++--------
>>>>   util/qemu-sockets.c            | 10 +++++-----
>>>>   5 files changed, 47 insertions(+), 18 deletions(-)
>>>>
>>>> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
>>>> index e747e63514..6d0915420d 100644
>>>> --- a/include/io/channel-socket.h
>>>> +++ b/include/io/channel-socket.h
>>>> @@ -78,6 +78,23 @@ qio_channel_socket_new_fd(int fd,
>>>>                             Error **errp);
>>>>   
>>>>   
>>>> +/**
>>>> + * qio_channel_socket_connect_sync_mon:
>>>> + * @ioc: the socket channel object
>>>> + * @addr: the address to connect to
>>>> + * @mon: current monitor. If NULL, it will be detected by
>>>> + *       current coroutine.
>>>> + * @errp: pointer to a NULL-initialized error object
>>>> + *
>>>> + * Attempt to connect to the address @addr. This method
>>>> + * will run in the foreground so the caller will not regain
>>>> + * execution control until the connection is established or
>>>> + * an error occurs.
>>>> + */
>>>> +int qio_channel_socket_connect_sync_mon(QIOChannelSocket *ioc,
>>>> +                                        SocketAddress *addr,
>>>> +                                        Monitor *mon,
>>>> +                                        Error **errp);
>>>
>>> I don't really like exposing the concept of the QEMU monitor in
>>> the IO layer APIs. IMHO these ought to remain completely separate
>>> subsystems from the API pov,
>>
>> Agreed.
>>
>>> and we ought to fix this problem by
>>> making monitor_cur() work better in the scenario required.
>>
>> Would it make sense instead to resolve the fdstr into actual file
>> descriptor number in the context where monitor_cur() works and makes
>> sense, prior to passing it to the connection thread?
> 
> Yes, arguably the root problem is caused by the util/qemu-sockets.c
> code directly referring to the current monitor. This has resulted in
> the slightly strange scenario where we have two distinct semantics
> for the 'fd' SocketAddressType
> 
>   @fd: decimal is for file descriptor number, otherwise a file descriptor name.
>        Named file descriptors are permitted in monitor commands, in combination
>        with the 'getfd' command. Decimal file descriptors are permitted at
>        startup or other contexts where no monitor context is active.
> 
> Now these distinct semantics kinda make sense from the POV of the
> management application, but we've let the dual semantics propagate
> all the way down our I/O stack.
> 
> Following your idea, we could have a 'socket_address_resolve_monitor_fd'
> method which takes a 'SocketAddress' and returns a new 'SocketAddress'
> with the real FD filled in.  We then call this method in any codepath
> which is getting a 'SocketAddress' struct from the monitor.
> 
> The util/qemu-sockets.c code then only has to think about real FDs,
> and the monitor_get_fd() call only occurs right at the top level.
> 

Reasonable, I'll try it that way


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-05-12  7:15     ` Vladimir Sementsov-Ogievskiy
@ 2021-05-13 21:04       ` Paolo Bonzini
  2021-05-14 17:27         ` Roman Kagan
  2021-06-08 18:45         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 2 replies; 121+ messages in thread
From: Paolo Bonzini @ 2021-05-13 21:04 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, Fam Zheng, qemu-devel, mreitz, rvkagan, Stefan Hajnoczi, den

On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:
>>>
>>
>> I don't understand.  Why doesn't aio_co_enter go through the ctx != 
>> qemu_get_current_aio_context() branch and just do aio_co_schedule?  
>> That was at least the idea behind aio_co_wake and aio_co_enter.
>>
> 
> Because ctx is exactly qemu_get_current_aio_context(), as we are not in
> an iothread but in the nbd connection thread. So
> qemu_get_current_aio_context() returns qemu_aio_context.

So the problem is that threads other than the main thread and
the I/O thread should not return qemu_aio_context.  The vCPU thread
may need to return it when running with BQL taken, though.

Something like this (untested):

diff --git a/include/block/aio.h b/include/block/aio.h
index 5f342267d5..10fcae1515 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -691,10 +691,13 @@ void aio_co_enter(AioContext *ctx, struct Coroutine *co);
   * Return the AioContext whose event loop runs in the current thread.
   *
   * If called from an IOThread this will be the IOThread's AioContext.  If
- * called from another thread it will be the main loop AioContext.
+ * called from the main thread or with the "big QEMU lock" taken it
+ * will be the main loop AioContext.
   */
  AioContext *qemu_get_current_aio_context(void);
  
+void qemu_set_current_aio_context(AioContext *ctx);
+
  /**
   * aio_context_setup:
   * @ctx: the aio context
diff --git a/iothread.c b/iothread.c
index 7f086387be..22b967e77c 100644
--- a/iothread.c
+++ b/iothread.c
@@ -39,11 +39,23 @@ DECLARE_CLASS_CHECKERS(IOThreadClass, IOTHREAD,
  #define IOTHREAD_POLL_MAX_NS_DEFAULT 0ULL
  #endif
  
-static __thread IOThread *my_iothread;
+static __thread AioContext *my_aiocontext;
+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+    assert(!my_aiocontext);
+    my_aiocontext = ctx;
+}
  
  AioContext *qemu_get_current_aio_context(void)
  {
-    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
+    if (my_aiocontext) {
+        return my_aiocontext;
+    }
+    if (qemu_mutex_iothread_locked()) {
+        return qemu_get_aio_context();
+    }
+    return NULL;
  }
  
  static void *iothread_run(void *opaque)
@@ -56,7 +68,7 @@ static void *iothread_run(void *opaque)
       * in this new thread uses glib.
       */
      g_main_context_push_thread_default(iothread->worker_context);
-    my_iothread = iothread;
+    qemu_set_current_aio_context(iothread->ctx);
      iothread->thread_id = qemu_get_thread_id();
      qemu_sem_post(&iothread->init_done_sem);
  
diff --git a/stubs/iothread.c b/stubs/iothread.c
index 8cc9e28c55..25ff398894 100644
--- a/stubs/iothread.c
+++ b/stubs/iothread.c
@@ -6,3 +6,7 @@ AioContext *qemu_get_current_aio_context(void)
  {
      return qemu_get_aio_context();
  }
+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+}
diff --git a/tests/unit/iothread.c b/tests/unit/iothread.c
index afde12b4ef..cab38b3da8 100644
--- a/tests/unit/iothread.c
+++ b/tests/unit/iothread.c
@@ -30,13 +30,26 @@ struct IOThread {
      bool stopping;
  };
  
-static __thread IOThread *my_iothread;
+static __thread AioContext *my_aiocontext;
+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+    assert(!my_aiocontext);
+    my_aiocontext = ctx;
+}
  
  AioContext *qemu_get_current_aio_context(void)
  {
-    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
+    if (my_aiocontext) {
+        return my_aiocontext;
+    }
+    if (qemu_mutex_iothread_locked()) {
+        return qemu_get_aio_context();
+    }
+    return NULL;
  }
  
+
  static void iothread_init_gcontext(IOThread *iothread)
  {
      GSource *source;
@@ -54,7 +67,7 @@ static void *iothread_run(void *opaque)
  
      rcu_register_thread();
  
-    my_iothread = iothread;
+    qemu_set_current_aio_context(iothread->ctx);
      qemu_mutex_lock(&iothread->init_done_lock);
      iothread->ctx = aio_context_new(&error_abort);
  
diff --git a/util/main-loop.c b/util/main-loop.c
index d9c55df6f5..4ae5b23e99 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -170,6 +170,7 @@ int qemu_init_main_loop(Error **errp)
      if (!qemu_aio_context) {
          return -EMFILE;
      }
+    qemu_set_current_aio_context(qemu_aio_context);
      qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
      gpollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
      src = aio_get_g_source(qemu_aio_context);

If it works for you, I can post it as a formal patch.

Paolo



^ permalink raw reply related	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-05-13 21:04       ` Paolo Bonzini
@ 2021-05-14 17:27         ` Roman Kagan
  2021-05-14 21:19           ` Paolo Bonzini
  2021-06-08 18:45         ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 121+ messages in thread
From: Roman Kagan @ 2021-05-14 17:27 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kwolf, Fam Zheng, Vladimir Sementsov-Ogievskiy, qemu-block,
	qemu-devel, mreitz, Stefan Hajnoczi, den

On Thu, May 13, 2021 at 11:04:37PM +0200, Paolo Bonzini wrote:
> On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:
> > > > 
> > > 
> > > I don't understand.  Why doesn't aio_co_enter go through the ctx !=
> > > qemu_get_current_aio_context() branch and just do aio_co_schedule?
> > > That was at least the idea behind aio_co_wake and aio_co_enter.
> > > 
> > 
> > Because ctx is exactly qemu_get_current_aio_context(), as we are not in
> > an iothread but in the nbd connection thread. So
> > qemu_get_current_aio_context() returns qemu_aio_context.
> 
> So the problem is that threads other than the main thread and
> the I/O thread should not return qemu_aio_context.  The vCPU thread
> may need to return it when running with BQL taken, though.

I'm not sure this is the only case.

AFAICS your patch has basically the same effect as Vladimir's
patch "util/async: aio_co_enter(): do aio_co_schedule in general case"
(https://lore.kernel.org/qemu-devel/20210408140827.332915-4-vsementsov@virtuozzo.com/).
That one was found to break e.g. aio=threads cases.  I guessed it
implicitly relied upon aio_co_enter() acquiring the aio_context but we
didn't dig further to pinpoint the exact scenario.

Roman.

> Something like this (untested):
> 
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 5f342267d5..10fcae1515 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -691,10 +691,13 @@ void aio_co_enter(AioContext *ctx, struct Coroutine *co);
>   * Return the AioContext whose event loop runs in the current thread.
>   *
>   * If called from an IOThread this will be the IOThread's AioContext.  If
> - * called from another thread it will be the main loop AioContext.
> + * called from the main thread or with the "big QEMU lock" taken it
> + * will be the main loop AioContext.
>   */
>  AioContext *qemu_get_current_aio_context(void);
> +void qemu_set_current_aio_context(AioContext *ctx);
> +
>  /**
>   * aio_context_setup:
>   * @ctx: the aio context
> diff --git a/iothread.c b/iothread.c
> index 7f086387be..22b967e77c 100644
> --- a/iothread.c
> +++ b/iothread.c
> @@ -39,11 +39,23 @@ DECLARE_CLASS_CHECKERS(IOThreadClass, IOTHREAD,
>  #define IOTHREAD_POLL_MAX_NS_DEFAULT 0ULL
>  #endif
> -static __thread IOThread *my_iothread;
> +static __thread AioContext *my_aiocontext;
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +    assert(!my_aiocontext);
> +    my_aiocontext = ctx;
> +}
>  AioContext *qemu_get_current_aio_context(void)
>  {
> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
> +    if (my_aiocontext) {
> +        return my_aiocontext;
> +    }
> +    if (qemu_mutex_iothread_locked()) {
> +        return qemu_get_aio_context();
> +    }
> +    return NULL;
>  }
>  static void *iothread_run(void *opaque)
> @@ -56,7 +68,7 @@ static void *iothread_run(void *opaque)
>       * in this new thread uses glib.
>       */
>      g_main_context_push_thread_default(iothread->worker_context);
> -    my_iothread = iothread;
> +    qemu_set_current_aio_context(iothread->ctx);
>      iothread->thread_id = qemu_get_thread_id();
>      qemu_sem_post(&iothread->init_done_sem);
> diff --git a/stubs/iothread.c b/stubs/iothread.c
> index 8cc9e28c55..25ff398894 100644
> --- a/stubs/iothread.c
> +++ b/stubs/iothread.c
> @@ -6,3 +6,7 @@ AioContext *qemu_get_current_aio_context(void)
>  {
>      return qemu_get_aio_context();
>  }
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +}
> diff --git a/tests/unit/iothread.c b/tests/unit/iothread.c
> index afde12b4ef..cab38b3da8 100644
> --- a/tests/unit/iothread.c
> +++ b/tests/unit/iothread.c
> @@ -30,13 +30,26 @@ struct IOThread {
>      bool stopping;
>  };
> -static __thread IOThread *my_iothread;
> +static __thread AioContext *my_aiocontext;
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +    assert(!my_aiocontext);
> +    my_aiocontext = ctx;
> +}
>  AioContext *qemu_get_current_aio_context(void)
>  {
> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
> +    if (my_aiocontext) {
> +        return my_aiocontext;
> +    }
> +    if (qemu_mutex_iothread_locked()) {
> +        return qemu_get_aio_context();
> +    }
> +    return NULL;
>  }
> +
>  static void iothread_init_gcontext(IOThread *iothread)
>  {
>      GSource *source;
> @@ -54,7 +67,7 @@ static void *iothread_run(void *opaque)
>      rcu_register_thread();
> -    my_iothread = iothread;
> +    qemu_set_current_aio_context(iothread->ctx);
>      qemu_mutex_lock(&iothread->init_done_lock);
>      iothread->ctx = aio_context_new(&error_abort);
> diff --git a/util/main-loop.c b/util/main-loop.c
> index d9c55df6f5..4ae5b23e99 100644
> --- a/util/main-loop.c
> +++ b/util/main-loop.c
> @@ -170,6 +170,7 @@ int qemu_init_main_loop(Error **errp)
>      if (!qemu_aio_context) {
>          return -EMFILE;
>      }
> +    qemu_set_current_aio_context(qemu_aio_context);
>      qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
>      gpollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
>      src = aio_get_g_source(qemu_aio_context);
> 
> If it works for you, I can post it as a formal patch.
> 
> Paolo
> 


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-05-14 17:27         ` Roman Kagan
@ 2021-05-14 21:19           ` Paolo Bonzini
  0 siblings, 0 replies; 121+ messages in thread
From: Paolo Bonzini @ 2021-05-14 21:19 UTC (permalink / raw)
  To: Roman Kagan, Vladimir Sementsov-Ogievskiy, qemu-block,
	qemu-devel, eblake, mreitz, kwolf, den, Stefan Hajnoczi,
	Fam Zheng

On 14/05/21 19:27, Roman Kagan wrote:
> 
> AFAICS your patch has basically the same effect as Vladimir's
> patch "util/async: aio_co_enter(): do aio_co_schedule in general case"
> (https://lore.kernel.org/qemu-devel/20210408140827.332915-4-vsementsov@virtuozzo.com/).
> That one was found to break e.g. aio=threads cases.  I guessed it
> implicitly relied upon aio_co_enter() acquiring the aio_context but we
> didn't dig further to pinpoint the exact scenario.

That one is much more intrusive, because it goes through a bottom half 
unnecessarily in the case of aio_co_wake being called from an I/O 
callback (or from another bottom half).  I'll test my patch with 
aio=threads.

Paolo



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 01/33] block/nbd: fix channel object leak
  2021-04-16  8:08 ` [PATCH v3 01/33] block/nbd: fix channel object leak Vladimir Sementsov-Ogievskiy
@ 2021-05-24 21:31   ` Eric Blake
  2021-05-25  4:47     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-05-24 21:31 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:39AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> From: Roman Kagan <rvkagan@yandex-team.ru>
> 
> nbd_free_connect_thread leaks the channel object if it hasn't been
> stolen.
> 
> Unref it and fix the leak.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
>  block/nbd.c | 1 +
>  1 file changed, 1 insertion(+)

Does nbd_yank() have a similar problem?

At any rate, this makes sense to me.
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 01/33] block/nbd: fix channel object leak
  2021-05-24 21:31   ` Eric Blake
@ 2021-05-25  4:47     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-05-25  4:47 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

25.05.2021 00:31, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:08:39AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> From: Roman Kagan <rvkagan@yandex-team.ru>
>>
>> nbd_free_connect_thread leaks the channel object if it hasn't been
>> stolen.
>>
>> Unref it and fix the leak.
>>
>> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
>> ---
>>   block/nbd.c | 1 +
>>   1 file changed, 1 insertion(+)
> 
> Does nbd_yank() have a similar problem?

No, I think not. nbd_yank() just shuts down the socket to cancel in-flight requests. It doesn't release the state.
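
For context, nbd_yank() at this point in the series looks roughly like this
(a paraphrased sketch, not the literal block/nbd.c code):

static void nbd_yank(void *opaque)
{
    BlockDriverState *bs = opaque;
    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;

    /* fail in-flight requests by killing the transport; nothing is freed */
    qatomic_store_release(&s->state, NBD_CLIENT_QUIT);
    qio_channel_shutdown(QIO_CHANNEL(s->sioc), QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
}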

> 
> At any rate, this makes sense to me.
> Reviewed-by: Eric Blake <eblake@redhat.com>
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths
  2021-04-16  8:08 ` [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths Vladimir Sementsov-Ogievskiy
  2021-04-21 14:00   ` Roman Kagan
@ 2021-06-01 21:39   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-01 21:39 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:40AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We have two "return error" paths in nbd_open() after
> nbd_process_options(). Actually we should call nbd_clear_bdrvstate()
> on these paths. Interesting that nbd_process_options() calls
> nbd_clear_bdrvstate() by itself.
> 
> Let's fix leaks and refactor things to be more obvious:
> 
> - initialize yank at top of nbd_open()
> - move yank cleanup to nbd_clear_bdrvstate()
> - refactor nbd_open() so that all failure paths except for
>   yank-register go through nbd_clear_bdrvstate()
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 36 ++++++++++++++++++------------------
>  1 file changed, 18 insertions(+), 18 deletions(-)
>

Reviewed-by: Eric Blake <eblake@redhat.com>

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 03/33] block/nbd: ensure ->connection_thread is always valid
  2021-04-16  8:08 ` [PATCH v3 03/33] block/nbd: ensure ->connection_thread is always valid Vladimir Sementsov-Ogievskiy
@ 2021-06-01 21:41   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-01 21:41 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:41AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> From: Roman Kagan <rvkagan@yandex-team.ru>
> 
> Simplify lifetime management of BDRVNBDState->connect_thread by
> delaying the possible cleanup of it until the BDRVNBDState itself goes
> away.
> 
> This also reverts
>  0267101af6 "block/nbd: fix possible use after free of s->connect_thread"
> as now s->connect_thread can't be cleared until the very end.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
>  [vsementsov: rebase, revert 0267101af6 changes]
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 56 ++++++++++++++++++++---------------------------------
>  1 file changed, 21 insertions(+), 35 deletions(-)
> 

>  static void nbd_clear_bdrvstate(BlockDriverState *bs)
>  {
>      BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
> +    NBDConnectThread *thr = s->connect_thread;
> +    bool thr_running;
> +
> +    qemu_mutex_lock(&thr->mutex);
> +    thr_running = thr->state == CONNECT_THREAD_RUNNING;
> +    if (thr_running) {
> +        thr->state = CONNECT_THREAD_RUNNING_DETACHED;
> +    }
> +    qemu_mutex_unlock(&thr->mutex);
> +
> +    /* the runaway thread will clean it up itself */

s/clean it up/clean up/

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 05/33] block/nbd: BDRVNBDState: drop unused connect_err and connect_status
  2021-04-16  8:08 ` [PATCH v3 05/33] block/nbd: BDRVNBDState: drop unused connect_err and connect_status Vladimir Sementsov-Ogievskiy
@ 2021-06-01 21:43   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-01 21:43 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:43AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> These fields are write-only. Drop them.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
>  block/nbd.c | 12 ++----------
>  1 file changed, 2 insertions(+), 10 deletions(-)
>

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection()
  2021-04-16  8:08 ` [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
  2021-04-27 21:44   ` Roman Kagan
@ 2021-06-02 19:05   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-02 19:05 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:45AM +0300, Vladimir Sementsov-Ogievskiy wrote:

Grammar suggestions:

> Instead of connect_bh, bh_ctx and wait_connect fields we can live with
> only one link to waiting coroutine, protected by mutex.

Instead of managing connect_bh, bh_ctx, and wait_connect fields, we
can use a single link to the waiting coroutine with proper mutex
protection.

> 
> So new logic is:
> 
> nbd_co_establish_connection() sets wait_co under mutex, release the
> mutex and do yield(). Note, that wait_co may be scheduled by thread
> immediately after unlocking the mutex. Still, in main thread (or
> iothread) we'll not reach the code for entering the coroutine until the
> yield() so we are safe.

nbd_co_establish_connection() sets wait_co under the mutex, releases
the mutex, then yield()s.  Note that wait_co may be scheduled by the
thread immediately after unlocking the mutex.  Still, the main thread
(or iothread) will not reach the code for entering the coroutine until
the yield(), so we are safe.

> 
> Both connect_thread_func() and nbd_co_establish_connection_cancel() do
> the following to handle wait_co:
> 
> Under mutex, if thr->wait_co is not NULL, call aio_co_wake() (which
> never tries to acquire aio context since previous commit, so we are
> safe to do it under thr->mutex) and set thr->wait_co to NULL.
> This way we protect ourselves of scheduling it twice.

Under the mutex, if thr->wait_co is not NULL, call aio_co_wake() (the
previous commit ensures it never tries to acquire the aio context, so
we are safe even while under thr->mutex), then set thr->wait_co to
NULL.  This way, we avoid scheduling the coroutine twice.
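
A sketch of the two halves of that protocol (field names as in the patch;
this is not the literal hunk):

/* waiter side, in nbd_co_establish_connection() */
qemu_mutex_lock(&thr->mutex);
thr->wait_co = qemu_coroutine_self();
qemu_mutex_unlock(&thr->mutex);
qemu_coroutine_yield(); /* woken by the thread or by the cancel path */

/* waker side, in connect_thread_func() and the cancel path */
qemu_mutex_lock(&thr->mutex);
if (thr->wait_co) {
    aio_co_wake(thr->wait_co); /* no ctx acquire since the previous commit */
    thr->wait_co = NULL;       /* guarantees at most one wake */
}
qemu_mutex_unlock(&thr->mutex);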

> 
> Also this commit make nbd_co_establish_connection() less connected to
> bs (we have generic pointer to the coroutine, not use s->connection_co
> directly). So, we are on the way of splitting connection API out of
> nbd.c (which is overcomplicated now).

Also, this commit reduces the dependence of
nbd_co_establish_connection() on the internals of bs (we now use a
generic pointer to the coroutine, instead of direct use of
s->connection_co).  This is a step towards splitting the connection
API out of nbd.c.

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 49 +++++++++----------------------------------------
>  1 file changed, 9 insertions(+), 40 deletions(-)
> 

> +++ b/block/nbd.c

> @@ -101,10 +95,10 @@ typedef struct NBDConnectThread {
>      QIOChannelSocket *sioc;
>      Error *err;
>  
> -    /* state and bh_ctx are protected by mutex */
>      QemuMutex mutex;
> +    /* All further fields are protected by mutex */
>      NBDConnectThreadState state; /* current state of the thread */
> -    AioContext *bh_ctx; /* where to schedule bh (NULL means don't schedule) */
> +    Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */

I'm not sure if that comment is the most legible, but I'm not coming
up with an alternative.  Maybe:

/*
 * if non-NULL, which coroutine to wake in
 * nbd_co_establish_connection() after yield()
 */


But the simplification looks nice, and I didn't spot any obvious
problems with the refactoring.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 09/33] block/nbd: bs-independent interface for nbd_co_establish_connection()
  2021-04-16  8:08 ` [PATCH v3 09/33] block/nbd: bs-independent interface for nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
@ 2021-06-02 19:14   ` Eric Blake
  2021-06-08 10:12     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-02 19:14 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:47AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are going to split connection code to separate file. Now we are

to a separate

> ready to give nbd_co_establish_connection() clean and bs-independent
> interface.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
>  block/nbd.c | 49 +++++++++++++++++++++++++++++++------------------
>  1 file changed, 31 insertions(+), 18 deletions(-)
> 

> -static int coroutine_fn
> -nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
> +/*
> + * Get a new connection in context of @thr:
> + *   if thread is running, wait for completion

if the thread is running,...

> + *   if thread is already succeeded in background, and user didn't get the

if the thread already succeeded in the background,...

> + *     result, just return it now
> + *   otherwise if thread is not running, start a thread and wait for completion

otherwise, the thread is not running, so start...

> + */
> +static coroutine_fn QIOChannelSocket *
> +nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
>  {
> +    QIOChannelSocket *sioc = NULL;
>      QemuThread thread;
> -    BDRVNBDState *s = bs->opaque;
> -    NBDConnectThread *thr = s->connect_thread;
> -
> -    assert(!s->sioc);
>  
>      qemu_mutex_lock(&thr->mutex);
>  
> +    /*
> +     * Don't call nbd_co_establish_connection() in several coroutines in
> +     * parallel. Only one call at once is supported.
> +     */
> +    assert(!thr->wait_co);
> +
>      if (!thr->running) {
>          if (thr->sioc) {
>              /* Previous attempt finally succeeded in background */
> -            goto out;
> +            sioc = g_steal_pointer(&thr->sioc);
> +            qemu_mutex_unlock(&thr->mutex);

Worth using QEMU_LOCK_GUARD() here?

> +
> +            return sioc;
>          }
> +
>          thr->running = true;
>          error_free(thr->err);
>          thr->err = NULL;

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 10/33] block/nbd: make nbd_co_establish_connection_cancel() bs-independent
  2021-04-16  8:08 ` [PATCH v3 10/33] block/nbd: make nbd_co_establish_connection_cancel() bs-independent Vladimir Sementsov-Ogievskiy
@ 2021-06-02 21:18   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-02 21:18 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:48AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> nbd_co_establish_connection_cancel() actually needs only a pointer to
> NBDConnectThread. So, make it clean.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
>  block/nbd.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index dd97ea0916..dab73bdf3b 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c

>  /*
>   * nbd_co_establish_connection_cancel
> - * Cancel nbd_co_establish_connection asynchronously: it will finish soon, to
> - * allow drained section to begin.
> + * Cancel nbd_co_establish_connection() asynchronously. Note, that it doesn't
> + * stop the thread itself neither close the socket. It just safely wakes
> + * nbd_co_establish_connection() sleeping in the yield().

Grammar suggestion:

Note that this function neither directly stops the thread nor closes
the socket, but rather safely wakes nbd_co_establish_connection()
which is sleeping in yield(), triggering subsequent cleanup there.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection
  2021-04-16  8:08 ` [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection Vladimir Sementsov-Ogievskiy
  2021-04-27 22:28   ` Roman Kagan
@ 2021-06-02 21:21   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-02 21:21 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:49AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are going to move connection code to own file and want clear names
> and APIs.

We are going to move the connection code to its own file, and want
clear names and APIs first.

> 
> The structure is shared between the user and (possibly) several runs of
> the connect-thread. So it's wrong to call it "thread". Let's rename it to
> something more generic.
> 
> Appropriately rename connect_thread and thr variables to conn.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 137 ++++++++++++++++++++++++++--------------------------
>  1 file changed, 68 insertions(+), 69 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 12/33] block/nbd: introduce nbd_client_connection_new()
  2021-04-16  8:08 ` [PATCH v3 12/33] block/nbd: introduce nbd_client_connection_new() Vladimir Sementsov-Ogievskiy
@ 2021-06-02 21:22   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-02 21:22 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:50AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> This is the last step of creating bs-independent nbd connection
> interface. With next commit we can finally move it to separate file.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
>  block/nbd.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
>

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release()
  2021-04-16  8:08 ` [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release() Vladimir Sementsov-Ogievskiy
  2021-04-27 22:35   ` Roman Kagan
@ 2021-06-02 21:27   ` Eric Blake
  2021-06-03 11:59     ` Vladimir Sementsov-Ogievskiy
  2021-06-08 10:00     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 2 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-02 21:27 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:51AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 43 ++++++++++++++++++++++++++-----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)

Commit message said what, but not why.  Presumably this is one more
bit of refactoring to make the upcoming file split in a later patch
easier.  But patch 12/33 said it was the last step before a new file,
and this patch isn't yet at a new file.  Missing some continuity in
your commit messages?

> 
> diff --git a/block/nbd.c b/block/nbd.c
> index 21a4039359..8531d019b2 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -118,7 +118,7 @@ typedef struct BDRVNBDState {
>      NBDClientConnection *conn;
>  } BDRVNBDState;
>  
> -static void nbd_free_connect_thread(NBDClientConnection *conn);
> +static void nbd_client_connection_release(NBDClientConnection *conn);

Is it necessary for a forward declaration, or can you just implement
the new function prior to its users?

At any rate, the refactoring looks sane.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release()
  2021-06-02 21:27   ` Eric Blake
@ 2021-06-03 11:59     ` Vladimir Sementsov-Ogievskiy
  2021-06-08 10:00     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-03 11:59 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 00:27, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:08:51AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 43 ++++++++++++++++++++++++++-----------------
>>   1 file changed, 26 insertions(+), 17 deletions(-)
> 
> Commit message said what, but not why.  Presumably this is one more
> bit of refactoring to make the upcoming file split in a later patch
> easier.  But patch 12/33 said it was the last step before a new file,
> and this patch isn't yet at a new file.  Missing some continuity in
> your commit messages?

Seems like one more small additional step after the last step :)

> 
>>
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 21a4039359..8531d019b2 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -118,7 +118,7 @@ typedef struct BDRVNBDState {
>>       NBDClientConnection *conn;
>>   } BDRVNBDState;
>>   
>> -static void nbd_free_connect_thread(NBDClientConnection *conn);
>> +static void nbd_client_connection_release(NBDClientConnection *conn);
> 
> Is it necessary for a forward declaration, or can you just implement
> the new function prior to its users?

Hmm, seems it could be easily moved.

> 
> At any rate, the refactoring looks sane.
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection
  2021-04-16  8:08 ` [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection Vladimir Sementsov-Ogievskiy
  2021-04-27 22:45   ` Roman Kagan
@ 2021-06-03 15:55   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 15:55 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:52AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We now have a bs-independent connection API, which consists of four
> functions:
> 
>   nbd_client_connection_new()
>   nbd_client_connection_unref()
>   nbd_co_establish_connection()
>   nbd_co_establish_connection_cancel()
> 
> Move them to a separate file together with the NBDClientConnection
> structure, which becomes private to the new API.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h     |  11 +++
>  block/nbd.c             | 187 -----------------------------------
>  nbd/client-connection.c | 212 ++++++++++++++++++++++++++++++++++++++++
>  nbd/meson.build         |   1 +

> +++ b/include/block/nbd.h
> @@ -406,4 +406,15 @@ const char *nbd_info_lookup(uint16_t info);
>  const char *nbd_cmd_lookup(uint16_t info);
>  const char *nbd_err_lookup(int err);
>  
> +/* nbd/client-connection.c */
> +typedef struct NBDClientConnection NBDClientConnection;
> +
> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
> +void nbd_client_connection_release(NBDClientConnection *conn);
> +
> +QIOChannelSocket *coroutine_fn
> +nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);

To me, the placement of coroutine_fn looks a bit odd here, like it is
the name of a pointer declaration.  But I see that we have precedent
for doing it that way (such as block.c:bdrv_co_enter()); the
difference being that none of the other locations split the return
type on one line and the function name on another.  I don't really
have any changes to suggest, though, so I'm fine keeping it the way
you wrote.

> +++ b/nbd/client-connection.c

There may be fallout in v4 based on what you tweak in the code that
got moved here, but the split to a new file looks sane to me.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 17/33] nbd/client-connection: implement connection retry
  2021-04-16  8:08 ` [PATCH v3 17/33] nbd/client-connection: implement connection retry Vladimir Sementsov-Ogievskiy
  2021-05-11 20:54   ` Roman Kagan
@ 2021-06-03 16:17   ` Eric Blake
  2021-06-03 17:49     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 16:17 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:55AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Add an option for thread to retry connection until success. We'll use

for a thread to retry connection until it succeeds.

> nbd/client-connection both for reconnect and for initial connection in
> nbd_open(), so we need a possibility to use the same NBDClientConnection
> instance to connect once in nbd_open() and then use retry semantics for
> reconnect.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h     |  2 ++
>  nbd/client-connection.c | 55 +++++++++++++++++++++++++++++------------
>  2 files changed, 41 insertions(+), 16 deletions(-)
> 
> +++ b/nbd/client-connection.c
> @@ -36,6 +36,8 @@ struct NBDClientConnection {
>      NBDExportInfo initial_info;
>      bool do_negotiation;
>  
> +    bool do_retry;
> +
>      /*
>       * Result of last attempt. Valid in FAIL and SUCCESS states.
>       * If you want to steal error, don't forget to set pointer to NULL.
> @@ -52,6 +54,15 @@ struct NBDClientConnection {
>      Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>  };
>  
> +/*
> + * The function isn't protected by any mutex, so call it when thread is not

so only call it when the thread is not yet running

or maybe even

only call it when the client connection attempt has not yet started

> + * running.
> + */
> +void nbd_client_connection_enable_retry(NBDClientConnection *conn)
> +{
> +    conn->do_retry = true;
> +}
> +
>  NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>                                                 bool do_negotiation,
>                                                 const char *export_name,
> @@ -144,24 +155,37 @@ static void *connect_thread_func(void *opaque)
>      NBDClientConnection *conn = opaque;
>      bool do_free;
>      int ret;
> +    uint64_t timeout = 1;
> +    uint64_t max_timeout = 16;
> +
> +    while (true) {
> +        conn->sioc = qio_channel_socket_new();
> +
> +        error_free(conn->err);
> +        conn->err = NULL;
> +        conn->updated_info = conn->initial_info;
> +
> +        ret = nbd_connect(conn->sioc, conn->saddr,
> +                          conn->do_negotiation ? &conn->updated_info : NULL,
> +                          conn->tlscreds, &conn->ioc, &conn->err);
> +        conn->updated_info.x_dirty_bitmap = NULL;
> +        conn->updated_info.name = NULL;

I'm not quite sure I follow the allocation here: if we passed in
&conn->updated_info which got modified in-place by nbd_connect, then
are we risking a memory leak by ignoring the x_dirty_bitmap and name
set by that call?

> +
> +        if (ret < 0) {
> +            object_unref(OBJECT(conn->sioc));
> +            conn->sioc = NULL;
> +            if (conn->do_retry) {
> +                sleep(timeout);

This is a bare sleep in a function not marked as coroutine_fn.  Do we
need to instead use coroutine sleep for better response to an early
exit if initialization is taking too long?
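
(For illustration only: the coroutine sleep referred to here is roughly

    qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, timeout * NANOSECONDS_PER_SECOND);

but that is only callable from coroutine context, so using it would mean
restructuring connect_thread_func() itself; a thread-side alternative would
be an interruptible wait such as qemu_cond_timedwait() on a condition
variable.)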

> +                if (timeout < max_timeout) {
> +                    timeout *= 2;
> +                }
> +                continue;
> +            }
> +        }
>  
> -    conn->sioc = qio_channel_socket_new();
> -
> -    error_free(conn->err);
> -    conn->err = NULL;
> -    conn->updated_info = conn->initial_info;
> -
> -    ret = nbd_connect(conn->sioc, conn->saddr,
> -                      conn->do_negotiation ? &conn->updated_info : NULL,
> -                      conn->tlscreds, &conn->ioc, &conn->err);
> -    if (ret < 0) {
> -        object_unref(OBJECT(conn->sioc));
> -        conn->sioc = NULL;
> +        break;
>      }
>  
> -    conn->updated_info.x_dirty_bitmap = NULL;
> -    conn->updated_info.name = NULL;
> -
>      WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>          assert(conn->running);
>          conn->running = false;
> @@ -172,7 +196,6 @@ static void *connect_thread_func(void *opaque)
>          do_free = conn->detached;
>      }
>  
> -
>      if (do_free) {
>          nbd_client_connection_do_free(conn);

Spurious hunk?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 18/33] nbd/client-connection: shutdown connection on release
  2021-04-16  8:08 ` [PATCH v3 18/33] nbd/client-connection: shutdown connection on release Vladimir Sementsov-Ogievskiy
  2021-05-11 21:08   ` Roman Kagan
@ 2021-06-03 16:20   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 16:20 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:56AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Now, when thread can do negotiation and retry, it may run relatively

when a thread

> long. We need a mechanism to stop it, when user is not interested in

when the user

> result anymore. So, on nbd_client_connection_release() let's shutdown

in a result any more.

> the socked, and do not retry connection if thread is detached.

socket

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  nbd/client-connection.c | 36 ++++++++++++++++++++++++++----------
>  1 file changed, 26 insertions(+), 10 deletions(-)
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake()
  2021-04-16  8:08 ` [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake() Vladimir Sementsov-Ogievskiy
  2021-05-12  8:40   ` Roman Kagan
@ 2021-06-03 16:29   ` Eric Blake
  2021-06-09 17:23     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 16:29 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:57AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> To be reused in the following patch.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 99 ++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 57 insertions(+), 42 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index 5e63caaf4b..03ffe95231 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -318,6 +318,50 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
>      return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
>  }
>  
> +/*
> + * Check s->info updated by negotiation process.

The parameter name is bs, not s; so this comment is a bit confusing...

> + * Update @bs correspondingly to new options.
> + */
> +static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
> +{
> +    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;

...until here.  Maybe rewrite the entire comment as:

Update @bs with information learned during a completed negotiation
process.  Return failure if the server's advertised options are
incompatible with the client's needs.

> +    int ret;
> +
> +    if (s->x_dirty_bitmap) {
> +        if (!s->info.base_allocation) {
> +            error_setg(errp, "requested x-dirty-bitmap %s not found",
> +                       s->x_dirty_bitmap);
> +            return -EINVAL;
> +        }
> +        if (strcmp(s->x_dirty_bitmap, "qemu:allocation-depth") == 0) {
> +            s->alloc_depth = true;
> +        }
> +    }
> +
> +    if (s->info.flags & NBD_FLAG_READ_ONLY) {
> +        ret = bdrv_apply_auto_read_only(bs, "NBD export is read-only", errp);
> +        if (ret < 0) {
> +            return ret;
> +        }
> +    }
> +
> +    if (s->info.flags & NBD_FLAG_SEND_FUA) {
> +        bs->supported_write_flags = BDRV_REQ_FUA;
> +        bs->supported_zero_flags |= BDRV_REQ_FUA;

Code motion, so it is correct, but it looks odd to use = for one
assignment and |= for the other.  Using |= in both places would be
more consistent.

> +    }
> +
> +    if (s->info.flags & NBD_FLAG_SEND_WRITE_ZEROES) {
> +        bs->supported_zero_flags |= BDRV_REQ_MAY_UNMAP;
> +        if (s->info.flags & NBD_FLAG_SEND_FAST_ZERO) {
> +            bs->supported_zero_flags |= BDRV_REQ_NO_FALLBACK;
> +        }
> +    }
> +
> +    trace_nbd_client_handshake_success(s->export);
> +
> +    return 0;
> +}
> +
>  static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>  {
>      int ret;
> @@ -1579,49 +1623,13 @@ static int nbd_client_handshake(BlockDriverState *bs, Error **errp)

As updating the comment doesn't affect code correctness,
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 17/33] nbd/client-connection: implement connection retry
  2021-06-03 16:17   ` Eric Blake
@ 2021-06-03 17:49     ` Vladimir Sementsov-Ogievskiy
  2021-06-08 10:38       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-03 17:49 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 19:17, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:08:55AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Add an option for thread to retry connection until success. We'll use
> 
> for a thread to retry connection until it succeeds.
> 
>> nbd/client-connection both for reconnect and for initial connection in
>> nbd_open(), so we need a possibility to use same NBDClientConnection
>> instance to connect once in nbd_open() and then use retry semantics for
>> reconnect.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/nbd.h     |  2 ++
>>   nbd/client-connection.c | 55 +++++++++++++++++++++++++++++------------
>>   2 files changed, 41 insertions(+), 16 deletions(-)
>>
>> +++ b/nbd/client-connection.c
>> @@ -36,6 +36,8 @@ struct NBDClientConnection {
>>       NBDExportInfo initial_info;
>>       bool do_negotiation;
>>   
>> +    bool do_retry;
>> +
>>       /*
>>        * Result of last attempt. Valid in FAIL and SUCCESS states.
>>        * If you want to steal error, don't forget to set pointer to NULL.
>> @@ -52,6 +54,15 @@ struct NBDClientConnection {
>>       Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>>   };
>>   
>> +/*
>> + * The function isn't protected by any mutex, so call it when thread is not
> 
> so only call it when the thread is not yet running
> 
> or maybe even
> 
> only call it when the client connection attempt has not yet started
> 
>> + * running.
>> + */
>> +void nbd_client_connection_enable_retry(NBDClientConnection *conn)
>> +{
>> +    conn->do_retry = true;
>> +}
>> +
>>   NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>>                                                  bool do_negotiation,
>>                                                  const char *export_name,
>> @@ -144,24 +155,37 @@ static void *connect_thread_func(void *opaque)
>>       NBDClientConnection *conn = opaque;
>>       bool do_free;
>>       int ret;
>> +    uint64_t timeout = 1;
>> +    uint64_t max_timeout = 16;
>> +
>> +    while (true) {
>> +        conn->sioc = qio_channel_socket_new();
>> +
>> +        error_free(conn->err);
>> +        conn->err = NULL;
>> +        conn->updated_info = conn->initial_info;
>> +
>> +        ret = nbd_connect(conn->sioc, conn->saddr,
>> +                          conn->do_negotiation ? &conn->updated_info : NULL,
>> +                          conn->tlscreds, &conn->ioc, &conn->err);
>> +        conn->updated_info.x_dirty_bitmap = NULL;
>> +        conn->updated_info.name = NULL;
> 
> I'm not quite sure I follow the allocation here: if we passed in
> &conn->updated_info which got modified in-place by nbd_connect, then
> are we risking a memory leak by ignoring the x_dirty_bitmap and name
> set by that call?

Yes, that looks strange :\. Will check when preparing the new version, and either fix it or leave a comment here.

> 
>> +
>> +        if (ret < 0) {
>> +            object_unref(OBJECT(conn->sioc));
>> +            conn->sioc = NULL;
>> +            if (conn->do_retry) {
>> +                sleep(timeout);
> 
> This is a bare sleep in a function not marked as coroutine_fn.  Do we
> need to instead use coroutine sleep for better response to an early
> exit if initialization is taking too long?

We are in a separate, manually created thread, which knows nothing about coroutines, iothreads, AIO contexts, etc. I think a bare sleep is what should be here.
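
To illustrate, a minimal sketch of the pattern (not the actual QEMU code; try_connect() is a hypothetical stand-in for nbd_connect(), and cancellation is assumed to come from shutting down the socket, as a later patch does):

    #include <stdint.h>
    #include <unistd.h>

    /* Hypothetical stand-in for nbd_connect(); returns < 0 on failure. */
    static int try_connect(void *opaque);

    static void *retry_thread_func(void *opaque)
    {
        uint64_t timeout = 1;
        const uint64_t max_timeout = 16;

        /* Plain pthread: no coroutine machinery, so a bare sleep() is fine. */
        while (try_connect(opaque) < 0) {
            sleep(timeout);
            if (timeout < max_timeout) {
                timeout *= 2;  /* exponential backoff: 1, 2, 4, 8, 16, 16, ... */
            }
        }
        return NULL;
    }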

> 
>> +                if (timeout < max_timeout) {
>> +                    timeout *= 2;
>> +                }
>> +                continue;
>> +            }
>> +        }
>>   
>> -    conn->sioc = qio_channel_socket_new();
>> -
>> -    error_free(conn->err);
>> -    conn->err = NULL;
>> -    conn->updated_info = conn->initial_info;
>> -
>> -    ret = nbd_connect(conn->sioc, conn->saddr,
>> -                      conn->do_negotiation ? &conn->updated_info : NULL,
>> -                      conn->tlscreds, &conn->ioc, &conn->err);
>> -    if (ret < 0) {
>> -        object_unref(OBJECT(conn->sioc));
>> -        conn->sioc = NULL;
>> +        break;
>>       }
>>   
>> -    conn->updated_info.x_dirty_bitmap = NULL;
>> -    conn->updated_info.name = NULL;
>> -
>>       WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>>           assert(conn->running);
>>           conn->running = false;
>> @@ -172,7 +196,6 @@ static void *connect_thread_func(void *opaque)
>>           do_free = conn->detached;
>>       }
>>   
>> -
>>       if (do_free) {
>>           nbd_client_connection_do_free(conn);
> 
> Spurious hunk?
> 

will drop

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 20/33] block/nbd: use negotiation of NBDClientConnection
  2021-04-16  8:08 ` [PATCH v3 20/33] block/nbd: use negotiation of NBDClientConnection Vladimir Sementsov-Ogievskiy
@ 2021-06-03 18:05   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 18:05 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:08:58AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Use a new possibility of negotiation in connect thread. Now we are on
> the way of simplifying connection_co. We want to move the whole
> reconnect code to NBDClientConnection. NBDClientConnection already
> updated to support negotiation and retry. Use now the first thing.

Maybe:

Now that we can opt in to negotiation as part of the client connection
thread, use that to simplify connection_co.  This is another step on
the way to moving all reconnect code into NBDClientConnection.

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 44 ++++++++++++++++++++++++++++++--------------
>  1 file changed, 30 insertions(+), 14 deletions(-)
>

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 22/33] block/nbd: pass monitor directly to connection thread
  2021-04-16  8:09 ` [PATCH v3 22/33] block/nbd: pass monitor directly to connection thread Vladimir Sementsov-Ogievskiy
@ 2021-06-03 18:16   ` Eric Blake
  2021-06-03 18:31     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 18:16 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:00AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> monitor_cur() is used by socket_get_fd, but it doesn't work in
> connection thread. Let's monitor directly to cover this thing. We are
> going to unify connection establishing path in nbd_open and reconnect,
> so we should support fd-passing.

Grammar suggestion:

Let's pass in the monitor directly to work around this.  This gets us
closer to unifying the path for establishing a connection in nbd_open
and reconnect, by supporting fd-passing.


But given Dan's review on 21/33, I suspect you won't be using this
patch in this form after all (instead, the caller of
nbd_client_connection_new will use the new monitor_resolve_fd or
whatever we call that, so that nbd_client_connection_new remains
oblivious to the monitor).

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 22/33] block/nbd: pass monitor directly to connection thread
  2021-06-03 18:16   ` Eric Blake
@ 2021-06-03 18:31     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-03 18:31 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 21:16, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:09:00AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> monitor_cur() is used by socket_get_fd, but it doesn't work in
>> connection thread. Let's monitor directly to cover this thing. We are
>> going to unify connection establishing path in nbd_open and reconnect,
>> so we should support fd-passing.
> 
> Grammar suggestion:
> 
> Let's pass in the monitor directly to work around this.  This gets us
> closer to unifying the path for establishing a connection in nbd_open
> and reconnect, by supporting fd-passing.
> 
> 
> But given Dan's review on 21/33, I suspect you won't be using this
> patch in this form after all (instead, the caller of
> nbd_client_connection_new will use the new monitor_resolve_fd or
> whatever we call that, so that nbd_client_connection_new remains
> oblivious to the monitor).
> 

Yes... I even have some patches for it locally. It seems I didn't send them; I don't remember why :/ Will check tomorrow.

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 23/33] block/nbd: nbd_teardown_connection() don't touch s->sioc
  2021-04-16  8:09 ` [PATCH v3 23/33] block/nbd: nbd_teardown_connection() don't touch s->sioc Vladimir Sementsov-Ogievskiy
@ 2021-06-03 19:04   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 19:04 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:01AM +0300, Vladimir Sementsov-Ogievskiy wrote:

For the subject line, might read better as:

block/nbd: don't touch s->sioc in nbd_teardown_connection()

> Negotiation during reconnect is now done in thread, and s->sioc is not

in a thread

> available during negotiation. Negotiation in thread will be cancelled
> by nbd_client_connection_release() called from nbd_clear_bdrvstate().
> So, we don't need this code chunk anymore.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 4 ----
>  1 file changed, 4 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 24/33] block/nbd: drop BDRVNBDState::sioc
  2021-04-16  8:09 ` [PATCH v3 24/33] block/nbd: drop BDRVNBDState::sioc Vladimir Sementsov-Ogievskiy
@ 2021-06-03 19:12   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 19:12 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:02AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Currently sioc pointer is used just to pass from socket-connection to
> nbd negotiation. Drop the field, and use local variables instead. With
> next commit we'll update nbd/client-connection.c to behave
> appropriately (return only top-most ioc, not two channels).
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 98 ++++++++++++++++++++++++++---------------------------
>  1 file changed, 48 insertions(+), 50 deletions(-)
>

It's a bit hard to review 's->ioc' vs. 'sioc' when reading aloud ;)

But the change looks reasonable, and I'm not spotting any memory leak
in the refactoring.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 25/33] nbd/client-connection: return only one io channel
  2021-04-16  8:09 ` [PATCH v3 25/33] nbd/client-connection: return only one io channel Vladimir Sementsov-Ogievskiy
@ 2021-06-03 19:58   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 19:58 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:03AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> block/nbd doesn't need underlying sioc channel anymore. So, we can
> update nbd/client-connection interface to return only one top-most io
> channel, which is more straight forward.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h     |  4 ++--
>  block/nbd.c             | 13 ++-----------
>  nbd/client-connection.c | 33 +++++++++++++++++++++++++--------
>  3 files changed, 29 insertions(+), 21 deletions(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix
  2021-04-16  8:09 ` [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix Vladimir Sementsov-Ogievskiy
@ 2021-06-03 20:00   ` Eric Blake
  2021-06-03 20:53   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 20:00 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, Eduardo Habkost, qemu-block, qemu-devel, mreitz, rvkagan,
	Cleber Rosa, den

On Fri, Apr 16, 2021 at 11:09:04AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are going to reuse the script to generate a qcow2_ function in
> further commit. Prepare the script now.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  scripts/block-coroutine-wrapper.py | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 27/33] block/nbd: split nbd_co_do_establish_connection out of nbd_reconnect_attempt
  2021-04-16  8:09 ` [PATCH v3 27/33] block/nbd: split nbd_co_do_establish_connection out of nbd_reconnect_attempt Vladimir Sementsov-Ogievskiy
@ 2021-06-03 20:04   ` Eric Blake
  2021-06-04  5:30     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 20:04 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:05AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Split out part, that we want to reuse for nbd_open().

Split out the part that we want to reuse for nbd_open().

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 79 +++++++++++++++++++++++++++--------------------------
>  1 file changed, 41 insertions(+), 38 deletions(-)
> 
> diff --git a/block/nbd.c b/block/nbd.c
> index 15b5921725..59971bfba8 100644
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -361,11 +361,49 @@ static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
>      return 0;
>  }
>  
> -static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
> +static int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)

Given the _co_ in the name, don't you need a coroutine_fn marker?

Otherwise looks sane.
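
For reference, the convention I mean would make the declaration look like this (just a sketch of the marker placement):

    /*
     * coroutine_fn documents that the function must run in coroutine
     * context; the _co_ name prefix and the marker should agree.
     */
    static int coroutine_fn
    nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp);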

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 28/33] nbd/client-connection: do qio_channel_set_delay(false)
  2021-04-16  8:09 ` [PATCH v3 28/33] nbd/client-connection: do qio_channel_set_delay(false) Vladimir Sementsov-Ogievskiy
@ 2021-06-03 20:48   ` Eric Blake
  2021-06-04  5:32     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 20:48 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:06AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> nbd_open() does it (through nbd_establish_connection()).
> Actually we lost that call on reconnect path in 1dc4718d849e1a1fe
> "block/nbd: use non-blocking connect: fix vm hang on connect()"
> when we have introduced reconnect thread.

s/have introduced/introduced the/

> 
> Fixes: 1dc4718d849e1a1fe665ce5241ed79048cfa2cfc
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  nbd/client-connection.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
> index 36d2c7c693..00efff293f 100644
> --- a/nbd/client-connection.c
> +++ b/nbd/client-connection.c
> @@ -126,6 +126,8 @@ static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
>          return ret;
>      }
>  
> +    qio_channel_set_delay(QIO_CHANNEL(sioc), false);
> +
>      if (!info) {
>          return 0;
>      }

Reviewed-by: Eric Blake <eblake@redhat.com>

Is this bug fix something that can be cherry-picked in isolation, or
does it depend too much on the rest of the series?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 29/33] nbd/client-connection: add option for non-blocking connection attempt
  2021-04-16  8:09 ` [PATCH v3 29/33] nbd/client-connection: add option for non-blocking connection attempt Vladimir Sementsov-Ogievskiy
@ 2021-06-03 20:51   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 20:51 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:07AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We'll need a possibility of non-blocking nbd_co_establish_connection(),
> so that it returns immediately, and it returns success only if
> connections was previously established in background.

only if a connection was

> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/block/nbd.h     | 2 +-
>  block/nbd.c             | 2 +-
>  nbd/client-connection.c | 8 +++++++-
>  3 files changed, 9 insertions(+), 3 deletions(-)
>

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix
  2021-04-16  8:09 ` [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix Vladimir Sementsov-Ogievskiy
  2021-06-03 20:00   ` Eric Blake
@ 2021-06-03 20:53   ` Eric Blake
  2021-06-04  5:29     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 20:53 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, Eduardo Habkost, qemu-block, qemu-devel, mreitz, rvkagan,
	Cleber Rosa, den

On Fri, Apr 16, 2021 at 11:09:04AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are going to reuse the script to generate a qcow2_ function in

Is qcow2_ right here?  Looking ahead to patch 30/33, it looks like you
meant nbd_, at least in the context of this series.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 30/33] block/nbd: reuse nbd_co_do_establish_connection() in nbd_open()
  2021-04-16  8:09 ` [PATCH v3 30/33] block/nbd: reuse nbd_co_do_establish_connection() in nbd_open() Vladimir Sementsov-Ogievskiy
@ 2021-06-03 20:57   ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 20:57 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:08AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> The only last step we need to reuse the function is coroutine-wrapper.
> nbd_open() may be called from non-coroutine context. So, generate the
> wrapper and use it.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/coroutines.h |   6 +++
>  block/nbd.c        | 101 ++-------------------------------------------
>  2 files changed, 10 insertions(+), 97 deletions(-)
> 
> diff --git a/block/coroutines.h b/block/coroutines.h
> index 4cfb4946e6..514d169d23 100644
> --- a/block/coroutines.h
> +++ b/block/coroutines.h
> @@ -66,4 +66,10 @@ int coroutine_fn bdrv_co_readv_vmstate(BlockDriverState *bs,
>  int coroutine_fn bdrv_co_writev_vmstate(BlockDriverState *bs,
>                                          QEMUIOVector *qiov, int64_t pos);
>  
> +int generated_co_wrapper
> +nbd_do_establish_connection(BlockDriverState *bs, Error **errp);
> +int coroutine_fn
> +nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp);

Tagged coroutine_fn here,...

> +++ b/block/nbd.c
>  
> -static int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
> +int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)

...but not here.  Is it worth being consistent?

> @@ -2056,22 +1974,11 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>                                          s->x_dirty_bitmap, s->tlscreds,
>                                          monitor_cur());
>  
> -    /*
> -     * establish TCP connection, return error if it fails
> -     * TODO: Configurable retry-until-timeout behaviour.
> -     */
> -    sioc = nbd_establish_connection(bs, s->saddr, errp);
> -    if (!sioc) {
> -        ret = -ECONNREFUSED;
> -        goto fail;
> -    }
> -
> -    ret = nbd_client_handshake(bs, sioc, errp);
> +    /* TODO: Configurable retry-until-timeout behaviour.*/

Space before */

Nice diffstat, proving the refactoring worked.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 31/33] block/nbd: add nbd_clinent_connected() helper
  2021-04-16  8:09 ` [PATCH v3 31/33] block/nbd: add nbd_clinent_connected() helper Vladimir Sementsov-Ogievskiy
  2021-05-12  7:06   ` Paolo Bonzini
@ 2021-06-03 21:08   ` Eric Blake
  1 sibling, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-03 21:08 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:09AM +0300, Vladimir Sementsov-Ogievskiy wrote:

In the subject: s/clinent/client/

> We already have two similar helpers for other state. Let's add another
> one for convenience.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 25 ++++++++++++++-----------
>  1 file changed, 14 insertions(+), 11 deletions(-)
>

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 32/33] block/nbd: safer transition to receiving request
  2021-04-16  8:09 ` [PATCH v3 32/33] block/nbd: safer transition to receiving request Vladimir Sementsov-Ogievskiy
@ 2021-06-03 21:11   ` Eric Blake
  2021-06-04  5:36     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 21:11 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:10AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> req->receiving is a flag of request being in one concrete yield point
> in nbd_co_do_receive_one_chunk().
> 
> Such kind of boolean flag is always better to unset before scheduling
> the coroutine, to avoid double scheduling. So, let's be more careful.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 

> @@ -614,7 +616,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
>      if (qiov) {
>          qio_channel_set_cork(s->ioc, true);
>          rc = nbd_send_request(s->ioc, request);
> -        if (nbd_clinet_connected(s) && rc >= 0) {
> +        if (nbd_client_connected(s) && rc >= 0) {

Ouch - typo fix in clinet seems unrelated in this fix; please hoist it
into the correct point in the series so that we don't have the typo in
the first place.

Otherwise,
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 33/33] block/nbd: drop connection_co
  2021-04-16  8:09 ` [PATCH v3 33/33] block/nbd: drop connection_co Vladimir Sementsov-Ogievskiy
  2021-04-16  8:14   ` Vladimir Sementsov-Ogievskiy
@ 2021-06-03 21:27   ` Eric Blake
  2021-06-04  5:39     ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 121+ messages in thread
From: Eric Blake @ 2021-06-03 21:27 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Fri, Apr 16, 2021 at 11:09:11AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> OK, that's a big rewrite of the logic.
> 
> Pre-patch we have an always running coroutine - connection_co. It does
> reply receiving and reconnecting. And it leads to a lot of difficult
> and unobvious code around drained sections and context switch. We also
> abuse bs->in_flight counter which is increased for connection_co and
> temporary decreased in points where we want to allow drained section to
> begin. One of these place is in another file: in nbd_read_eof() in
> nbd/client.c.
> 
> We also cancel reconnect and requests waiting for reconnect on drained
> begin which is not correct.
> 
> Let's finally drop this always running coroutine and go another way:
> 
> 1. reconnect_attempt() goes to nbd_co_send_request and called under
>    send_mutex
> 
> 2. We do receive headers in request coroutine. But we also should
>    dispatch replies for another pending requests. So,
>    nbd_connection_entry() is turned into nbd_receive_replies(), which
>    does reply dispatching until it receive another request headers, and
>    returns when it receive the requested header.
> 
> 3. All old staff around drained sections and context switch is dropped.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  block/nbd.c  | 376 ++++++++++++++++-----------------------------------
>  nbd/client.c |   2 -
>  2 files changed, 119 insertions(+), 259 deletions(-)
> 

> -static coroutine_fn void nbd_connection_entry(void *opaque)
> +static coroutine_fn void nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
>  {
> -    BDRVNBDState *s = opaque;
>      uint64_t i;
>      int ret = 0;
>      Error *local_err = NULL;
>  
> -    while (qatomic_load_acquire(&s->state) != NBD_CLIENT_QUIT) {
> -        /*
> -         * The NBD client can only really be considered idle when it has
> -         * yielded from qio_channel_readv_all_eof(), waiting for data. This is
> -         * the point where the additional scheduled coroutine entry happens
> -         * after nbd_client_attach_aio_context().
> -         *
> -         * Therefore we keep an additional in_flight reference all the time and
> -         * only drop it temporarily here.
> -         */
> +    i = HANDLE_TO_INDEX(s, handle);
> +    if (s->receive_co) {
> +        assert(s->receive_co != qemu_coroutine_self());
>  
> -        if (nbd_client_connecting(s)) {
> -            nbd_co_reconnect_loop(s);
> -        }
> +        /* Another request coroutine is receiving now */
> +        s->requests[i].receiving = true;
> +        qemu_coroutine_yield();
> +        assert(!s->requests[i].receiving);
>  
> -        if (!nbd_client_connected(s)) {
> -            continue;
> +        if (s->receive_co != qemu_coroutine_self()) {
> +            /*
> +             * We are either failed or done, caller uses nbd_client_connected()
> +             * to distinguish.
> +             */
> +            return;
>          }
> +    }
> +
> +    assert(s->receive_co == 0 || s->receive_co == qemu_coroutine_self());

s/0/NULL/ here

> +    s->receive_co = qemu_coroutine_self();
>  
> +    while (nbd_client_connected(s)) {
>          assert(s->reply.handle == 0);
>          ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, &local_err);
>  
> @@ -522,8 +380,21 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
>              local_err = NULL;
>          }
>          if (ret <= 0) {
> -            nbd_channel_error(s, ret ? ret : -EIO);
> -            continue;
> +            ret = ret ? ret : -EIO;
> +            nbd_channel_error(s, ret);
> +            goto out;
> +        }
> +
> +        if (!nbd_client_connected(s)) {
> +            ret = -EIO;
> +            goto out;
> +        }
> +
> +        i = HANDLE_TO_INDEX(s, s->reply.handle);
> +
> +        if (s->reply.handle == handle) {
> +            ret = 0;
> +            goto out;
>          }
>  
>          /*

I know your followup said there is more work to do before v4, but I
look forward to seeing it.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix
  2021-06-03 20:53   ` Eric Blake
@ 2021-06-04  5:29     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-04  5:29 UTC (permalink / raw)
  To: Eric Blake
  Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den,
	Eduardo Habkost, Cleber Rosa

03.06.2021 23:53, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:09:04AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> We are going to reuse the script to generate a qcow2_ function in
> 
> Is qcow2_ right here?  Looking ahead to patch 30/33, it looks like you
> meant nbd_, at least in the context of this series.
> 

will change. This because the patch was taken from my parallel qcow2 series

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 27/33] block/nbd: split nbd_co_do_establish_connection out of nbd_reconnect_attempt
  2021-06-03 20:04   ` Eric Blake
@ 2021-06-04  5:30     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-04  5:30 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 23:04, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:09:05AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Split out part, that we want to reuse for nbd_open().
> 
> Split out the part that we want to reuse for nbd_open().
> 
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 79 +++++++++++++++++++++++++++--------------------------
>>   1 file changed, 41 insertions(+), 38 deletions(-)
>>
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 15b5921725..59971bfba8 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -361,11 +361,49 @@ static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
>>       return 0;
>>   }
>>   
>> -static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>> +static int nbd_co_do_establish_connection(BlockDriverState *bs, Error **errp)
> 
> Given the _co_ in the name, don't you need a coroutine_fn marker?

Yes, it's strange that I dropped it.

> 
> Otherwise looks sane.
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 28/33] nbd/client-connection: do qio_channel_set_delay(false)
  2021-06-03 20:48   ` Eric Blake
@ 2021-06-04  5:32     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-04  5:32 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 23:48, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:09:06AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> nbd_open() does it (through nbd_establish_connection()).
>> Actually we lost that call on reconnect path in 1dc4718d849e1a1fe
>> "block/nbd: use non-blocking connect: fix vm hang on connect()"
>> when we have introduced reconnect thread.
> 
> s/have introduced/introduced the/
> 
>>
>> Fixes: 1dc4718d849e1a1fe665ce5241ed79048cfa2cfc
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   nbd/client-connection.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
>> index 36d2c7c693..00efff293f 100644
>> --- a/nbd/client-connection.c
>> +++ b/nbd/client-connection.c
>> @@ -126,6 +126,8 @@ static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
>>           return ret;
>>       }
>>   
>> +    qio_channel_set_delay(QIO_CHANNEL(sioc), false);
>> +
>>       if (!info) {
>>           return 0;
>>       }
> 
> Reviewed-by: Eric Blake <eblake@redhat.com>
> 
> Is this bug fix something that can be cherry-picked in isolation, or
> does it depend too much on the rest of the series?
> 

Will try to move it to the start of the series.

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 32/33] block/nbd: safer transition to receiving request
  2021-06-03 21:11   ` Eric Blake
@ 2021-06-04  5:36     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-04  5:36 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

04.06.2021 00:11, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:09:10AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> req->receiving is a flag of request being in one concrete yield point
>> in nbd_co_do_receive_one_chunk().
>>
>> Such kind of boolean flag is always better to unset before scheduling
>> the coroutine, to avoid double scheduling. So, let's be more careful.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 6 ++++--
>>   1 file changed, 4 insertions(+), 2 deletions(-)
>>
> 
>> @@ -614,7 +616,7 @@ static int nbd_co_send_request(BlockDriverState *bs,
>>       if (qiov) {
>>           qio_channel_set_cork(s->ioc, true);
>>           rc = nbd_send_request(s->ioc, request);
>> -        if (nbd_clinet_connected(s) && rc >= 0) {
>> +        if (nbd_client_connected(s) && rc >= 0) {
> 
> Ouch - typo fix in clinet seems unrelated in this fix; please hoist it
> into the correct point in the series so that we don't have the typo in
> the first place.

That also means that I didn't do my favorite "git rebase -x 'make -j9' master". Not good, will fix, of course.

> 
> Otherwise,
> Reviewed-by: Eric Blake <eblake@redhat.com>
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 33/33] block/nbd: drop connection_co
  2021-06-03 21:27   ` Eric Blake
@ 2021-06-04  5:39     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-04  5:39 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

04.06.2021 00:27, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:09:11AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> OK, that's a big rewrite of the logic.
>>
>> Pre-patch we have an always running coroutine - connection_co. It does
>> reply receiving and reconnecting. And it leads to a lot of difficult
>> and unobvious code around drained sections and context switch. We also
>> abuse bs->in_flight counter which is increased for connection_co and
>> temporary decreased in points where we want to allow drained section to
>> begin. One of these place is in another file: in nbd_read_eof() in
>> nbd/client.c.
>>
>> We also cancel reconnect and requests waiting for reconnect on drained
>> begin which is not correct.
>>
>> Let's finally drop this always running coroutine and go another way:
>>
>> 1. reconnect_attempt() goes to nbd_co_send_request and called under
>>     send_mutex
>>
>> 2. We do receive headers in request coroutine. But we also should
>>     dispatch replies for another pending requests. So,
>>     nbd_connection_entry() is turned into nbd_receive_replies(), which
>>     does reply dispatching until it receive another request headers, and
>>     returns when it receive the requested header.
>>
>> 3. All old staff around drained sections and context switch is dropped.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c  | 376 ++++++++++++++++-----------------------------------
>>   nbd/client.c |   2 -
>>   2 files changed, 119 insertions(+), 259 deletions(-)
>>
> 
>> -static coroutine_fn void nbd_connection_entry(void *opaque)
>> +static coroutine_fn void nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
>>   {
>> -    BDRVNBDState *s = opaque;
>>       uint64_t i;
>>       int ret = 0;
>>       Error *local_err = NULL;
>>   
>> -    while (qatomic_load_acquire(&s->state) != NBD_CLIENT_QUIT) {
>> -        /*
>> -         * The NBD client can only really be considered idle when it has
>> -         * yielded from qio_channel_readv_all_eof(), waiting for data. This is
>> -         * the point where the additional scheduled coroutine entry happens
>> -         * after nbd_client_attach_aio_context().
>> -         *
>> -         * Therefore we keep an additional in_flight reference all the time and
>> -         * only drop it temporarily here.
>> -         */
>> +    i = HANDLE_TO_INDEX(s, handle);
>> +    if (s->receive_co) {
>> +        assert(s->receive_co != qemu_coroutine_self());
>>   
>> -        if (nbd_client_connecting(s)) {
>> -            nbd_co_reconnect_loop(s);
>> -        }
>> +        /* Another request coroutine is receiving now */
>> +        s->requests[i].receiving = true;
>> +        qemu_coroutine_yield();
>> +        assert(!s->requests[i].receiving);
>>   
>> -        if (!nbd_client_connected(s)) {
>> -            continue;
>> +        if (s->receive_co != qemu_coroutine_self()) {
>> +            /*
>> +             * We are either failed or done, caller uses nbd_client_connected()
>> +             * to distinguish.
>> +             */
>> +            return;
>>           }
>> +    }
>> +
>> +    assert(s->receive_co == 0 || s->receive_co == qemu_coroutine_self());
> 
> s/0/NULL/ here
> 
>> +    s->receive_co = qemu_coroutine_self();
>>   
>> +    while (nbd_client_connected(s)) {
>>           assert(s->reply.handle == 0);
>>           ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, &local_err);
>>   
>> @@ -522,8 +380,21 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
>>               local_err = NULL;
>>           }
>>           if (ret <= 0) {
>> -            nbd_channel_error(s, ret ? ret : -EIO);
>> -            continue;
>> +            ret = ret ? ret : -EIO;
>> +            nbd_channel_error(s, ret);
>> +            goto out;
>> +        }
>> +
>> +        if (!nbd_client_connected(s)) {
>> +            ret = -EIO;
>> +            goto out;
>> +        }
>> +
>> +        i = HANDLE_TO_INDEX(s, s->reply.handle);
>> +
>> +        if (s->reply.handle == handle) {
>> +            ret = 0;
>> +            goto out;
>>           }
>>   
>>           /*
> 
> I know your followup said there is more work to do before v4, but I
> look forward to seeing it.
> 


Many thanks for reviewing this huge series! Now it's my turn to prepare v4.

-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release()
  2021-06-02 21:27   ` Eric Blake
  2021-06-03 11:59     ` Vladimir Sementsov-Ogievskiy
@ 2021-06-08 10:00     ` Vladimir Sementsov-Ogievskiy
  2021-06-08 14:18       ` Eric Blake
  1 sibling, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-08 10:00 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 00:27, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:08:51AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 43 ++++++++++++++++++++++++++-----------------
>>   1 file changed, 26 insertions(+), 17 deletions(-)
> 
> Commit message said what, but not why.  Presumably this is one more
> bit of refactoring to make the upcoming file split in a later patch
> easier.  But patch 12/33 said it was the last step before a new file,
> and this patch isn't yet at a new file.  Missing some continuity in
> your commit messages?
> 
>>
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 21a4039359..8531d019b2 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -118,7 +118,7 @@ typedef struct BDRVNBDState {
>>       NBDClientConnection *conn;
>>   } BDRVNBDState;
>>   
>> -static void nbd_free_connect_thread(NBDClientConnection *conn);
>> +static void nbd_client_connection_release(NBDClientConnection *conn);
> 
> Is it necessary for a forward declaration, or can you just implement
> the new function prior to its users?
> 

Actually, otherwise we'd need a forward declaration for nbd_client_connection_do_free(). Anyway, none of this really matters before the move to a separate file.
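
To illustrate: whichever of the two comes first, one forward declaration remains. A rough, hypothetical sketch (simplified from the series):

    /* Moving release() up just shifts the problem to do_free(): */
    static void nbd_client_connection_do_free(NBDClientConnection *conn);

    static void nbd_client_connection_release(NBDClientConnection *conn)
    {
        bool do_free;

        WITH_QEMU_LOCK_GUARD(&conn->mutex) {
            conn->detached = true;
            do_free = !conn->running;  /* thread has already finished */
        }

        if (do_free) {
            nbd_client_connection_do_free(conn);
        }
    }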


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 09/33] block/nbd: bs-independent interface for nbd_co_establish_connection()
  2021-06-02 19:14   ` Eric Blake
@ 2021-06-08 10:12     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-08 10:12 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

02.06.2021 22:14, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:08:47AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> We are going to split connection code to separate file. Now we are
> 
> to a separate
> 
>> ready to give nbd_co_establish_connection() clean and bs-independent
>> interface.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> Reviewed-by: Roman Kagan <rvkagan@yandex-team.ru>
>> ---
>>   block/nbd.c | 49 +++++++++++++++++++++++++++++++------------------
>>   1 file changed, 31 insertions(+), 18 deletions(-)
>>
> 
>> -static int coroutine_fn
>> -nbd_co_establish_connection(BlockDriverState *bs, Error **errp)
>> +/*
>> + * Get a new connection in context of @thr:
>> + *   if thread is running, wait for completion
> 
> if the thread is running,...
> 
>> + *   if thread is already succeeded in background, and user didn't get the
> 
> if the thread already succeeded in the background,...
> 
>> + *     result, just return it now
>> + *   otherwise if thread is not running, start a thread and wait for completion
> 
> otherwise, the thread is not running, so start...
> 
>> + */
>> +static coroutine_fn QIOChannelSocket *
>> +nbd_co_establish_connection(NBDConnectThread *thr, Error **errp)
>>   {
>> +    QIOChannelSocket *sioc = NULL;
>>       QemuThread thread;
>> -    BDRVNBDState *s = bs->opaque;
>> -    NBDConnectThread *thr = s->connect_thread;
>> -
>> -    assert(!s->sioc);
>>   
>>       qemu_mutex_lock(&thr->mutex);
>>   
>> +    /*
>> +     * Don't call nbd_co_establish_connection() in several coroutines in
>> +     * parallel. Only one call at once is supported.
>> +     */
>> +    assert(!thr->wait_co);
>> +
>>       if (!thr->running) {
>>           if (thr->sioc) {
>>               /* Previous attempt finally succeeded in background */
>> -            goto out;
>> +            sioc = g_steal_pointer(&thr->sioc);
>> +            qemu_mutex_unlock(&thr->mutex);
> 
> Worth using QEMU_LOCK_GUARD() here?

Refactored together with the other critical sections in patch 15.
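
For context, the guarded form of that critical section looks roughly like this (a simplified sketch; previous_result() is a hypothetical helper, not the exact code of patch 15):

    static QIOChannelSocket *previous_result(NBDConnectThread *thr)
    {
        WITH_QEMU_LOCK_GUARD(&thr->mutex) {
            if (!thr->running && thr->sioc) {
                /* Previous attempt finally succeeded in background */
                return g_steal_pointer(&thr->sioc);
            }
        }
        /*
         * thr->mutex is dropped automatically when the guarded block
         * exits, so the early return above cannot leak the lock.
         */
        return NULL;
    }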

> 
>> +
>> +            return sioc;
>>           }
>> +
>>           thr->running = true;
>>           error_free(thr->err);
>>           thr->err = NULL;
> 
> Reviewed-by: Eric Blake <eblake@redhat.com>
> 


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 17/33] nbd/client-connection: implement connection retry
  2021-05-11 20:54   ` Roman Kagan
@ 2021-06-08 10:24     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-08 10:24 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

11.05.2021 23:54, Roman Kagan wrote:
> On Fri, Apr 16, 2021 at 11:08:55AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Add an option for thread to retry connection until success. We'll use
>> nbd/client-connection both for reconnect and for initial connection in
>> nbd_open(), so we need a possibility to use same NBDClientConnection
>> instance to connect once in nbd_open() and then use retry semantics for
>> reconnect.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/nbd.h     |  2 ++
>>   nbd/client-connection.c | 55 +++++++++++++++++++++++++++++------------
>>   2 files changed, 41 insertions(+), 16 deletions(-)
>>
>> diff --git a/include/block/nbd.h b/include/block/nbd.h
>> index 5d86e6a393..5bb54d831c 100644
>> --- a/include/block/nbd.h
>> +++ b/include/block/nbd.h
>> @@ -409,6 +409,8 @@ const char *nbd_err_lookup(int err);
>>   /* nbd/client-connection.c */
>>   typedef struct NBDClientConnection NBDClientConnection;
>>   
>> +void nbd_client_connection_enable_retry(NBDClientConnection *conn);
>> +
>>   NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>>                                                  bool do_negotiation,
>>                                                  const char *export_name,
>> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
>> index ae4a77f826..002bd91f42 100644
>> --- a/nbd/client-connection.c
>> +++ b/nbd/client-connection.c
>> @@ -36,6 +36,8 @@ struct NBDClientConnection {
>>       NBDExportInfo initial_info;
>>       bool do_negotiation;
>>   
>> +    bool do_retry;
>> +
>>       /*
>>        * Result of last attempt. Valid in FAIL and SUCCESS states.
>>        * If you want to steal error, don't forget to set pointer to NULL.
>> @@ -52,6 +54,15 @@ struct NBDClientConnection {
>>       Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>>   };
>>   
>> +/*
>> + * The function isn't protected by any mutex, so call it when thread is not
>> + * running.
>> + */
>> +void nbd_client_connection_enable_retry(NBDClientConnection *conn)
>> +{
>> +    conn->do_retry = true;
>> +}
>> +
>>   NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>>                                                  bool do_negotiation,
>>                                                  const char *export_name,
>> @@ -144,24 +155,37 @@ static void *connect_thread_func(void *opaque)
>>       NBDClientConnection *conn = opaque;
>>       bool do_free;
>>       int ret;
>> +    uint64_t timeout = 1;
>> +    uint64_t max_timeout = 16;
>> +
>> +    while (true) {
>> +        conn->sioc = qio_channel_socket_new();
>> +
>> +        error_free(conn->err);
>> +        conn->err = NULL;
>> +        conn->updated_info = conn->initial_info;
>> +
>> +        ret = nbd_connect(conn->sioc, conn->saddr,
>> +                          conn->do_negotiation ? &conn->updated_info : NULL,
>> +                          conn->tlscreds, &conn->ioc, &conn->err);
>> +        conn->updated_info.x_dirty_bitmap = NULL;
>> +        conn->updated_info.name = NULL;
>> +
>> +        if (ret < 0) {
>> +            object_unref(OBJECT(conn->sioc));
>> +            conn->sioc = NULL;
>> +            if (conn->do_retry) {
>> +                sleep(timeout);
>> +                if (timeout < max_timeout) {
>> +                    timeout *= 2;
>> +                }
>> +                continue;
>> +            }
>> +        }
> 
> How is it supposed to get canceled?
> 

The next commit does it.
  
>> -    conn->sioc = qio_channel_socket_new();
>> -
>> -    error_free(conn->err);
>> -    conn->err = NULL;
>> -    conn->updated_info = conn->initial_info;
>> -
>> -    ret = nbd_connect(conn->sioc, conn->saddr,
>> -                      conn->do_negotiation ? &conn->updated_info : NULL,
>> -                      conn->tlscreds, &conn->ioc, &conn->err);
>> -    if (ret < 0) {
>> -        object_unref(OBJECT(conn->sioc));
>> -        conn->sioc = NULL;
>> +        break;
>>       }
>>   
>> -    conn->updated_info.x_dirty_bitmap = NULL;
>> -    conn->updated_info.name = NULL;
>> -
>>       WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>>           assert(conn->running);
>>           conn->running = false;
>> @@ -172,7 +196,6 @@ static void *connect_thread_func(void *opaque)
>>           do_free = conn->detached;
>>       }
>>   
>> -
>>       if (do_free) {
>>           nbd_client_connection_do_free(conn);
>>       }
>> -- 
>> 2.29.2
>>


-- 
Best regards,
Vladimir


^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 17/33] nbd/client-connection: implement connection retry
  2021-06-03 17:49     ` Vladimir Sementsov-Ogievskiy
@ 2021-06-08 10:38       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-08 10:38 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 20:49, Vladimir Sementsov-Ogievskiy wrote:
> 03.06.2021 19:17, Eric Blake wrote:
>> On Fri, Apr 16, 2021 at 11:08:55AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>>> Add an option for thread to retry connection until success. We'll use
>>
>> for a thread to retry connection until it succeeds.
>>
>>> nbd/client-connection both for reconnect and for initial connection in
>>> nbd_open(), so we need a possibility to use same NBDClientConnection
>>> instance to connect once in nbd_open() and then use retry semantics for
>>> reconnect.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   include/block/nbd.h     |  2 ++
>>>   nbd/client-connection.c | 55 +++++++++++++++++++++++++++++------------
>>>   2 files changed, 41 insertions(+), 16 deletions(-)
>>>
>>> +++ b/nbd/client-connection.c
>>> @@ -36,6 +36,8 @@ struct NBDClientConnection {
>>>       NBDExportInfo initial_info;
>>>       bool do_negotiation;
>>> +    bool do_retry;
>>> +
>>>       /*
>>>        * Result of last attempt. Valid in FAIL and SUCCESS states.
>>>        * If you want to steal error, don't forget to set pointer to NULL.
>>> @@ -52,6 +54,15 @@ struct NBDClientConnection {
>>>       Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>>>   };
>>> +/*
>>> + * The function isn't protected by any mutex, so call it when thread is not
>>
>> so only call it when the thread is not yet running
>>
>> or maybe even
>>
>> only call it when the client connection attempt has not yet started
>>
>>> + * running.
>>> + */
>>> +void nbd_client_connection_enable_retry(NBDClientConnection *conn)
>>> +{
>>> +    conn->do_retry = true;
>>> +}
>>> +
>>>   NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>>>                                                  bool do_negotiation,
>>>                                                  const char *export_name,
>>> @@ -144,24 +155,37 @@ static void *connect_thread_func(void *opaque)
>>>       NBDClientConnection *conn = opaque;
>>>       bool do_free;
>>>       int ret;
>>> +    uint64_t timeout = 1;
>>> +    uint64_t max_timeout = 16;
>>> +
>>> +    while (true) {
>>> +        conn->sioc = qio_channel_socket_new();
>>> +
>>> +        error_free(conn->err);
>>> +        conn->err = NULL;
>>> +        conn->updated_info = conn->initial_info;
>>> +
>>> +        ret = nbd_connect(conn->sioc, conn->saddr,
>>> +                          conn->do_negotiation ? &conn->updated_info : NULL,
>>> +                          conn->tlscreds, &conn->ioc, &conn->err);
>>> +        conn->updated_info.x_dirty_bitmap = NULL;
>>> +        conn->updated_info.name = NULL;
>>
>> I'm not quite sure I follow the allocation here: if we passed in
>> &conn->updated_info which got modified in-place by nbd_connect, then
>> are we risking a memory leak by ignoring the x_dirty_bitmap and name
>> set by that call?
> 
> Yes, that looks strange :\. Will check when preparing the new version, and either fix it or leave a comment here.

x_dirty_bitmap and name are not set by nbd_connect(); they are IN parameters of nbd_receive_negotiate(), and their allocations are owned by conn->initial_info. So here we've copied the pointers into conn->updated_info, and then we zero them out once they are no longer needed (and, in particular, so that we never return our internal allocations to the user). I'll add a comment here.
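
In code terms, the ownership rule is roughly this (an annotated sketch of the hunk quoted above, comments mine):

    conn->updated_info = conn->initial_info;  /* shallow copy: pointers shared */

    ret = nbd_connect(conn->sioc, conn->saddr,
                      conn->do_negotiation ? &conn->updated_info : NULL,
                      conn->tlscreds, &conn->ioc, &conn->err);

    /*
     * The negotiation only reads x_dirty_bitmap and name, so the
     * allocations are still owned by conn->initial_info.  Clear the
     * copied pointers so we never hand our internal allocations back
     * to the user.
     */
    conn->updated_info.x_dirty_bitmap = NULL;
    conn->updated_info.name = NULL;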

> 
>>
>>> +
>>> +        if (ret < 0) {
>>> +            object_unref(OBJECT(conn->sioc));
>>> +            conn->sioc = NULL;
>>> +            if (conn->do_retry) {
>>> +                sleep(timeout);
>>
>> This is a bare sleep in a function not marked as coroutine_fn.  Do we
>> need to instead use coroutine sleep for better response to an early
>> exit if initialization is taking too long?
> 
> We are in a separate, manually created thread, which knows nothing about coroutines, iothreads, AIO contexts, etc. I think a bare sleep is what should be here.
> 
>>
>>> +                if (timeout < max_timeout) {
>>> +                    timeout *= 2;
>>> +                }
>>> +                continue;
>>> +            }
>>> +        }
>>> -    conn->sioc = qio_channel_socket_new();
>>> -
>>> -    error_free(conn->err);
>>> -    conn->err = NULL;
>>> -    conn->updated_info = conn->initial_info;
>>> -
>>> -    ret = nbd_connect(conn->sioc, conn->saddr,
>>> -                      conn->do_negotiation ? &conn->updated_info : NULL,
>>> -                      conn->tlscreds, &conn->ioc, &conn->err);
>>> -    if (ret < 0) {
>>> -        object_unref(OBJECT(conn->sioc));
>>> -        conn->sioc = NULL;
>>> +        break;
>>>       }
>>> -    conn->updated_info.x_dirty_bitmap = NULL;
>>> -    conn->updated_info.name = NULL;
>>> -
>>>       WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>>>           assert(conn->running);
>>>           conn->running = false;
>>> @@ -172,7 +196,6 @@ static void *connect_thread_func(void *opaque)
>>>           do_free = conn->detached;
>>>       }
>>> -
>>>       if (do_free) {
>>>           nbd_client_connection_do_free(conn);
>>
>> Spurious hunk?
>>
> 
> Will drop.
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release()
  2021-06-08 10:00     ` Vladimir Sementsov-Ogievskiy
@ 2021-06-08 14:18       ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-08 14:18 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Tue, Jun 08, 2021 at 01:00:08PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 03.06.2021 00:27, Eric Blake wrote:
> > On Fri, Apr 16, 2021 at 11:08:51AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > > ---
> > >   block/nbd.c | 43 ++++++++++++++++++++++++++-----------------
> > >   1 file changed, 26 insertions(+), 17 deletions(-)
> > 
> > Commit message said what, but not why.  Presumably this is one more
> > bit of refactoring to make the upcoming file split in a later patch
> > easier.  But patch 12/33 said it was the last step before a new file,
> > and this patch isn't yet at a new file.  Missing some continuity in
> > your commit messages?
> > 
> > > 
> > > diff --git a/block/nbd.c b/block/nbd.c
> > > index 21a4039359..8531d019b2 100644
> > > --- a/block/nbd.c
> > > +++ b/block/nbd.c
> > > @@ -118,7 +118,7 @@ typedef struct BDRVNBDState {
> > >       NBDClientConnection *conn;
> > >   } BDRVNBDState;
> > > -static void nbd_free_connect_thread(NBDClientConnection *conn);
> > > +static void nbd_client_connection_release(NBDClientConnection *conn);
> > 
> > Is it necessary for a forward declaration, or can you just implement
> > the new function prior to its users?
> > 
> 
> Actually, otherwise we'd need a forward declaration for nbd_client_connection_do_free(). Anyway, none of this really matters until the code moves to a separate file.

Fair enough.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 121+ messages in thread

* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-05-13 21:04       ` Paolo Bonzini
  2021-05-14 17:27         ` Roman Kagan
@ 2021-06-08 18:45         ` Vladimir Sementsov-Ogievskiy
  2021-06-09  9:35           ` Paolo Bonzini
  1 sibling, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-08 18:45 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-block
  Cc: qemu-devel, eblake, mreitz, kwolf, rvkagan, den, Stefan Hajnoczi,
	Fam Zheng

14.05.2021 00:04, Paolo Bonzini wrote:
> On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:
>>>>
>>>
>>> I don't understand.  Why doesn't aio_co_enter go through the ctx != qemu_get_current_aio_context() branch and just do aio_co_schedule? That was at least the idea behind aio_co_wake and aio_co_enter.
>>>
>>
>> Because ctx is exactly qemu_get_current_aio_context(), as we are not in iothread but in nbd connection thread. So, qemu_get_current_aio_context() returns qemu_aio_context.
> 
> So the problem is that threads other than the main thread and
> the I/O thread should not return qemu_aio_context.  The vCPU thread
> may need to return it when running with BQL taken, though.
> 
> Something like this (untested):
> 
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 5f342267d5..10fcae1515 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -691,10 +691,13 @@ void aio_co_enter(AioContext *ctx, struct Coroutine *co);
>    * Return the AioContext whose event loop runs in the current thread.
>    *
>    * If called from an IOThread this will be the IOThread's AioContext.  If
> - * called from another thread it will be the main loop AioContext.
> + * called from the main thread or with the "big QEMU lock" taken it
> + * will be the main loop AioContext.
>    */
>   AioContext *qemu_get_current_aio_context(void);
> 
> +void qemu_set_current_aio_context(AioContext *ctx);
> +
>   /**
>    * aio_context_setup:
>    * @ctx: the aio context
> diff --git a/iothread.c b/iothread.c
> index 7f086387be..22b967e77c 100644
> --- a/iothread.c
> +++ b/iothread.c
> @@ -39,11 +39,23 @@ DECLARE_CLASS_CHECKERS(IOThreadClass, IOTHREAD,
>   #define IOTHREAD_POLL_MAX_NS_DEFAULT 0ULL
>   #endif
> 
> -static __thread IOThread *my_iothread;
> +static __thread AioContext *my_aiocontext;
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +    assert(!my_aiocontext);
> +    my_aiocontext = ctx;
> +}
> 
>   AioContext *qemu_get_current_aio_context(void)
>   {
> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
> +    if (my_aiocontext) {
> +        return my_aiocontext;
> +    }
> +    if (qemu_mutex_iothread_locked()) {
> +        return qemu_get_aio_context();
> +    }
> +    return NULL;
>   }
> 
>   static void *iothread_run(void *opaque)
> @@ -56,7 +68,7 @@ static void *iothread_run(void *opaque)
>        * in this new thread uses glib.
>        */
>       g_main_context_push_thread_default(iothread->worker_context);
> -    my_iothread = iothread;
> +    qemu_set_current_aio_context(iothread->ctx);
>       iothread->thread_id = qemu_get_thread_id();
>       qemu_sem_post(&iothread->init_done_sem);
> 
> diff --git a/stubs/iothread.c b/stubs/iothread.c
> index 8cc9e28c55..25ff398894 100644
> --- a/stubs/iothread.c
> +++ b/stubs/iothread.c
> @@ -6,3 +6,7 @@ AioContext *qemu_get_current_aio_context(void)
>   {
>       return qemu_get_aio_context();
>   }
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +}
> diff --git a/tests/unit/iothread.c b/tests/unit/iothread.c
> index afde12b4ef..cab38b3da8 100644
> --- a/tests/unit/iothread.c
> +++ b/tests/unit/iothread.c
> @@ -30,13 +30,26 @@ struct IOThread {
>       bool stopping;
>   };
> 
> -static __thread IOThread *my_iothread;
> +static __thread AioContext *my_aiocontext;
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +    assert(!my_aiocontext);
> +    my_aiocontext = ctx;
> +}
> 
>   AioContext *qemu_get_current_aio_context(void)
>   {
> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
> +    if (my_aiocontext) {
> +        return my_aiocontext;
> +    }
> +    if (qemu_mutex_iothread_locked()) {
> +        return qemu_get_aio_context();
> +    }
> +    return NULL;
>   }
> 
> +
>   static void iothread_init_gcontext(IOThread *iothread)
>   {
>       GSource *source;
> @@ -54,7 +67,7 @@ static void *iothread_run(void *opaque)
> 
>       rcu_register_thread();
> 
> -    my_iothread = iothread;
> +    qemu_set_current_aio_context(iothread->ctx);
>       qemu_mutex_lock(&iothread->init_done_lock);
>       iothread->ctx = aio_context_new(&error_abort);
> 
> diff --git a/util/main-loop.c b/util/main-loop.c
> index d9c55df6f5..4ae5b23e99 100644
> --- a/util/main-loop.c
> +++ b/util/main-loop.c
> @@ -170,6 +170,7 @@ int qemu_init_main_loop(Error **errp)
>       if (!qemu_aio_context) {
>           return -EMFILE;
>       }
> +    qemu_set_current_aio_context(qemu_aio_context);
>       qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
>       gpollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
>       src = aio_get_g_source(qemu_aio_context);
> 
> If it works for you, I can post it as a formal patch.
> 

This doesn't work for iotests: qemu-io goes through the version in the stub. It works if I add:

diff --git a/stubs/iothread.c b/stubs/iothread.c
index 8cc9e28c55..967a01c4f0 100644
--- a/stubs/iothread.c
+++ b/stubs/iothread.c
@@ -2,7 +2,18 @@
  #include "block/aio.h"
  #include "qemu/main-loop.h"
  
+static __thread AioContext *my_aiocontext;
+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+    assert(!my_aiocontext);
+    my_aiocontext = ctx;
+}
+
  AioContext *qemu_get_current_aio_context(void)
  {
-    return qemu_get_aio_context();
+    if (my_aiocontext) {
+        return my_aiocontext;
+    }
+    return NULL;
  }
  


-- 
Best regards,
Vladimir



* Re: [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation
  2021-05-12  6:42     ` Vladimir Sementsov-Ogievskiy
@ 2021-06-08 19:23       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-08 19:23 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

12.05.2021 09:42, Vladimir Sementsov-Ogievskiy wrote:
> 11.05.2021 13:45, Roman Kagan wrote:
>> On Fri, Apr 16, 2021 at 11:08:54AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>>> Add arguments and logic to support nbd negotiation in the same thread
>>> after successful connection.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   include/block/nbd.h     |   9 +++-
>>>   block/nbd.c             |   4 +-
>>>   nbd/client-connection.c | 105 ++++++++++++++++++++++++++++++++++++++--
>>>   3 files changed, 109 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/include/block/nbd.h b/include/block/nbd.h
>>> index 57381be76f..5d86e6a393 100644
>>> --- a/include/block/nbd.h
>>> +++ b/include/block/nbd.h
>>> @@ -409,11 +409,16 @@ const char *nbd_err_lookup(int err);
>>>   /* nbd/client-connection.c */
>>>   typedef struct NBDClientConnection NBDClientConnection;
>>> -NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
>>> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>>> +                                               bool do_negotiation,
>>> +                                               const char *export_name,
>>> +                                               const char *x_dirty_bitmap,
>>> +                                               QCryptoTLSCreds *tlscreds);
>>>   void nbd_client_connection_release(NBDClientConnection *conn);
>>>   QIOChannelSocket *coroutine_fn
>>> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
>>> +nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
>>> +                            QIOChannel **ioc, Error **errp);
>>>   void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection *conn);
>>> diff --git a/block/nbd.c b/block/nbd.c
>>> index 9bd68dcf10..5e63caaf4b 100644
>>> --- a/block/nbd.c
>>> +++ b/block/nbd.c
>>> @@ -361,7 +361,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>>>           s->ioc = NULL;
>>>       }
>>> -    s->sioc = nbd_co_establish_connection(s->conn, NULL);
>>> +    s->sioc = nbd_co_establish_connection(s->conn, NULL, NULL, NULL);
>>>       if (!s->sioc) {
>>>           ret = -ECONNREFUSED;
>>>           goto out;
>>> @@ -2033,7 +2033,7 @@ static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
>>>           goto fail;
>>>       }
>>> -    s->conn = nbd_client_connection_new(s->saddr);
>>> +    s->conn = nbd_client_connection_new(s->saddr, false, NULL, NULL, NULL);
>>>       /*
>>>        * establish TCP connection, return error if it fails
>>> diff --git a/nbd/client-connection.c b/nbd/client-connection.c
>>> index b45a0bd5f6..ae4a77f826 100644
>>> --- a/nbd/client-connection.c
>>> +++ b/nbd/client-connection.c
>>> @@ -30,14 +30,19 @@
>>>   #include "qapi/clone-visitor.h"
>>>   struct NBDClientConnection {
>>> -    /* Initialization constants */
>>> +    /* Initialization constants, never change */
>>>       SocketAddress *saddr; /* address to connect to */
>>> +    QCryptoTLSCreds *tlscreds;
>>> +    NBDExportInfo initial_info;
>>> +    bool do_negotiation;
>>>       /*
>>>        * Result of last attempt. Valid in FAIL and SUCCESS states.
>>>        * If you want to steal error, don't forget to set pointer to NULL.
>>>        */
>>> +    NBDExportInfo updated_info;
>>>       QIOChannelSocket *sioc;
>>> +    QIOChannel *ioc;
>>>       Error *err;
>>>       QemuMutex mutex;
>>> @@ -47,12 +52,25 @@ struct NBDClientConnection {
>>>       Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
>>>   };
>>> -NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
>>> +NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
>>> +                                               bool do_negotiation,
>>> +                                               const char *export_name,
>>> +                                               const char *x_dirty_bitmap,
>>> +                                               QCryptoTLSCreds *tlscreds)
>>>   {
>>>       NBDClientConnection *conn = g_new(NBDClientConnection, 1);
>>> +    object_ref(OBJECT(tlscreds));
>>>       *conn = (NBDClientConnection) {
>>>           .saddr = QAPI_CLONE(SocketAddress, saddr),
>>> +        .tlscreds = tlscreds,
>>> +        .do_negotiation = do_negotiation,
>>> +
>>> +        .initial_info.request_sizes = true,
>>> +        .initial_info.structured_reply = true,
>>> +        .initial_info.base_allocation = true,
>>> +        .initial_info.x_dirty_bitmap = g_strdup(x_dirty_bitmap),
>>> +        .initial_info.name = g_strdup(export_name ?: "")
>>>       };
>>>       qemu_mutex_init(&conn->mutex);
>>> @@ -68,9 +86,59 @@ static void nbd_client_connection_do_free(NBDClientConnection *conn)
>>>       }
>>>       error_free(conn->err);
>>>       qapi_free_SocketAddress(conn->saddr);
>>> +    object_unref(OBJECT(conn->tlscreds));
>>> +    g_free(conn->initial_info.x_dirty_bitmap);
>>> +    g_free(conn->initial_info.name);
>>>       g_free(conn);
>>>   }
>>> +/*
>>> + * Connect to @addr and do NBD negotiation if @info is not null. If @tlscreds
>>> + * given @outioc is provided. @outioc is provided only on success.  The call may
>>
>> s/given/are given/
>> s/provided/returned/g
>>
>>> + * be cancelled in parallel by simply qio_channel_shutdown(sioc).
>>
>> I assume by "in parallel" you mean "from another thread", I'd suggest to
>> spell this out.  I'm also wondering how safe it really is.  In general
>> sockets should be fine with concurrent send()/recv() and shutdown(): the
>> sender/receiver will be woken up with an error.  Dunno if it's true for
>> an arbitrary qio_channel.
> 
> Hmm, good point. I'll look at it.

At least, it should be safe: the documentation of qio_channel_shutdown() says:


      This function is thread-safe, terminates quickly and does not block
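
So the cancel path can be as simple as this sketch (roughly what the shutdown-on-release patch later in this series boils down to), with all cleanup left to the connecting thread itself:

    /* From the cancelling thread, while connect/negotiation is in flight: */
    qio_channel_shutdown(QIO_CHANNEL(conn->sioc),
                         QIO_CHANNEL_SHUTDOWN_BOTH, NULL);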

> 
>>  Also it may be worth documenting that the
>> code path that cancels must leave all the cleanup up to the negotiation
>> code, otherwise it risks conflicting.
>>
>>> + */
>>> +static int nbd_connect(QIOChannelSocket *sioc, SocketAddress *addr,
>>> +                       NBDExportInfo *info, QCryptoTLSCreds *tlscreds,
>>> +                       QIOChannel **outioc, Error **errp)
>>> +{
>>> +    int ret;
>>> +
>>> +    if (outioc) {
>>> +        *outioc = NULL;
>>> +    }
>>> +
>>> +    ret = qio_channel_socket_connect_sync(sioc, addr, errp);
>>> +    if (ret < 0) {
>>> +        return ret;
>>> +    }
>>> +
>>> +    if (!info) {
>>> +        return 0;
>>> +    }
>>> +
>>> +    ret = nbd_receive_negotiate(NULL, QIO_CHANNEL(sioc), tlscreds,
>>> +                                tlscreds ? addr->u.inet.host : NULL,
>>> +                                outioc, info, errp);
>>> +    if (ret < 0) {
>>> +        /*
>>> +         * nbd_receive_negotiate() may setup tls ioc and return it even on
>>> +         * failure path. In this case we should use it instead of original
>>> +         * channel.
>>> +         */
>>> +        if (outioc && *outioc) {
>>> +            qio_channel_close(QIO_CHANNEL(*outioc), NULL);
>>> +            object_unref(OBJECT(*outioc));
>>> +            *outioc = NULL;
>>> +        } else {
>>> +            qio_channel_close(QIO_CHANNEL(sioc), NULL);
>>> +        }
>>> +
>>> +        return ret;
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>>> +
>>>   static void *connect_thread_func(void *opaque)
>>>   {
>>>       NBDClientConnection *conn = opaque;
>>> @@ -81,12 +149,19 @@ static void *connect_thread_func(void *opaque)
>>>       error_free(conn->err);
>>>       conn->err = NULL;
>>> -    ret = qio_channel_socket_connect_sync(conn->sioc, conn->saddr, &conn->err);
>>> +    conn->updated_info = conn->initial_info;
>>> +
>>> +    ret = nbd_connect(conn->sioc, conn->saddr,
>>> +                      conn->do_negotiation ? &conn->updated_info : NULL,
>>> +                      conn->tlscreds, &conn->ioc, &conn->err);
>>>       if (ret < 0) {
>>>           object_unref(OBJECT(conn->sioc));
>>>           conn->sioc = NULL;
>>>       }
>>> +    conn->updated_info.x_dirty_bitmap = NULL;
>>> +    conn->updated_info.name = NULL;
>>> +
>>>       WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>>>           assert(conn->running);
>>>           conn->running = false;
>>> @@ -94,8 +169,8 @@ static void *connect_thread_func(void *opaque)
>>>               aio_co_schedule(NULL, conn->wait_co);
>>>               conn->wait_co = NULL;
>>>           }
>>> +        do_free = conn->detached;
>>>       }
>>> -    do_free = conn->detached;
>>
>> This looks like the response to my earlier comment ;)  This hunk just
>> needs to be squashed into the previous patch.
>>
>>>       if (do_free) {
>>> @@ -131,12 +206,24 @@ void nbd_client_connection_release(NBDClientConnection *conn)
>>>    *   if thread is already succeeded in background, and user didn't get the
>>>    *     result, just return it now
>>>    *   otherwise if thread is not running, start a thread and wait for completion
>>> + *
>>> + * If @info is not NULL, also do nbd-negotiation after successful connection.
>>> + * In this case info is used only as out parameter, and is fully initialized by
>>> + * nbd_co_establish_connection(). "IN" fields of info as well as related only to
>>> + * nbd_receive_export_list() would be zero (see description of NBDExportInfo in
>>> + * include/block/nbd.h).
>>>    */
>>>   QIOChannelSocket *coroutine_fn
>>> -nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>>> +nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
>>> +                            QIOChannel **ioc, Error **errp)
>>>   {
>>>       QemuThread thread;
>>> +    if (conn->do_negotiation) {
>>> +        assert(info);
>>> +        assert(ioc);
>>> +    }
>>> +
>>>       WITH_QEMU_LOCK_GUARD(&conn->mutex) {
>>>           /*
>>>            * Don't call nbd_co_establish_connection() in several coroutines in
>>> @@ -147,6 +234,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>>>           if (!conn->running) {
>>>               if (conn->sioc) {
>>>                   /* Previous attempt finally succeeded in background */
>>> +                if (conn->do_negotiation) {
>>> +                    *ioc = g_steal_pointer(&conn->ioc);
>>> +                    memcpy(info, &conn->updated_info, sizeof(*info));
>>> +                }
>>>                   return g_steal_pointer(&conn->sioc);
>>>               }
>>> @@ -178,6 +269,10 @@ nbd_co_establish_connection(NBDClientConnection *conn, Error **errp)
>>>           } else {
>>>               error_propagate(errp, conn->err);
>>>               conn->err = NULL;
>>> +            if (conn->sioc && conn->do_negotiation) {
>>> +                *ioc = g_steal_pointer(&conn->ioc);
>>> +                memcpy(info, &conn->updated_info, sizeof(*info));
>>> +            }
>>>               return g_steal_pointer(&conn->sioc);
>>>           }
>>>       }
>>> -- 
>>> 2.29.2
>>>
> 
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-06-08 18:45         ` Vladimir Sementsov-Ogievskiy
@ 2021-06-09  9:35           ` Paolo Bonzini
  2021-06-09 10:24             ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 121+ messages in thread
From: Paolo Bonzini @ 2021-06-09  9:35 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, Fam Zheng, qemu-devel, mreitz, rvkagan, Stefan Hajnoczi,
	den, eblake

On 08/06/21 20:45, Vladimir Sementsov-Ogievskiy wrote:
> 14.05.2021 00:04, Paolo Bonzini wrote:
>> On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:
>>>>>
>>>>
>>>> I don't understand.  Why doesn't aio_co_enter go through the ctx != 
>>>> qemu_get_current_aio_context() branch and just do aio_co_schedule? 
>>>> That was at least the idea behind aio_co_wake and aio_co_enter.
>>>>
>>>
>>> Because ctx is exactly qemu_get_current_aio_context(), as we are not 
>>> in iothread but in nbd connection thread. So, 
>>> qemu_get_current_aio_context() returns qemu_aio_context.
>>
>> So the problem is that threads other than the main thread and
>> the I/O thread should not return qemu_aio_context.  The vCPU thread
>> may need to return it when running with BQL taken, though.
>>
>> Something like this (untested):
>>
>> diff --git a/include/block/aio.h b/include/block/aio.h
>> index 5f342267d5..10fcae1515 100644
>> --- a/include/block/aio.h
>> +++ b/include/block/aio.h
>> @@ -691,10 +691,13 @@ void aio_co_enter(AioContext *ctx, struct 
>> Coroutine *co);
>>    * Return the AioContext whose event loop runs in the current thread.
>>    *
>>    * If called from an IOThread this will be the IOThread's 
>> AioContext.  If
>> - * called from another thread it will be the main loop AioContext.
>> + * called from the main thread or with the "big QEMU lock" taken it
>> + * will be the main loop AioContext.
>>    */
>>   AioContext *qemu_get_current_aio_context(void);
>>
>> +void qemu_set_current_aio_context(AioContext *ctx);
>> +
>>   /**
>>    * aio_context_setup:
>>    * @ctx: the aio context
>> diff --git a/iothread.c b/iothread.c
>> index 7f086387be..22b967e77c 100644
>> --- a/iothread.c
>> +++ b/iothread.c
>> @@ -39,11 +39,23 @@ DECLARE_CLASS_CHECKERS(IOThreadClass, IOTHREAD,
>>   #define IOTHREAD_POLL_MAX_NS_DEFAULT 0ULL
>>   #endif
>>
>> -static __thread IOThread *my_iothread;
>> +static __thread AioContext *my_aiocontext;
>> +
>> +void qemu_set_current_aio_context(AioContext *ctx)
>> +{
>> +    assert(!my_aiocontext);
>> +    my_aiocontext = ctx;
>> +}
>>
>>   AioContext *qemu_get_current_aio_context(void)
>>   {
>> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
>> +    if (my_aiocontext) {
>> +        return my_aiocontext;
>> +    }
>> +    if (qemu_mutex_iothread_locked()) {
>> +        return qemu_get_aio_context();
>> +    }
>> +    return NULL;
>>   }
>>
>>   static void *iothread_run(void *opaque)
>> @@ -56,7 +68,7 @@ static void *iothread_run(void *opaque)
>>        * in this new thread uses glib.
>>        */
>>       g_main_context_push_thread_default(iothread->worker_context);
>> -    my_iothread = iothread;
>> +    qemu_set_current_aio_context(iothread->ctx);
>>       iothread->thread_id = qemu_get_thread_id();
>>       qemu_sem_post(&iothread->init_done_sem);
>>
>> diff --git a/stubs/iothread.c b/stubs/iothread.c
>> index 8cc9e28c55..25ff398894 100644
>> --- a/stubs/iothread.c
>> +++ b/stubs/iothread.c
>> @@ -6,3 +6,7 @@ AioContext *qemu_get_current_aio_context(void)
>>   {
>>       return qemu_get_aio_context();
>>   }
>> +
>> +void qemu_set_current_aio_context(AioContext *ctx)
>> +{
>> +}
>> diff --git a/tests/unit/iothread.c b/tests/unit/iothread.c
>> index afde12b4ef..cab38b3da8 100644
>> --- a/tests/unit/iothread.c
>> +++ b/tests/unit/iothread.c
>> @@ -30,13 +30,26 @@ struct IOThread {
>>       bool stopping;
>>   };
>>
>> -static __thread IOThread *my_iothread;
>> +static __thread AioContext *my_aiocontext;
>> +
>> +void qemu_set_current_aio_context(AioContext *ctx)
>> +{
>> +    assert(!my_aiocontext);
>> +    my_aiocontext = ctx;
>> +}
>>
>>   AioContext *qemu_get_current_aio_context(void)
>>   {
>> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
>> +    if (my_aiocontext) {
>> +        return my_aiocontext;
>> +    }
>> +    if (qemu_mutex_iothread_locked()) {
>> +        return qemu_get_aio_context();
>> +    }
>> +    return NULL;
>>   }
>>
>> +
>>   static void iothread_init_gcontext(IOThread *iothread)
>>   {
>>       GSource *source;
>> @@ -54,7 +67,7 @@ static void *iothread_run(void *opaque)
>>
>>       rcu_register_thread();
>>
>> -    my_iothread = iothread;
>> +    qemu_set_current_aio_context(iothread->ctx);
>>       qemu_mutex_lock(&iothread->init_done_lock);
>>       iothread->ctx = aio_context_new(&error_abort);
>>
>> diff --git a/util/main-loop.c b/util/main-loop.c
>> index d9c55df6f5..4ae5b23e99 100644
>> --- a/util/main-loop.c
>> +++ b/util/main-loop.c
>> @@ -170,6 +170,7 @@ int qemu_init_main_loop(Error **errp)
>>       if (!qemu_aio_context) {
>>           return -EMFILE;
>>       }
>> +    qemu_set_current_aio_context(qemu_aio_context);
>>       qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
>>       gpollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
>>       src = aio_get_g_source(qemu_aio_context);
>>
>> If it works for you, I can post it as a formal patch.
>>
> 
> This doesn't work for iotests.. qemu-io goes through version in stub. It 
> works if I add:

Great, thanks.  I'll try the patch, also with the test that broke with 
the earlier version, and post if it works.

Paolo

> diff --git a/stubs/iothread.c b/stubs/iothread.c
> index 8cc9e28c55..967a01c4f0 100644
> --- a/stubs/iothread.c
> +++ b/stubs/iothread.c
> @@ -2,7 +2,18 @@
>   #include "block/aio.h"
>   #include "qemu/main-loop.h"
> 
> +static __thread AioContext *my_aiocontext;
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +    assert(!my_aiocontext);
> +    my_aiocontext = ctx;
> +}
> +
>   AioContext *qemu_get_current_aio_context(void)
>   {
> -    return qemu_get_aio_context();
> +    if (my_aiocontext) {
> +        return my_aiocontext;
> +    }
> +    return NULL;
>   }
> 
> 
> 




* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-06-09  9:35           ` Paolo Bonzini
@ 2021-06-09 10:24             ` Vladimir Sementsov-Ogievskiy
  2021-06-09 12:17               ` Paolo Bonzini
  0 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-09 10:24 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-block
  Cc: qemu-devel, eblake, mreitz, kwolf, rvkagan, den, Stefan Hajnoczi,
	Fam Zheng

09.06.2021 12:35, Paolo Bonzini wrote:
> On 08/06/21 20:45, Vladimir Sementsov-Ogievskiy wrote:
>> 14.05.2021 00:04, Paolo Bonzini wrote:
>>> On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>
>>>>>
>>>>> I don't understand.  Why doesn't aio_co_enter go through the ctx != qemu_get_current_aio_context() branch and just do aio_co_schedule? That was at least the idea behind aio_co_wake and aio_co_enter.
>>>>>
>>>>
>>>> Because ctx is exactly qemu_get_current_aio_context(), as we are not in iothread but in nbd connection thread. So, qemu_get_current_aio_context() returns qemu_aio_context.
>>>
>>> So the problem is that threads other than the main thread and
>>> the I/O thread should not return qemu_aio_context.  The vCPU thread
>>> may need to return it when running with BQL taken, though.
>>>
>>> Something like this (untested):
>>>
>>> diff --git a/include/block/aio.h b/include/block/aio.h
>>> index 5f342267d5..10fcae1515 100644
>>> --- a/include/block/aio.h
>>> +++ b/include/block/aio.h
>>> @@ -691,10 +691,13 @@ void aio_co_enter(AioContext *ctx, struct Coroutine *co);
>>>    * Return the AioContext whose event loop runs in the current thread.
>>>    *
>>>    * If called from an IOThread this will be the IOThread's AioContext.  If
>>> - * called from another thread it will be the main loop AioContext.
>>> + * called from the main thread or with the "big QEMU lock" taken it
>>> + * will be the main loop AioContext.
>>>    */
>>>   AioContext *qemu_get_current_aio_context(void);
>>>
>>> +void qemu_set_current_aio_context(AioContext *ctx);
>>> +
>>>   /**
>>>    * aio_context_setup:
>>>    * @ctx: the aio context
>>> diff --git a/iothread.c b/iothread.c
>>> index 7f086387be..22b967e77c 100644
>>> --- a/iothread.c
>>> +++ b/iothread.c
>>> @@ -39,11 +39,23 @@ DECLARE_CLASS_CHECKERS(IOThreadClass, IOTHREAD,
>>>   #define IOTHREAD_POLL_MAX_NS_DEFAULT 0ULL
>>>   #endif
>>>
>>> -static __thread IOThread *my_iothread;
>>> +static __thread AioContext *my_aiocontext;
>>> +
>>> +void qemu_set_current_aio_context(AioContext *ctx)
>>> +{
>>> +    assert(!my_aiocontext);
>>> +    my_aiocontext = ctx;
>>> +}
>>>
>>>   AioContext *qemu_get_current_aio_context(void)
>>>   {
>>> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
>>> +    if (my_aiocontext) {
>>> +        return my_aiocontext;
>>> +    }
>>> +    if (qemu_mutex_iothread_locked()) {
>>> +        return qemu_get_aio_context();
>>> +    }
>>> +    return NULL;
>>>   }
>>>
>>>   static void *iothread_run(void *opaque)
>>> @@ -56,7 +68,7 @@ static void *iothread_run(void *opaque)
>>>        * in this new thread uses glib.
>>>        */
>>>       g_main_context_push_thread_default(iothread->worker_context);
>>> -    my_iothread = iothread;
>>> +    qemu_set_current_aio_context(iothread->ctx);
>>>       iothread->thread_id = qemu_get_thread_id();
>>>       qemu_sem_post(&iothread->init_done_sem);
>>>
>>> diff --git a/stubs/iothread.c b/stubs/iothread.c
>>> index 8cc9e28c55..25ff398894 100644
>>> --- a/stubs/iothread.c
>>> +++ b/stubs/iothread.c
>>> @@ -6,3 +6,7 @@ AioContext *qemu_get_current_aio_context(void)
>>>   {
>>>       return qemu_get_aio_context();
>>>   }
>>> +
>>> +void qemu_set_current_aio_context(AioContext *ctx)
>>> +{
>>> +}
>>> diff --git a/tests/unit/iothread.c b/tests/unit/iothread.c
>>> index afde12b4ef..cab38b3da8 100644
>>> --- a/tests/unit/iothread.c
>>> +++ b/tests/unit/iothread.c
>>> @@ -30,13 +30,26 @@ struct IOThread {
>>>       bool stopping;
>>>   };
>>>
>>> -static __thread IOThread *my_iothread;
>>> +static __thread AioContext *my_aiocontext;
>>> +
>>> +void qemu_set_current_aio_context(AioContext *ctx)
>>> +{
>>> +    assert(!my_aiocontext);
>>> +    my_aiocontext = ctx;
>>> +}
>>>
>>>   AioContext *qemu_get_current_aio_context(void)
>>>   {
>>> -    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
>>> +    if (my_aiocontext) {
>>> +        return my_aiocontext;
>>> +    }
>>> +    if (qemu_mutex_iothread_locked()) {
>>> +        return qemu_get_aio_context();
>>> +    }
>>> +    return NULL;
>>>   }
>>>
>>> +
>>>   static void iothread_init_gcontext(IOThread *iothread)
>>>   {
>>>       GSource *source;
>>> @@ -54,7 +67,7 @@ static void *iothread_run(void *opaque)
>>>
>>>       rcu_register_thread();
>>>
>>> -    my_iothread = iothread;
>>> +    qemu_set_current_aio_context(iothread->ctx);
>>>       qemu_mutex_lock(&iothread->init_done_lock);
>>>       iothread->ctx = aio_context_new(&error_abort);
>>>
>>> diff --git a/util/main-loop.c b/util/main-loop.c
>>> index d9c55df6f5..4ae5b23e99 100644
>>> --- a/util/main-loop.c
>>> +++ b/util/main-loop.c
>>> @@ -170,6 +170,7 @@ int qemu_init_main_loop(Error **errp)
>>>       if (!qemu_aio_context) {
>>>           return -EMFILE;
>>>       }
>>> +    qemu_set_current_aio_context(qemu_aio_context);
>>>       qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
>>>       gpollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
>>>       src = aio_get_g_source(qemu_aio_context);
>>>
>>> If it works for you, I can post it as a formal patch.
>>>
>>
>> This doesn't work for iotests.. qemu-io goes through version in stub. It works if I add:
> 
> Great, thanks.  I'll try the patch, also with the test that broke with the earlier version, and post if it works.
> 

Thanks, I'll base v4 of the nbd patches on it.

I've now run make check; test-aio-multithread crashes on an assertion:

(gdb) bt
#0  0x00007f4af8d839d5 in raise () from /lib64/libc.so.6
#1  0x00007f4af8d6c8a4 in abort () from /lib64/libc.so.6
#2  0x00007f4af8d6c789 in __assert_fail_base.cold () from /lib64/libc.so.6
#3  0x00007f4af8d7c026 in __assert_fail () from /lib64/libc.so.6
#4  0x000055daebfdab95 in aio_poll (ctx=0x7f4ae0000b60, blocking=true) at ../util/aio-posix.c:567
#5  0x000055daebea096c in iothread_run (opaque=0x55daed81bc90) at ../tests/unit/iothread.c:91
#6  0x000055daebfc6c4a in qemu_thread_start (args=0x55daed7d9940) at ../util/qemu-thread-posix.c:521
#7  0x00007f4af8f1a3f9 in start_thread () from /lib64/libpthread.so.0
#8  0x00007f4af8e47b53 in clone () from /lib64/libc.so.6
(gdb) fr 4
#4  0x000055daebfdab95 in aio_poll (ctx=0x7f4ae0000b60, blocking=true) at ../util/aio-posix.c:567
567         assert(in_aio_context_home_thread(ctx == iohandler_get_aio_context() ?
(gdb) list
562          *
563          * aio_poll() may only be called in the AioContext's thread. iohandler_ctx
564          * is special in that it runs in the main thread, but that thread's context
565          * is qemu_aio_context.
566          */
567         assert(in_aio_context_home_thread(ctx == iohandler_get_aio_context() ?
568                                           qemu_get_aio_context() : ctx));
569
570         qemu_lockcnt_inc(&ctx->list_lock);
571



-- 
Best regards,
Vladimir



* Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx
  2021-06-09 10:24             ` Vladimir Sementsov-Ogievskiy
@ 2021-06-09 12:17               ` Paolo Bonzini
  0 siblings, 0 replies; 121+ messages in thread
From: Paolo Bonzini @ 2021-06-09 12:17 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy, qemu-block
  Cc: kwolf, Fam Zheng, qemu-devel, mreitz, rvkagan, Stefan Hajnoczi,
	den, eblake

On 09/06/21 12:24, Vladimir Sementsov-Ogievskiy wrote:
> Thanks, I'll base v4 of nbd patches on it.
> 
> I now run make check. test-aio-multithread crashes on assertion:

With the patch I've sent it doesn't, so hopefully you can go ahead.

Paolo

> (gdb) bt
> #0  0x00007f4af8d839d5 in raise () from /lib64/libc.so.6
> #1  0x00007f4af8d6c8a4 in abort () from /lib64/libc.so.6
> #2  0x00007f4af8d6c789 in __assert_fail_base.cold () from /lib64/libc.so.6
> #3  0x00007f4af8d7c026 in __assert_fail () from /lib64/libc.so.6
> #4  0x000055daebfdab95 in aio_poll (ctx=0x7f4ae0000b60, blocking=true) 
> at ../util/aio-posix.c:567
> #5  0x000055daebea096c in iothread_run (opaque=0x55daed81bc90) at 
> ../tests/unit/iothread.c:91
> #6  0x000055daebfc6c4a in qemu_thread_start (args=0x55daed7d9940) at 
> ../util/qemu-thread-posix.c:521
> #7  0x00007f4af8f1a3f9 in start_thread () from /lib64/libpthread.so.0
> #8  0x00007f4af8e47b53 in clone () from /lib64/libc.so.6
> (gdb) fr 4
> #4  0x000055daebfdab95 in aio_poll (ctx=0x7f4ae0000b60, blocking=true) 
> at ../util/aio-posix.c:567
> 567         assert(in_aio_context_home_thread(ctx == 
> iohandler_get_aio_context() ?
> (gdb) list
> 562          *
> 563          * aio_poll() may only be called in the AioContext's thread. 
> iohandler_ctx
> 564          * is special in that it runs in the main thread, but that 
> thread's context
> 565          * is qemu_aio_context.
> 566          */
> 567         assert(in_aio_context_home_thread(ctx == 
> iohandler_get_aio_context() ?
> 568                                           qemu_get_aio_context() : 
> ctx));
> 569
> 570         qemu_lockcnt_inc(&ctx->list_lock);




* Re: [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection
  2021-04-28  8:14     ` Vladimir Sementsov-Ogievskiy
@ 2021-06-09 15:49       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-09 15:49 UTC (permalink / raw)
  To: Roman Kagan, qemu-block, qemu-devel, eblake, mreitz, kwolf, den

28.04.2021 11:14, Vladimir Sementsov-Ogievskiy wrote:
>>> +struct NBDClientConnection {
>>> +    /* Initialization constants */
>>> +    SocketAddress *saddr; /* address to connect to */
>>> +
>>> +    /*
>>> +     * Result of last attempt. Valid in FAIL and SUCCESS states.
>>> +     * If you want to steal error, don't forget to set pointer to NULL.
>>> +     */
>>> +    QIOChannelSocket *sioc;
>>> +    Error *err;
>>
>> These two are also manipulated under the mutex.  Consider also updating
>> the comment: both these pointers are to be "stolen" by the caller, with
>> the former being valid when the connection succeeds and the latter
>> otherwise.
>>
> 
> Hmm. I should move the mutex and the "All further" comment above these two fields.
> 
> Ok, I'll think about updating the comment (probably as an additional patch, to keep this one a simple movement). I don't like documenting that they are "stolen by the caller": to me that sounds like the caller is a user of the interface, and the caller of nbd_co_establish_connection() doesn't steal anything, since the structure is private now.

Finally, I decided to improve the comment as part of the "[PATCH v3 08/33] block/nbd: drop thr->state" commit, since the "FAIL and SUCCESS states" wording becomes outdated once we drop those states.

-- 
Best regards,
Vladimir



* Re: [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake()
  2021-06-03 16:29   ` Eric Blake
@ 2021-06-09 17:23     ` Vladimir Sementsov-Ogievskiy
  2021-06-09 18:28       ` Eric Blake
  0 siblings, 1 reply; 121+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2021-06-09 17:23 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-block, qemu-devel, mreitz, kwolf, rvkagan, den

03.06.2021 19:29, Eric Blake wrote:
> On Fri, Apr 16, 2021 at 11:08:57AM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> To be reused in the following patch.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/nbd.c | 99 ++++++++++++++++++++++++++++++-----------------------
>>   1 file changed, 57 insertions(+), 42 deletions(-)
>>
>> diff --git a/block/nbd.c b/block/nbd.c
>> index 5e63caaf4b..03ffe95231 100644
>> --- a/block/nbd.c
>> +++ b/block/nbd.c
>> @@ -318,6 +318,50 @@ static bool nbd_client_connecting_wait(BDRVNBDState *s)
>>       return qatomic_load_acquire(&s->state) == NBD_CLIENT_CONNECTING_WAIT;
>>   }
>>   
>> +/*
>> + * Check s->info updated by negotiation process.
> 
> The parameter name is bs, not s; so this comment is a bit confusing...
> 
>> + * Update @bs correspondingly to new options.
>> + */
>> +static int nbd_handle_updated_info(BlockDriverState *bs, Error **errp)
>> +{
>> +    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
> 
> ...until here.  Maybe rewrite the entire comment as:
> 
> Update @bs with information learned during a completed negotiation
> process.  Return failure if the server's advertised options are
> incompatible with the client's needs.
> 
>> +    int ret;
>> +
>> +    if (s->x_dirty_bitmap) {
>> +        if (!s->info.base_allocation) {
>> +            error_setg(errp, "requested x-dirty-bitmap %s not found",
>> +                       s->x_dirty_bitmap);
>> +            return -EINVAL;
>> +        }
>> +        if (strcmp(s->x_dirty_bitmap, "qemu:allocation-depth") == 0) {
>> +            s->alloc_depth = true;
>> +        }
>> +    }
>> +
>> +    if (s->info.flags & NBD_FLAG_READ_ONLY) {
>> +        ret = bdrv_apply_auto_read_only(bs, "NBD export is read-only", errp);
>> +        if (ret < 0) {
>> +            return ret;
>> +        }
>> +    }
>> +
>> +    if (s->info.flags & NBD_FLAG_SEND_FUA) {
>> +        bs->supported_write_flags = BDRV_REQ_FUA;
>> +        bs->supported_zero_flags |= BDRV_REQ_FUA;
> 
> Code motion, so it is correct, but it looks odd to use = for one
> assignment and |= for the other.  Using |= in both places would be
> more consistent.

Actually, I see bugs here:

1. We should do =, not |=: the info changes on reconnect, so we should reset the supported flags.

2. In-flight requests that are in retry loops are not prepared for the flags changing. I'm afraid a malicious server may even be able to do something bad with that.

Still, let's fix it after this series, to avoid more conflicts.
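
For point 1, a minimal sketch of what I mean (reset before re-applying, inside nbd_handle_updated_info()):

    /* Reset on every (re)connect: the server's flags may have changed. */
    bs->supported_write_flags = 0;
    bs->supported_zero_flags = 0;

    if (s->info.flags & NBD_FLAG_SEND_FUA) {
        bs->supported_write_flags |= BDRV_REQ_FUA;
        bs->supported_zero_flags |= BDRV_REQ_FUA;
    }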

> 
>> +    }
>> +
>> +    if (s->info.flags & NBD_FLAG_SEND_WRITE_ZEROES) {
>> +        bs->supported_zero_flags |= BDRV_REQ_MAY_UNMAP;
>> +        if (s->info.flags & NBD_FLAG_SEND_FAST_ZERO) {
>> +            bs->supported_zero_flags |= BDRV_REQ_NO_FALLBACK;
>> +        }
>> +    }
>> +
>> +    trace_nbd_client_handshake_success(s->export);
>> +
>> +    return 0;
>> +}
>> +
>>   static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
>>   {
>>       int ret;
>> @@ -1579,49 +1623,13 @@ static int nbd_client_handshake(BlockDriverState *bs, Error **errp)
> 
> As updating the comment doesn't affect code correctness,
> Reviewed-by: Eric Blake <eblake@redhat.com>
> 


-- 
Best regards,
Vladimir



* Re: [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake()
  2021-06-09 17:23     ` Vladimir Sementsov-Ogievskiy
@ 2021-06-09 18:28       ` Eric Blake
  0 siblings, 0 replies; 121+ messages in thread
From: Eric Blake @ 2021-06-09 18:28 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: kwolf, qemu-block, qemu-devel, mreitz, rvkagan, den

On Wed, Jun 09, 2021 at 08:23:06PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > +    if (s->x_dirty_bitmap) {
> > > +        if (!s->info.base_allocation) {
> > > +            error_setg(errp, "requested x-dirty-bitmap %s not found",
> > > +                       s->x_dirty_bitmap);
> > > +            return -EINVAL;
> > > +        }
> > > +        if (strcmp(s->x_dirty_bitmap, "qemu:allocation-depth") == 0) {
> > > +            s->alloc_depth = true;
> > > +        }
> > > +    }
> > > +
> > > +    if (s->info.flags & NBD_FLAG_READ_ONLY) {
> > > +        ret = bdrv_apply_auto_read_only(bs, "NBD export is read-only", errp);
> > > +        if (ret < 0) {
> > > +            return ret;
> > > +        }
> > > +    }
> > > +
> > > +    if (s->info.flags & NBD_FLAG_SEND_FUA) {
> > > +        bs->supported_write_flags = BDRV_REQ_FUA;
> > > +        bs->supported_zero_flags |= BDRV_REQ_FUA;
> > 
> > Code motion, so it is correct, but it looks odd to use = for one
> > assignment and |= for the other.  Using |= in both places would be
> > more consistent.
> 
> Actually I see bugs here:
> 
> 1. we should do =, not |=, as on reconnect info changes, so we should reset supported flags.
> 
> 2. in-fligth requests that are in retying loops are not prepared to flags changing. I afraid, that some malicious server may even do some bad thing
> 
> Still, let's fix it after these series. To avoid more conflicts.

Oh, you raise some good points.  And it's not just bs->*flags; qemu as
a server uses constant metacontext ids (base:allocation is always
context 0), but even those might not be stable across reconnect.  For
example, with my proposed patch adding a qemu:joint-allocation
metacontext: if the reason we have to reconnect is that the server was
temporarily bounced while upgrading from qemu 6.0 to 6.1, and the
client was paying attention to qemu:dirty-bitmap:FOO, that context
would now have a different id.

Yeah, making this code safer across potential changes in server
information (whether by failing the reconnect because the reconnected
server dropped something we were previously depending on, by
gracefully handling the downgrade, or ...) is worth leaving for a
later series while we focus on the more immediate issue of making
reconnect itself stable.
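
(Just to make the "fail the reconnect" option concrete, a hypothetical
sketch; note that old_flags, the flags remembered from the previous
connection, is not something the series tracks today:)

    /* After re-negotiation: reject a server that regressed. */
    uint16_t lost = old_flags & ~s->info.flags;

    if (lost & (NBD_FLAG_SEND_FUA | NBD_FLAG_SEND_WRITE_ZEROES)) {
        error_setg(errp, "server lost capabilities across reconnect");
        return -EINVAL;
    }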

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Thread overview: 121+ messages
2021-04-16  8:08 [PATCH v3 00/33] block/nbd: rework client connection Vladimir Sementsov-Ogievskiy
2021-04-16  8:08 ` [PATCH v3 01/33] block/nbd: fix channel object leak Vladimir Sementsov-Ogievskiy
2021-05-24 21:31   ` Eric Blake
2021-05-25  4:47     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:08 ` [PATCH v3 02/33] block/nbd: fix how state is cleared on nbd_open() failure paths Vladimir Sementsov-Ogievskiy
2021-04-21 14:00   ` Roman Kagan
2021-04-21 22:27     ` Vladimir Sementsov-Ogievskiy
2021-04-22  8:49       ` Roman Kagan
2021-06-01 21:39   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 03/33] block/nbd: ensure ->connection_thread is always valid Vladimir Sementsov-Ogievskiy
2021-06-01 21:41   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 04/33] block/nbd: nbd_client_handshake(): fix leak of s->ioc Vladimir Sementsov-Ogievskiy
2021-04-22  8:59   ` Roman Kagan
2021-04-16  8:08 ` [PATCH v3 05/33] block/nbd: BDRVNBDState: drop unused connect_err and connect_status Vladimir Sementsov-Ogievskiy
2021-06-01 21:43   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx Vladimir Sementsov-Ogievskiy
2021-04-23 10:09   ` Roman Kagan
2021-04-26  8:52     ` Vladimir Sementsov-Ogievskiy
2021-05-12  6:56   ` Paolo Bonzini
2021-05-12  7:15     ` Vladimir Sementsov-Ogievskiy
2021-05-13 21:04       ` Paolo Bonzini
2021-05-14 17:27         ` Roman Kagan
2021-05-14 21:19           ` Paolo Bonzini
2021-06-08 18:45         ` Vladimir Sementsov-Ogievskiy
2021-06-09  9:35           ` Paolo Bonzini
2021-06-09 10:24             ` Vladimir Sementsov-Ogievskiy
2021-06-09 12:17               ` Paolo Bonzini
2021-04-16  8:08 ` [PATCH v3 07/33] block/nbd: simplify waking of nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
2021-04-27 21:44   ` Roman Kagan
2021-06-02 19:05   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 08/33] block/nbd: drop thr->state Vladimir Sementsov-Ogievskiy
2021-04-27 22:23   ` Roman Kagan
2021-04-28  8:01     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:08 ` [PATCH v3 09/33] block/nbd: bs-independent interface for nbd_co_establish_connection() Vladimir Sementsov-Ogievskiy
2021-06-02 19:14   ` Eric Blake
2021-06-08 10:12     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:08 ` [PATCH v3 10/33] block/nbd: make nbd_co_establish_connection_cancel() bs-independent Vladimir Sementsov-Ogievskiy
2021-06-02 21:18   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 11/33] block/nbd: rename NBDConnectThread to NBDClientConnection Vladimir Sementsov-Ogievskiy
2021-04-27 22:28   ` Roman Kagan
2021-06-02 21:21   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 12/33] block/nbd: introduce nbd_client_connection_new() Vladimir Sementsov-Ogievskiy
2021-06-02 21:22   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 13/33] block/nbd: introduce nbd_client_connection_release() Vladimir Sementsov-Ogievskiy
2021-04-27 22:35   ` Roman Kagan
2021-04-28  8:06     ` Vladimir Sementsov-Ogievskiy
2021-06-02 21:27   ` Eric Blake
2021-06-03 11:59     ` Vladimir Sementsov-Ogievskiy
2021-06-08 10:00     ` Vladimir Sementsov-Ogievskiy
2021-06-08 14:18       ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 14/33] nbd: move connection code from block/nbd to nbd/client-connection Vladimir Sementsov-Ogievskiy
2021-04-27 22:45   ` Roman Kagan
2021-04-28  8:14     ` Vladimir Sementsov-Ogievskiy
2021-06-09 15:49       ` Vladimir Sementsov-Ogievskiy
2021-06-03 15:55   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 15/33] nbd/client-connection: use QEMU_LOCK_GUARD Vladimir Sementsov-Ogievskiy
2021-04-28  6:08   ` Roman Kagan
2021-04-28  8:17     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:08 ` [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation Vladimir Sementsov-Ogievskiy
2021-05-11 10:45   ` Roman Kagan
2021-05-12  6:42     ` Vladimir Sementsov-Ogievskiy
2021-06-08 19:23       ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:08 ` [PATCH v3 17/33] nbd/client-connection: implement connection retry Vladimir Sementsov-Ogievskiy
2021-05-11 20:54   ` Roman Kagan
2021-06-08 10:24     ` Vladimir Sementsov-Ogievskiy
2021-06-03 16:17   ` Eric Blake
2021-06-03 17:49     ` Vladimir Sementsov-Ogievskiy
2021-06-08 10:38       ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:08 ` [PATCH v3 18/33] nbd/client-connection: shutdown connection on release Vladimir Sementsov-Ogievskiy
2021-05-11 21:08   ` Roman Kagan
2021-05-12  6:39     ` Vladimir Sementsov-Ogievskiy
2021-06-03 16:20   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 19/33] block/nbd: split nbd_handle_updated_info out of nbd_client_handshake() Vladimir Sementsov-Ogievskiy
2021-05-12  8:40   ` Roman Kagan
2021-06-03 16:29   ` Eric Blake
2021-06-09 17:23     ` Vladimir Sementsov-Ogievskiy
2021-06-09 18:28       ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 20/33] block/nbd: use negotiation of NBDClientConnection Vladimir Sementsov-Ogievskiy
2021-06-03 18:05   ` Eric Blake
2021-04-16  8:08 ` [PATCH v3 21/33] qemu-socket: pass monitor link to socket_get_fd directly Vladimir Sementsov-Ogievskiy
2021-04-19  9:34   ` Daniel P. Berrangé
2021-04-19 10:09     ` Vladimir Sementsov-Ogievskiy
2021-05-12  9:40     ` Roman Kagan
2021-05-12  9:59       ` Daniel P. Berrangé
2021-05-13 11:02         ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:09 ` [PATCH v3 22/33] block/nbd: pass monitor directly to connection thread Vladimir Sementsov-Ogievskiy
2021-06-03 18:16   ` Eric Blake
2021-06-03 18:31     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:09 ` [PATCH v3 23/33] block/nbd: nbd_teardown_connection() don't touch s->sioc Vladimir Sementsov-Ogievskiy
2021-06-03 19:04   ` Eric Blake
2021-04-16  8:09 ` [PATCH v3 24/33] block/nbd: drop BDRVNBDState::sioc Vladimir Sementsov-Ogievskiy
2021-06-03 19:12   ` Eric Blake
2021-04-16  8:09 ` [PATCH v3 25/33] nbd/client-connection: return only one io channel Vladimir Sementsov-Ogievskiy
2021-06-03 19:58   ` Eric Blake
2021-04-16  8:09 ` [PATCH v3 26/33] block-coroutine-wrapper: allow non bdrv_ prefix Vladimir Sementsov-Ogievskiy
2021-06-03 20:00   ` Eric Blake
2021-06-03 20:53   ` Eric Blake
2021-06-04  5:29     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:09 ` [PATCH v3 27/33] block/nbd: split nbd_co_do_establish_connection out of nbd_reconnect_attempt Vladimir Sementsov-Ogievskiy
2021-06-03 20:04   ` Eric Blake
2021-06-04  5:30     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:09 ` [PATCH v3 28/33] nbd/client-connection: do qio_channel_set_delay(false) Vladimir Sementsov-Ogievskiy
2021-06-03 20:48   ` Eric Blake
2021-06-04  5:32     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:09 ` [PATCH v3 29/33] nbd/client-connection: add option for non-blocking connection attempt Vladimir Sementsov-Ogievskiy
2021-06-03 20:51   ` Eric Blake
2021-04-16  8:09 ` [PATCH v3 30/33] block/nbd: reuse nbd_co_do_establish_connection() in nbd_open() Vladimir Sementsov-Ogievskiy
2021-06-03 20:57   ` Eric Blake
2021-04-16  8:09 ` [PATCH v3 31/33] block/nbd: add nbd_clinent_connected() helper Vladimir Sementsov-Ogievskiy
2021-05-12  7:06   ` Paolo Bonzini
2021-05-12  7:19     ` Vladimir Sementsov-Ogievskiy
2021-06-03 21:08   ` Eric Blake
2021-04-16  8:09 ` [PATCH v3 32/33] block/nbd: safer transition to receiving request Vladimir Sementsov-Ogievskiy
2021-06-03 21:11   ` Eric Blake
2021-06-04  5:36     ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:09 ` [PATCH v3 33/33] block/nbd: drop connection_co Vladimir Sementsov-Ogievskiy
2021-04-16  8:14   ` Vladimir Sementsov-Ogievskiy
2021-04-16  8:21     ` Vladimir Sementsov-Ogievskiy
2021-06-03 21:27   ` Eric Blake
2021-06-04  5:39     ` Vladimir Sementsov-Ogievskiy
2021-05-12  6:54 ` [PATCH v3 00/33] block/nbd: rework client connection Paolo Bonzini
