qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v9 0/5] vhost-user block device backend implementation
@ 2020-06-14 18:39 Coiby Xu
  2020-06-14 18:39 ` [PATCH v9 1/5] Allow vu_message_read to be replaced Coiby Xu
                   ` (9 more replies)
  0 siblings, 10 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-14 18:39 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, bharatlkmlkvm, Coiby Xu, stefanha

v9
 - move logical block size check function to a utility function
 - fix issues regarding license, coding style, memory deallocation, etc.

v8
 - re-try connecting to socket server to fix asan error
 - fix license naming issue

v7
 - fix docker-test-debug@fedora errors by freeing malloced memory

v6
 - add missing license header and include guard
 - vhost-user server only serve one client one time
 - fix a bug in custom vu_message_read
 - using qemu-storage-daemon to start vhost-user-blk-server
 - a bug fix to pass docker-test-clang@ubuntu

v5:
 * re-use vu_kick_cb in libvhost-user
 * keeping processing VhostUserMsg in the same coroutine until there is
   detachment/attachment of AIOContext
 * Spawn separate coroutine for each VuVirtqElement
 * Other changes including relocating vhost-user-blk-server.c, coding
   style etc.

v4:
 * add object properties in class_init
 * relocate vhost-user-blk-test
 * other changes including using SocketAddress, coding style, etc.

v3:
 * separate generic vhost-user-server code from vhost-user-blk-server
   code
 * re-write vu_message_read and kick hander function as coroutines to
   directly call blk_co_preadv, blk_co_pwritev, etc.
 * add aio_context notifier functions to support multi-threading model
 * other fixes regarding coding style, warning report, etc.

v2:
 * Only enable this feature for Linux because eventfd is a Linux-specific
   feature


This patch series is an implementation of vhost-user block device
backend server, thanks to Stefan and Kevin's guidance.

Vhost-user block device backend server is a UserCreatable object and can be
started using object_add,

 (qemu) object_add vhost-user-blk-server,id=ID,unix-socket=/tmp/vhost-user-blk_vhost.socket,node-name=DRIVE_NAME,writable=off,logical-block-size=512
 (qemu) object_del ID

or appending the "-object" option when starting QEMU,

  $ -object vhost-user-blk-server,id=disk,unix-socket=/tmp/vhost-user-blk_vhost.socket,node-name=DRIVE_NAME,writable=off,logical-block-size=512

Then vhost-user client can connect to the server backend.
For example, QEMU could act as a client,

  $ -m 256 -object memory-backend-memfd,id=mem,size=256M,share=on -numa node,memdev=mem -chardev socket,id=char1,path=/tmp/vhost-user-blk_vhost.socket -device vhost-user-blk-pci,id=blk0,chardev=char1

And guest OS could access this vhost-user block device after mounting it.


Coiby Xu (5):
  Allow vu_message_read to be replaced
  generic vhost user server
  move logical block size check function to a common utility function
  vhost-user block device backend server
  new qTest case to test the vhost-user-blk-server

 block/Makefile.objs                        |   1 +
 block/export/vhost-user-blk-server.c       | 669 +++++++++++++++++++
 block/export/vhost-user-blk-server.h       |  35 +
 contrib/libvhost-user/libvhost-user-glib.c |   2 +-
 contrib/libvhost-user/libvhost-user.c      |  11 +-
 contrib/libvhost-user/libvhost-user.h      |  21 +
 hw/core/qdev-properties.c                  |  18 +-
 softmmu/vl.c                               |   4 +
 tests/Makefile.include                     |   3 +-
 tests/qtest/Makefile.include               |   2 +
 tests/qtest/libqos/vhost-user-blk.c        | 130 ++++
 tests/qtest/libqos/vhost-user-blk.h        |  48 ++
 tests/qtest/libqtest.c                     |  35 +-
 tests/qtest/libqtest.h                     |  17 +
 tests/qtest/vhost-user-blk-test.c          | 739 +++++++++++++++++++++
 tests/vhost-user-bridge.c                  |   2 +
 tools/virtiofsd/fuse_virtio.c              |   4 +-
 util/Makefile.objs                         |   2 +
 util/block-helpers.c                       |  46 ++
 util/block-helpers.h                       |   7 +
 util/vhost-user-server.c                   | 400 +++++++++++
 util/vhost-user-server.h                   |  61 ++
 22 files changed, 2231 insertions(+), 26 deletions(-)
 create mode 100644 block/export/vhost-user-blk-server.c
 create mode 100644 block/export/vhost-user-blk-server.h
 create mode 100644 tests/qtest/libqos/vhost-user-blk.c
 create mode 100644 tests/qtest/libqos/vhost-user-blk.h
 create mode 100644 tests/qtest/vhost-user-blk-test.c
 create mode 100644 util/block-helpers.c
 create mode 100644 util/block-helpers.h
 create mode 100644 util/vhost-user-server.c
 create mode 100644 util/vhost-user-server.h

--
2.27.0



^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v9 1/5] Allow vu_message_read to be replaced
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
@ 2020-06-14 18:39 ` Coiby Xu
  2020-06-18 10:43   ` Kevin Wolf
  2020-06-14 18:39 ` [PATCH v9 2/5] generic vhost user server Coiby Xu
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 51+ messages in thread
From: Coiby Xu @ 2020-06-14 18:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, bharatlkmlkvm, Coiby Xu, stefanha, Dr. David Alan Gilbert

Allow vu_message_read to be replaced by one which will make use of the
QIOChannel functions. Thus reading vhost-user message won't stall the
guest.

Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
---
 contrib/libvhost-user/libvhost-user-glib.c |  2 +-
 contrib/libvhost-user/libvhost-user.c      | 11 ++++++-----
 contrib/libvhost-user/libvhost-user.h      | 21 +++++++++++++++++++++
 tests/vhost-user-bridge.c                  |  2 ++
 tools/virtiofsd/fuse_virtio.c              |  4 ++--
 5 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user-glib.c b/contrib/libvhost-user/libvhost-user-glib.c
index 53f1ca4cdd..0df2ec9271 100644
--- a/contrib/libvhost-user/libvhost-user-glib.c
+++ b/contrib/libvhost-user/libvhost-user-glib.c
@@ -147,7 +147,7 @@ vug_init(VugDev *dev, uint16_t max_queues, int socket,
     g_assert(dev);
     g_assert(iface);
 
-    if (!vu_init(&dev->parent, max_queues, socket, panic, set_watch,
+    if (!vu_init(&dev->parent, max_queues, socket, panic, NULL, set_watch,
                  remove_watch, iface)) {
         return false;
     }
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 3bca996c62..0c7368baa2 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -67,8 +67,6 @@
 /* The version of inflight buffer */
 #define INFLIGHT_VERSION 1
 
-#define VHOST_USER_HDR_SIZE offsetof(VhostUserMsg, payload.u64)
-
 /* The version of the protocol we support */
 #define VHOST_USER_VERSION 1
 #define LIBVHOST_USER_DEBUG 0
@@ -412,7 +410,7 @@ vu_process_message_reply(VuDev *dev, const VhostUserMsg *vmsg)
         goto out;
     }
 
-    if (!vu_message_read(dev, dev->slave_fd, &msg_reply)) {
+    if (!dev->read_msg(dev, dev->slave_fd, &msg_reply)) {
         goto out;
     }
 
@@ -647,7 +645,7 @@ vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg)
     /* Wait for QEMU to confirm that it's registered the handler for the
      * faults.
      */
-    if (!vu_message_read(dev, dev->sock, vmsg) ||
+    if (!dev->read_msg(dev, dev->sock, vmsg) ||
         vmsg->size != sizeof(vmsg->payload.u64) ||
         vmsg->payload.u64 != 0) {
         vu_panic(dev, "failed to receive valid ack for postcopy set-mem-table");
@@ -1653,7 +1651,7 @@ vu_dispatch(VuDev *dev)
     int reply_requested;
     bool need_reply, success = false;
 
-    if (!vu_message_read(dev, dev->sock, &vmsg)) {
+    if (!dev->read_msg(dev, dev->sock, &vmsg)) {
         goto end;
     }
 
@@ -1704,6 +1702,7 @@ vu_deinit(VuDev *dev)
         }
 
         if (vq->kick_fd != -1) {
+            dev->remove_watch(dev, vq->kick_fd);
             close(vq->kick_fd);
             vq->kick_fd = -1;
         }
@@ -1751,6 +1750,7 @@ vu_init(VuDev *dev,
         uint16_t max_queues,
         int socket,
         vu_panic_cb panic,
+        vu_read_msg_cb read_msg,
         vu_set_watch_cb set_watch,
         vu_remove_watch_cb remove_watch,
         const VuDevIface *iface)
@@ -1768,6 +1768,7 @@ vu_init(VuDev *dev,
 
     dev->sock = socket;
     dev->panic = panic;
+    dev->read_msg = read_msg ? read_msg : vu_message_read;
     dev->set_watch = set_watch;
     dev->remove_watch = remove_watch;
     dev->iface = iface;
diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index f30394fab6..d756da8548 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -30,6 +30,8 @@
 
 #define VHOST_MEMORY_MAX_NREGIONS 8
 
+#define VHOST_USER_HDR_SIZE offsetof(VhostUserMsg, payload.u64)
+
 typedef enum VhostSetConfigType {
     VHOST_SET_CONFIG_TYPE_MASTER = 0,
     VHOST_SET_CONFIG_TYPE_MIGRATION = 1,
@@ -205,6 +207,7 @@ typedef uint64_t (*vu_get_features_cb) (VuDev *dev);
 typedef void (*vu_set_features_cb) (VuDev *dev, uint64_t features);
 typedef int (*vu_process_msg_cb) (VuDev *dev, VhostUserMsg *vmsg,
                                   int *do_reply);
+typedef bool (*vu_read_msg_cb) (VuDev *dev, int sock, VhostUserMsg *vmsg);
 typedef void (*vu_queue_set_started_cb) (VuDev *dev, int qidx, bool started);
 typedef bool (*vu_queue_is_processed_in_order_cb) (VuDev *dev, int qidx);
 typedef int (*vu_get_config_cb) (VuDev *dev, uint8_t *config, uint32_t len);
@@ -373,6 +376,23 @@ struct VuDev {
     bool broken;
     uint16_t max_queues;
 
+    /* @read_msg: custom method to read vhost-user message
+     *
+     * Read data from vhost_user socket fd and fill up
+     * the passed VhostUserMsg *vmsg struct.
+     *
+     * If reading fails, it should close the received set of file
+     * descriptors as socket message's auxiliary data.
+     *
+     * For the details, please refer to vu_message_read in libvhost-user.c
+     * which will be used by default if not custom method is provided when
+     * calling vu_init
+     *
+     * Returns: true if vhost-user message successfully received,
+     *          otherwise return false.
+     *
+     */
+    vu_read_msg_cb read_msg;
     /* @set_watch: add or update the given fd to the watch set,
      * call cb when condition is met */
     vu_set_watch_cb set_watch;
@@ -416,6 +436,7 @@ bool vu_init(VuDev *dev,
              uint16_t max_queues,
              int socket,
              vu_panic_cb panic,
+             vu_read_msg_cb read_msg,
              vu_set_watch_cb set_watch,
              vu_remove_watch_cb remove_watch,
              const VuDevIface *iface);
diff --git a/tests/vhost-user-bridge.c b/tests/vhost-user-bridge.c
index 6c3d490611..bd43607a4d 100644
--- a/tests/vhost-user-bridge.c
+++ b/tests/vhost-user-bridge.c
@@ -520,6 +520,7 @@ vubr_accept_cb(int sock, void *ctx)
                  VHOST_USER_BRIDGE_MAX_QUEUES,
                  conn_fd,
                  vubr_panic,
+                 NULL,
                  vubr_set_watch,
                  vubr_remove_watch,
                  &vuiface)) {
@@ -573,6 +574,7 @@ vubr_new(const char *path, bool client)
                      VHOST_USER_BRIDGE_MAX_QUEUES,
                      dev->sock,
                      vubr_panic,
+                     NULL,
                      vubr_set_watch,
                      vubr_remove_watch,
                      &vuiface)) {
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index 3b6d16a041..666945c897 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -980,8 +980,8 @@ int virtio_session_mount(struct fuse_session *se)
     se->vu_socketfd = data_sock;
     se->virtio_dev->se = se;
     pthread_rwlock_init(&se->virtio_dev->vu_dispatch_rwlock, NULL);
-    vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, fv_set_watch,
-            fv_remove_watch, &fv_iface);
+    vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, NULL,
+            fv_set_watch, fv_remove_watch, &fv_iface);
 
     return 0;
 }
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v9 2/5] generic vhost user server
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
  2020-06-14 18:39 ` [PATCH v9 1/5] Allow vu_message_read to be replaced Coiby Xu
@ 2020-06-14 18:39 ` Coiby Xu
  2020-06-18 13:29   ` Kevin Wolf
                     ` (2 more replies)
  2020-06-14 18:39 ` [PATCH v9 3/5] move logical block size check function to a common utility function Coiby Xu
                   ` (7 subsequent siblings)
  9 siblings, 3 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-14 18:39 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, bharatlkmlkvm, Coiby Xu, stefanha

Sharing QEMU devices via vhost-user protocol.

Only one vhost-user client can connect to the server one time.

Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
---
 util/Makefile.objs       |   1 +
 util/vhost-user-server.c | 400 +++++++++++++++++++++++++++++++++++++++
 util/vhost-user-server.h |  61 ++++++
 3 files changed, 462 insertions(+)
 create mode 100644 util/vhost-user-server.c
 create mode 100644 util/vhost-user-server.h

diff --git a/util/Makefile.objs b/util/Makefile.objs
index cc5e37177a..b4d4af06dc 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -66,6 +66,7 @@ util-obj-y += hbitmap.o
 util-obj-y += main-loop.o
 util-obj-y += nvdimm-utils.o
 util-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
+util-obj-$(CONFIG_LINUX) += vhost-user-server.o
 util-obj-y += qemu-coroutine-sleep.o
 util-obj-y += qemu-co-shared-resource.o
 util-obj-y += qemu-sockets.o
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
new file mode 100644
index 0000000000..393beeb6b9
--- /dev/null
+++ b/util/vhost-user-server.c
@@ -0,0 +1,400 @@
+/*
+ * Sharing QEMU devices via vhost-user protocol
+ *
+ * Author: Coiby Xu <coiby.xu@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include <sys/eventfd.h>
+#include "qemu/main-loop.h"
+#include "vhost-user-server.h"
+
+static void vmsg_close_fds(VhostUserMsg *vmsg)
+{
+    int i;
+    for (i = 0; i < vmsg->fd_num; i++) {
+        close(vmsg->fds[i]);
+    }
+}
+
+static void vmsg_unblock_fds(VhostUserMsg *vmsg)
+{
+    int i;
+    for (i = 0; i < vmsg->fd_num; i++) {
+        qemu_set_nonblock(vmsg->fds[i]);
+    }
+}
+
+static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
+                      gpointer opaque);
+
+static void close_client(VuServer *server)
+{
+    vu_deinit(&server->vu_dev);
+    object_unref(OBJECT(server->sioc));
+    object_unref(OBJECT(server->ioc));
+    server->sioc_slave = NULL;
+    object_unref(OBJECT(server->ioc_slave));
+    /*
+     * Set the callback function for network listener so another
+     * vhost-user client can connect to this server
+     */
+    qio_net_listener_set_client_func(server->listener,
+                                     vu_accept,
+                                     server,
+                                     NULL);
+}
+
+static void panic_cb(VuDev *vu_dev, const char *buf)
+{
+    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+
+    if (buf) {
+        error_report("vu_panic: %s", buf);
+    }
+
+    if (server->sioc) {
+        close_client(server);
+        server->sioc = NULL;
+    }
+
+    if (server->device_panic_notifier) {
+        server->device_panic_notifier(server);
+    }
+}
+
+static QIOChannel *slave_io_channel(VuServer *server, int fd,
+                                    Error **local_err)
+{
+    if (server->sioc_slave) {
+        if (fd == server->sioc_slave->fd) {
+            return server->ioc_slave;
+        }
+    } else {
+        server->sioc_slave = qio_channel_socket_new_fd(fd, local_err);
+        if (!*local_err) {
+            server->ioc_slave = QIO_CHANNEL(server->sioc_slave);
+            return server->ioc_slave;
+        }
+    }
+
+    return NULL;
+}
+
+static bool coroutine_fn
+vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
+{
+    struct iovec iov = {
+        .iov_base = (char *)vmsg,
+        .iov_len = VHOST_USER_HDR_SIZE,
+    };
+    int rc, read_bytes = 0;
+    Error *local_err = NULL;
+    /*
+     * Store fds/nfds returned from qio_channel_readv_full into
+     * temporary variables.
+     *
+     * VhostUserMsg is a packed structure, gcc will complain about passing
+     * pointer to a packed structure member if we pass &VhostUserMsg.fd_num
+     * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
+     * thus two temporary variables nfds and fds are used here.
+     */
+    size_t nfds = 0, nfds_t = 0;
+    int *fds_t = NULL;
+    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+    QIOChannel *ioc = NULL;
+
+    if (conn_fd == server->sioc->fd) {
+        ioc = server->ioc;
+    } else {
+        /* Slave communication will also use this function to read msg */
+        ioc = slave_io_channel(server, conn_fd, &local_err);
+    }
+
+    if (!ioc) {
+        error_report_err(local_err);
+        goto fail;
+    }
+
+    assert(qemu_in_coroutine());
+    do {
+        /*
+         * qio_channel_readv_full may have short reads, keeping calling it
+         * until getting VHOST_USER_HDR_SIZE or 0 bytes in total
+         */
+        rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
+        if (rc < 0) {
+            if (rc == QIO_CHANNEL_ERR_BLOCK) {
+                qio_channel_yield(ioc, G_IO_IN);
+                continue;
+            } else {
+                error_report_err(local_err);
+                return false;
+            }
+        }
+        read_bytes += rc;
+        if (nfds_t > 0) {
+            if (nfds + nfds_t > G_N_ELEMENTS(vmsg->fds)) {
+                error_report("A maximum of %d fds are allowed, "
+                             "however got %lu fds now",
+                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
+                goto fail;
+            }
+            memcpy(vmsg->fds + nfds, fds_t,
+                   nfds_t *sizeof(vmsg->fds[0]));
+            nfds += nfds_t;
+            g_free(fds_t);
+        }
+        if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
+            break;
+        }
+        iov.iov_base = (char *)vmsg + read_bytes;
+        iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
+    } while (true);
+
+    vmsg->fd_num = nfds;
+    /* qio_channel_readv_full will make socket fds blocking, unblock them */
+    vmsg_unblock_fds(vmsg);
+    if (vmsg->size > sizeof(vmsg->payload)) {
+        error_report("Error: too big message request: %d, "
+                     "size: vmsg->size: %u, "
+                     "while sizeof(vmsg->payload) = %zu",
+                     vmsg->request, vmsg->size, sizeof(vmsg->payload));
+        goto fail;
+    }
+
+    struct iovec iov_payload = {
+        .iov_base = (char *)&vmsg->payload,
+        .iov_len = vmsg->size,
+    };
+    if (vmsg->size) {
+        rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
+        if (rc == -1) {
+            error_report_err(local_err);
+            goto fail;
+        }
+    }
+
+    return true;
+
+fail:
+    vmsg_close_fds(vmsg);
+
+    return false;
+}
+
+
+static void vu_client_start(VuServer *server);
+static coroutine_fn void vu_client_trip(void *opaque)
+{
+    VuServer *server = opaque;
+
+    while (!server->aio_context_changed && server->sioc) {
+        vu_dispatch(&server->vu_dev);
+    }
+
+    if (server->aio_context_changed && server->sioc) {
+        server->aio_context_changed = false;
+        vu_client_start(server);
+    }
+}
+
+static void vu_client_start(VuServer *server)
+{
+    server->co_trip = qemu_coroutine_create(vu_client_trip, server);
+    aio_co_enter(server->ctx, server->co_trip);
+}
+
+/*
+ * a wrapper for vu_kick_cb
+ *
+ * since aio_dispatch can only pass one user data pointer to the
+ * callback function, pack VuDev and pvt into a struct. Then unpack it
+ * and pass them to vu_kick_cb
+ */
+static void kick_handler(void *opaque)
+{
+    KickInfo *kick_info = opaque;
+    kick_info->cb(kick_info->vu_dev, 0, (void *) kick_info->index);
+}
+
+
+static void
+set_watch(VuDev *vu_dev, int fd, int vu_evt,
+          vu_watch_cb cb, void *pvt)
+{
+
+    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+    g_assert(vu_dev);
+    g_assert(fd >= 0);
+    long index = (intptr_t) pvt;
+    g_assert(cb);
+    KickInfo *kick_info = &server->kick_info[index];
+    if (!kick_info->cb) {
+        kick_info->fd = fd;
+        kick_info->cb = cb;
+        qemu_set_nonblock(fd);
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
+                           NULL, NULL, kick_info);
+        kick_info->vu_dev = vu_dev;
+    }
+}
+
+
+static void remove_watch(VuDev *vu_dev, int fd)
+{
+    VuServer *server;
+    int i;
+    int index = -1;
+    g_assert(vu_dev);
+    g_assert(fd >= 0);
+
+    server = container_of(vu_dev, VuServer, vu_dev);
+    for (i = 0; i < vu_dev->max_queues; i++) {
+        if (server->kick_info[i].fd == fd) {
+            index = i;
+            break;
+        }
+    }
+
+    if (index == -1) {
+        return;
+    }
+    server->kick_info[i].cb = NULL;
+    aio_set_fd_handler(server->ioc->ctx, fd, false, NULL, NULL, NULL, NULL);
+}
+
+
+static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
+                      gpointer opaque)
+{
+    VuServer *server = opaque;
+
+    if (server->sioc) {
+        warn_report("Only one vhost-user client is allowed to "
+                    "connect the server one time");
+        return;
+    }
+
+    if (!vu_init(&server->vu_dev, server->max_queues, sioc->fd, panic_cb,
+                 vu_message_read, set_watch, remove_watch, server->vu_iface)) {
+        error_report("Failed to initialized libvhost-user");
+        return;
+    }
+
+    /*
+     * Unset the callback function for network listener to make another
+     * vhost-user client keeping waiting until this client disconnects
+     */
+    qio_net_listener_set_client_func(server->listener,
+                                     NULL,
+                                     NULL,
+                                     NULL);
+    server->sioc = sioc;
+    server->kick_info = g_new0(KickInfo, server->max_queues);
+    /*
+     * Increase the object reference, so sioc will not freed by
+     * qio_net_listener_channel_func which will call object_unref(OBJECT(sioc))
+     */
+    object_ref(OBJECT(server->sioc));
+    qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
+    server->ioc = QIO_CHANNEL(sioc);
+    object_ref(OBJECT(server->ioc));
+    qio_channel_attach_aio_context(server->ioc, server->ctx);
+    qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
+    vu_client_start(server);
+}
+
+
+void vhost_user_server_stop(VuServer *server)
+{
+    if (!server) {
+        return;
+    }
+
+    if (server->sioc) {
+        close_client(server);
+        object_unref(OBJECT(server->sioc));
+    }
+
+    if (server->listener) {
+        qio_net_listener_disconnect(server->listener);
+        object_unref(OBJECT(server->listener));
+    }
+
+    g_free(server->kick_info);
+}
+
+static void detach_context(VuServer *server)
+{
+    int i;
+    AioContext *ctx = server->ioc->ctx;
+    qio_channel_detach_aio_context(server->ioc);
+    for (i = 0; i < server->vu_dev.max_queues; i++) {
+        if (server->kick_info[i].cb) {
+            aio_set_fd_handler(ctx, server->kick_info[i].fd, false, NULL,
+                               NULL, NULL, NULL);
+        }
+    }
+}
+
+static void attach_context(VuServer *server, AioContext *ctx)
+{
+    int i;
+    qio_channel_attach_aio_context(server->ioc, ctx);
+    server->aio_context_changed = true;
+    if (server->co_trip) {
+        aio_co_schedule(ctx, server->co_trip);
+    }
+    for (i = 0; i < server->vu_dev.max_queues; i++) {
+        if (server->kick_info[i].cb) {
+            aio_set_fd_handler(ctx, server->kick_info[i].fd, false,
+                               kick_handler, NULL, NULL,
+                               &server->kick_info[i]);
+        }
+    }
+}
+
+void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server)
+{
+    server->ctx = ctx ? ctx : qemu_get_aio_context();
+    if (!server->sioc) {
+        return;
+    }
+    if (ctx) {
+        attach_context(server, ctx);
+    } else {
+        detach_context(server);
+    }
+}
+
+
+bool vhost_user_server_start(VuServer *server,
+                             SocketAddress *socket_addr,
+                             AioContext *ctx,
+                             uint16_t max_queues,
+                             DevicePanicNotifierFn *device_panic_notifier,
+                             const VuDevIface *vu_iface,
+                             Error **errp)
+{
+    server->listener = qio_net_listener_new();
+    if (qio_net_listener_open_sync(server->listener, socket_addr, 1,
+                                   errp) < 0) {
+        return false;
+    }
+
+    qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
+
+    server->vu_iface = vu_iface;
+    server->max_queues = max_queues;
+    server->ctx = ctx;
+    server->device_panic_notifier = device_panic_notifier;
+    qio_net_listener_set_client_func(server->listener,
+                                     vu_accept,
+                                     server,
+                                     NULL);
+
+    return true;
+}
diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
new file mode 100644
index 0000000000..5baf58f96a
--- /dev/null
+++ b/util/vhost-user-server.h
@@ -0,0 +1,61 @@
+/*
+ * Sharing QEMU devices via vhost-user protocol
+ *
+ * Author: Coiby Xu <coiby.xu@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+
+#ifndef VHOST_USER_SERVER_H
+#define VHOST_USER_SERVER_H
+
+#include "contrib/libvhost-user/libvhost-user.h"
+#include "io/channel-socket.h"
+#include "io/channel-file.h"
+#include "io/net-listener.h"
+#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "standard-headers/linux/virtio_blk.h"
+
+typedef struct KickInfo {
+    VuDev *vu_dev;
+    int fd; /*kick fd*/
+    long index; /*queue index*/
+    vu_watch_cb cb;
+} KickInfo;
+
+typedef struct VuServer {
+    QIONetListener *listener;
+    AioContext *ctx;
+    void (*device_panic_notifier)(struct VuServer *server) ;
+    int max_queues;
+    const VuDevIface *vu_iface;
+    VuDev vu_dev;
+    QIOChannel *ioc; /* The I/O channel with the client */
+    QIOChannelSocket *sioc; /* The underlying data channel with the client */
+    /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
+    QIOChannel *ioc_slave;
+    QIOChannelSocket *sioc_slave;
+    Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
+    KickInfo *kick_info; /* an array with the length of the queue number */
+    /* restart coroutine co_trip if AIOContext is changed */
+    bool aio_context_changed;
+} VuServer;
+
+
+typedef void DevicePanicNotifierFn(struct VuServer *server);
+
+bool vhost_user_server_start(VuServer *server,
+                             SocketAddress *unix_socket,
+                             AioContext *ctx,
+                             uint16_t max_queues,
+                             DevicePanicNotifierFn *device_panic_notifier,
+                             const VuDevIface *vu_iface,
+                             Error **errp);
+
+void vhost_user_server_stop(VuServer *server);
+
+void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server);
+
+#endif /* VHOST_USER_SERVER_H */
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v9 3/5] move logical block size check function to a common utility function
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
  2020-06-14 18:39 ` [PATCH v9 1/5] Allow vu_message_read to be replaced Coiby Xu
  2020-06-14 18:39 ` [PATCH v9 2/5] generic vhost user server Coiby Xu
@ 2020-06-14 18:39 ` Coiby Xu
  2020-06-18 13:44   ` Kevin Wolf
  2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
  2020-06-14 18:39 ` [PATCH v9 4/5] vhost-user block device backend server Coiby Xu
                   ` (6 subsequent siblings)
  9 siblings, 2 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-14 18:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, Daniel P. Berrangé,
	Eduardo Habkost, Coiby Xu, bharatlkmlkvm, stefanha,
	Paolo Bonzini

Move logical block size check function in hw/core/qdev-properties.c:set_blocksize() to util/block-helpers.c

Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
---
 hw/core/qdev-properties.c | 18 +++------------
 util/Makefile.objs        |  1 +
 util/block-helpers.c      | 46 +++++++++++++++++++++++++++++++++++++++
 util/block-helpers.h      |  7 ++++++
 4 files changed, 57 insertions(+), 15 deletions(-)
 create mode 100644 util/block-helpers.c
 create mode 100644 util/block-helpers.h

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index cc924815da..a4a6aa5204 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -14,6 +14,7 @@
 #include "qapi/visitor.h"
 #include "chardev/char.h"
 #include "qemu/uuid.h"
+#include "util/block-helpers.h"
 
 void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
                                   Error **errp)
@@ -736,8 +737,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     Property *prop = opaque;
     uint16_t value, *ptr = qdev_get_prop_ptr(dev, prop);
     Error *local_err = NULL;
-    const int64_t min = 512;
-    const int64_t max = 32768;
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -749,21 +748,10 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         error_propagate(errp, local_err);
         return;
     }
-    /* value of 0 means "unset" */
-    if (value && (value < min || value > max)) {
-        error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
-                   dev->id ? : "", name, (int64_t)value, min, max);
+    check_logical_block_size(dev->id ? : "", name, value, errp);
+    if (errp) {
         return;
     }
-
-    /* We rely on power-of-2 blocksizes for bitmasks */
-    if ((value & (value - 1)) != 0) {
-        error_setg(errp,
-                  "Property %s.%s doesn't take value '%" PRId64 "', it's not a power of 2",
-                  dev->id ?: "", name, (int64_t)value);
-        return;
-    }
-
     *ptr = value;
 }
 
diff --git a/util/Makefile.objs b/util/Makefile.objs
index b4d4af06dc..fa5380ddab 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -66,6 +66,7 @@ util-obj-y += hbitmap.o
 util-obj-y += main-loop.o
 util-obj-y += nvdimm-utils.o
 util-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
+util-obj-y += block-helpers.o
 util-obj-$(CONFIG_LINUX) += vhost-user-server.o
 util-obj-y += qemu-coroutine-sleep.o
 util-obj-y += qemu-co-shared-resource.o
diff --git a/util/block-helpers.c b/util/block-helpers.c
new file mode 100644
index 0000000000..d31309cc0e
--- /dev/null
+++ b/util/block-helpers.c
@@ -0,0 +1,46 @@
+/*
+ * Block utility functions
+ *
+ * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qapi/qmp/qerror.h"
+#include "block-helpers.h"
+
+/*
+ * Logical block size input validation
+ *
+ * The size should meet the following conditions:
+ * 1. min=512
+ * 2. max=32768
+ * 3. a power of 2
+ *
+ *  Moved from hw/core/qdev-properties.c:set_blocksize()
+ */
+void check_logical_block_size(const char *id, const char *name, uint16_t value,
+                     Error **errp)
+{
+    const int64_t min = 512;
+    const int64_t max = 32768;
+
+    /* value of 0 means "unset" */
+    if (value && (value < min || value > max)) {
+        error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
+                   id, name, (int64_t)value, min, max);
+        return;
+    }
+
+    /* We rely on power-of-2 blocksizes for bitmasks */
+    if ((value & (value - 1)) != 0) {
+        error_setg(errp,
+                   "Property %s.%s doesn't take value '%" PRId64
+                   "', it's not a power of 2",
+                   id, name, (int64_t)value);
+        return;
+    }
+}
diff --git a/util/block-helpers.h b/util/block-helpers.h
new file mode 100644
index 0000000000..f06be282a1
--- /dev/null
+++ b/util/block-helpers.h
@@ -0,0 +1,7 @@
+#ifndef BLOCK_HELPERS_H
+#define BLOCK_HELPERS_H
+
+void check_logical_block_size(const char *id, const char *name, uint16_t value,
+                     Error **errp);
+
+#endif /* BLOCK_HELPERS_H */
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v9 4/5] vhost-user block device backend server
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
                   ` (2 preceding siblings ...)
  2020-06-14 18:39 ` [PATCH v9 3/5] move logical block size check function to a common utility function Coiby Xu
@ 2020-06-14 18:39 ` Coiby Xu
  2020-06-18 15:57   ` Kevin Wolf
  2020-06-19 12:03   ` [PATCH 1/2] vhost-user-blk-server: adjust vhost_user_server_set_aio_context() arguments Stefan Hajnoczi
  2020-06-14 18:39 ` [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server Coiby Xu
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-14 18:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, open list:Block layer core, Coiby Xu, Max Reitz,
	bharatlkmlkvm, stefanha, Paolo Bonzini

By making use of libvhost-user, block device drive can be shared to
the connected vhost-user client. Only one client can connect to the
server one time.

Since vhost-user-server needs a block drive to be created first, delay
the creation of this object.

Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
---
 block/Makefile.objs                  |   1 +
 block/export/vhost-user-blk-server.c | 669 +++++++++++++++++++++++++++
 block/export/vhost-user-blk-server.h |  35 ++
 softmmu/vl.c                         |   4 +
 4 files changed, 709 insertions(+)
 create mode 100644 block/export/vhost-user-blk-server.c
 create mode 100644 block/export/vhost-user-blk-server.h

diff --git a/block/Makefile.objs b/block/Makefile.objs
index 3635b6b4c1..0eb7eff470 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -24,6 +24,7 @@ block-obj-y += throttle-groups.o
 block-obj-$(CONFIG_LINUX) += nvme.o
 
 block-obj-y += nbd.o
+block-obj-$(CONFIG_LINUX) += export/vhost-user-blk-server.o ../contrib/libvhost-user/libvhost-user.o
 block-obj-$(CONFIG_SHEEPDOG) += sheepdog.o
 block-obj-$(CONFIG_LIBISCSI) += iscsi.o
 block-obj-$(if $(CONFIG_LIBISCSI),y,n) += iscsi-opts.o
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
new file mode 100644
index 0000000000..bbf2ceaa9b
--- /dev/null
+++ b/block/export/vhost-user-blk-server.c
@@ -0,0 +1,669 @@
+/*
+ * Sharing QEMU block devices via vhost-user protocal
+ *
+ * Author: Coiby Xu <coiby.xu@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include "block/block.h"
+#include "vhost-user-blk-server.h"
+#include "qapi/error.h"
+#include "qom/object_interfaces.h"
+#include "sysemu/block-backend.h"
+#include "util/block-helpers.h"
+
+enum {
+    VHOST_USER_BLK_MAX_QUEUES = 1,
+};
+struct virtio_blk_inhdr {
+    unsigned char status;
+};
+
+
+typedef struct VuBlockReq {
+    VuVirtqElement *elem;
+    int64_t sector_num;
+    size_t size;
+    struct virtio_blk_inhdr *in;
+    struct virtio_blk_outhdr out;
+    VuServer *server;
+    struct VuVirtq *vq;
+} VuBlockReq;
+
+
+static void vu_block_req_complete(VuBlockReq *req)
+{
+    VuDev *vu_dev = &req->server->vu_dev;
+
+    /* IO size with 1 extra status byte */
+    vu_queue_push(vu_dev, req->vq, req->elem, req->size + 1);
+    vu_queue_notify(vu_dev, req->vq);
+
+    if (req->elem) {
+        free(req->elem);
+    }
+
+    g_free(req);
+}
+
+static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
+{
+    return container_of(server, VuBlockDev, vu_server);
+}
+
+static int coroutine_fn
+vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
+                              uint32_t iovcnt, uint32_t type)
+{
+    struct virtio_blk_discard_write_zeroes desc;
+    ssize_t size = iov_to_buf(iov, iovcnt, 0, &desc, sizeof(desc));
+    if (unlikely(size != sizeof(desc))) {
+        error_report("Invalid size %ld, expect %ld", size, sizeof(desc));
+        return -EINVAL;
+    }
+
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
+    uint64_t range[2] = { le64_to_cpu(desc.sector) << 9,
+                          le32_to_cpu(desc.num_sectors) << 9 };
+    if (type == VIRTIO_BLK_T_DISCARD) {
+        if (blk_co_pdiscard(vdev_blk->backend, range[0], range[1]) == 0) {
+            return 0;
+        }
+    } else if (type == VIRTIO_BLK_T_WRITE_ZEROES) {
+        if (blk_co_pwrite_zeroes(vdev_blk->backend,
+                                 range[0], range[1], 0) == 0) {
+            return 0;
+        }
+    }
+
+    return -EINVAL;
+}
+
+
+static void coroutine_fn vu_block_flush(VuBlockReq *req)
+{
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
+    BlockBackend *backend = vdev_blk->backend;
+    blk_co_flush(backend);
+}
+
+
+struct req_data {
+    VuServer *server;
+    VuVirtq *vq;
+    VuVirtqElement *elem;
+};
+
+static void coroutine_fn vu_block_virtio_process_req(void *opaque)
+{
+    struct req_data *data = opaque;
+    VuServer *server = data->server;
+    VuVirtq *vq = data->vq;
+    VuVirtqElement *elem = data->elem;
+    uint32_t type;
+    VuBlockReq *req;
+
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    BlockBackend *backend = vdev_blk->backend;
+
+    struct iovec *in_iov = elem->in_sg;
+    struct iovec *out_iov = elem->out_sg;
+    unsigned in_num = elem->in_num;
+    unsigned out_num = elem->out_num;
+    /* refer to hw/block/virtio_blk.c */
+    if (elem->out_num < 1 || elem->in_num < 1) {
+        error_report("virtio-blk request missing headers");
+        free(elem);
+        return;
+    }
+
+    req = g_new0(VuBlockReq, 1);
+    req->server = server;
+    req->vq = vq;
+    req->elem = elem;
+
+    if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
+                            sizeof(req->out)) != sizeof(req->out))) {
+        error_report("virtio-blk request outhdr too short");
+        goto err;
+    }
+
+    iov_discard_front(&out_iov, &out_num, sizeof(req->out));
+
+    if (in_iov[in_num - 1].iov_len < sizeof(struct virtio_blk_inhdr)) {
+        error_report("virtio-blk request inhdr too short");
+        goto err;
+    }
+
+    /* We always touch the last byte, so just see how big in_iov is.  */
+    req->in = (void *)in_iov[in_num - 1].iov_base
+              + in_iov[in_num - 1].iov_len
+              - sizeof(struct virtio_blk_inhdr);
+    iov_discard_back(in_iov, &in_num, sizeof(struct virtio_blk_inhdr));
+
+
+    type = le32_to_cpu(req->out.type);
+    switch (type & ~VIRTIO_BLK_T_BARRIER) {
+    case VIRTIO_BLK_T_IN:
+    case VIRTIO_BLK_T_OUT: {
+        ssize_t ret = 0;
+        bool is_write = type & VIRTIO_BLK_T_OUT;
+        req->sector_num = le64_to_cpu(req->out.sector);
+
+        int64_t offset = req->sector_num * vdev_blk->blk_size;
+        QEMUIOVector qiov;
+        if (is_write) {
+            qemu_iovec_init_external(&qiov, out_iov, out_num);
+            ret = blk_co_pwritev(backend, offset, qiov.size,
+                                 &qiov, 0);
+        } else {
+            qemu_iovec_init_external(&qiov, in_iov, in_num);
+            ret = blk_co_preadv(backend, offset, qiov.size,
+                                &qiov, 0);
+        }
+        if (ret >= 0) {
+            req->in->status = VIRTIO_BLK_S_OK;
+        } else {
+            req->in->status = VIRTIO_BLK_S_IOERR;
+        }
+        break;
+    }
+    case VIRTIO_BLK_T_FLUSH:
+        vu_block_flush(req);
+        req->in->status = VIRTIO_BLK_S_OK;
+        break;
+    case VIRTIO_BLK_T_GET_ID: {
+        size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
+                          VIRTIO_BLK_ID_BYTES);
+        snprintf(elem->in_sg[0].iov_base, size, "%s", "vhost_user_blk_server");
+        req->in->status = VIRTIO_BLK_S_OK;
+        req->size = elem->in_sg[0].iov_len;
+        break;
+    }
+    case VIRTIO_BLK_T_DISCARD:
+    case VIRTIO_BLK_T_WRITE_ZEROES: {
+        int rc;
+        rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
+                                           out_num, type);
+        if (rc == 0) {
+            req->in->status = VIRTIO_BLK_S_OK;
+        } else {
+            req->in->status = VIRTIO_BLK_S_IOERR;
+        }
+        break;
+    }
+    default:
+        req->in->status = VIRTIO_BLK_S_UNSUPP;
+        break;
+    }
+
+    vu_block_req_complete(req);
+    return;
+
+err:
+    free(elem);
+    g_free(req);
+    return;
+}
+
+
+
+static void vu_block_process_vq(VuDev *vu_dev, int idx)
+{
+    VuServer *server;
+    VuVirtq *vq;
+
+    server = container_of(vu_dev, VuServer, vu_dev);
+    assert(server);
+
+    vq = vu_get_queue(vu_dev, idx);
+    assert(vq);
+    VuVirtqElement *elem;
+    while (1) {
+        elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
+                                    sizeof(VuBlockReq));
+        if (elem) {
+            struct req_data req_data = {
+                .server = server,
+                .vq = vq,
+                .elem = elem
+            };
+            Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
+                                                  &req_data);
+            aio_co_enter(server->ioc->ctx, co);
+        } else {
+            break;
+        }
+    }
+}
+
+static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
+{
+    VuVirtq *vq;
+
+    assert(vu_dev);
+
+    vq = vu_get_queue(vu_dev, idx);
+    vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
+}
+
+static uint64_t vu_block_get_features(VuDev *dev)
+{
+    uint64_t features;
+    VuServer *server = container_of(dev, VuServer, vu_dev);
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
+               1ull << VIRTIO_BLK_F_SEG_MAX |
+               1ull << VIRTIO_BLK_F_TOPOLOGY |
+               1ull << VIRTIO_BLK_F_BLK_SIZE |
+               1ull << VIRTIO_BLK_F_FLUSH |
+               1ull << VIRTIO_BLK_F_DISCARD |
+               1ull << VIRTIO_BLK_F_WRITE_ZEROES |
+               1ull << VIRTIO_BLK_F_CONFIG_WCE |
+               1ull << VIRTIO_F_VERSION_1 |
+               1ull << VIRTIO_RING_F_INDIRECT_DESC |
+               1ull << VIRTIO_RING_F_EVENT_IDX |
+               1ull << VHOST_USER_F_PROTOCOL_FEATURES;
+
+    if (!vdev_blk->writable) {
+        features |= 1ull << VIRTIO_BLK_F_RO;
+    }
+
+    return features;
+}
+
+static uint64_t vu_block_get_protocol_features(VuDev *dev)
+{
+    return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
+           1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
+}
+
+static int
+vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
+{
+    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    memcpy(config, &vdev_blk->blkcfg, len);
+
+    return 0;
+}
+
+static int
+vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
+                    uint32_t offset, uint32_t size, uint32_t flags)
+{
+    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    uint8_t wce;
+
+    /* don't support live migration */
+    if (flags != VHOST_SET_CONFIG_TYPE_MASTER) {
+        return -EINVAL;
+    }
+
+
+    if (offset != offsetof(struct virtio_blk_config, wce) ||
+        size != 1) {
+        return -EINVAL;
+    }
+
+    wce = *data;
+    if (wce == vdev_blk->blkcfg.wce) {
+        /* Do nothing as same with old configuration */
+        return 0;
+    }
+
+    vdev_blk->blkcfg.wce = wce;
+    blk_set_enable_write_cache(vdev_blk->backend, wce);
+    return 0;
+}
+
+
+/*
+ * When the client disconnects, it sends a VHOST_USER_NONE request
+ * and vu_process_message will simple call exit which cause the VM
+ * to exit abruptly.
+ * To avoid this issue,  process VHOST_USER_NONE request ahead
+ * of vu_process_message.
+ *
+ */
+static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
+{
+    if (vmsg->request == VHOST_USER_NONE) {
+        dev->panic(dev, "disconnect");
+        return true;
+    }
+    return false;
+}
+
+
+static const VuDevIface vu_block_iface = {
+    .get_features          = vu_block_get_features,
+    .queue_set_started     = vu_block_queue_set_started,
+    .get_protocol_features = vu_block_get_protocol_features,
+    .get_config            = vu_block_get_config,
+    .set_config            = vu_block_set_config,
+    .process_msg           = vu_block_process_msg,
+};
+
+static void blk_aio_attached(AioContext *ctx, void *opaque)
+{
+    VuBlockDev *vub_dev = opaque;
+    aio_context_acquire(ctx);
+    vhost_user_server_set_aio_context(ctx, &vub_dev->vu_server);
+    aio_context_release(ctx);
+}
+
+static void blk_aio_detach(void *opaque)
+{
+    VuBlockDev *vub_dev = opaque;
+    AioContext *ctx = vub_dev->vu_server.ctx;
+    aio_context_acquire(ctx);
+    vhost_user_server_set_aio_context(NULL, &vub_dev->vu_server);
+    aio_context_release(ctx);
+}
+
+
+static void
+vu_block_initialize_config(BlockDriverState *bs,
+                           struct virtio_blk_config *config, uint32_t blk_size)
+{
+    config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
+    config->blk_size = blk_size;
+    config->size_max = 0;
+    config->seg_max = 128 - 2;
+    config->min_io_size = 1;
+    config->opt_io_size = 1;
+    config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
+    config->max_discard_sectors = 32768;
+    config->max_discard_seg = 1;
+    config->discard_sector_alignment = config->blk_size >> 9;
+    config->max_write_zeroes_sectors = 32768;
+    config->max_write_zeroes_seg = 1;
+}
+
+
+static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
+{
+
+    BlockBackend *blk;
+    Error *local_error = NULL;
+    const char *node_name = vu_block_device->node_name;
+    bool writable = vu_block_device->writable;
+    /*
+     * Don't allow resize while the vhost user server is running,
+     * otherwise we don't care what happens with the node.
+     */
+    uint64_t perm = BLK_PERM_CONSISTENT_READ;
+    int ret;
+
+    AioContext *ctx;
+
+    BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
+
+    if (!bs) {
+        error_propagate(errp, local_error);
+        return NULL;
+    }
+
+    if (bdrv_is_read_only(bs)) {
+        writable = false;
+    }
+
+    if (writable) {
+        perm |= BLK_PERM_WRITE;
+    }
+
+    ctx = bdrv_get_aio_context(bs);
+    aio_context_acquire(ctx);
+    bdrv_invalidate_cache(bs, NULL);
+    aio_context_release(ctx);
+
+    blk = blk_new(bdrv_get_aio_context(bs), perm,
+                  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
+                  BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
+    ret = blk_insert_bs(blk, bs, errp);
+
+    if (ret < 0) {
+        goto fail;
+    }
+
+    blk_set_enable_write_cache(blk, false);
+
+    blk_set_allow_aio_context_change(blk, true);
+
+    vu_block_device->blkcfg.wce = 0;
+    vu_block_device->backend = blk;
+    if (!vu_block_device->blk_size) {
+        vu_block_device->blk_size = BDRV_SECTOR_SIZE;
+    }
+    vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
+    blk_set_guest_block_size(blk, vu_block_device->blk_size);
+    vu_block_initialize_config(bs, &vu_block_device->blkcfg,
+                                   vu_block_device->blk_size);
+    return vu_block_device;
+
+fail:
+    blk_unref(blk);
+    return NULL;
+}
+
+static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
+{
+    if (!vu_block_device) {
+        return;
+    }
+
+    vhost_user_server_stop(&vu_block_device->vu_server);
+
+    if (vu_block_device->backend) {
+        blk_remove_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
+                                        blk_aio_detach, vu_block_device);
+    }
+
+    blk_unref(vu_block_device->backend);
+
+}
+
+
+static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
+                                        Error **errp)
+{
+    SocketAddress *addr = vu_block_device->addr;
+
+    if (!vu_block_init(vu_block_device, errp)) {
+        return;
+    }
+
+    AioContext *ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
+
+    if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
+                                 VHOST_USER_BLK_MAX_QUEUES,
+                                 NULL, &vu_block_iface,
+                                 errp)) {
+        goto error;
+    }
+
+    blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
+                                 blk_aio_detach, vu_block_device);
+    vu_block_device->running = true;
+    return;
+
+ error:
+    vhost_user_blk_server_stop(vu_block_device);
+}
+
+static bool vu_prop_modificable(VuBlockDev *vus, Error **errp)
+{
+    if (vus->running) {
+            error_setg(errp, "The property can't be modified "
+                    "while the server is running");
+            return false;
+    }
+    return true;
+}
+static void vu_set_node_name(Object *obj, const char *value, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    if (vus->node_name) {
+        if (!vu_prop_modificable(vus, errp)) {
+            return;
+        }
+        g_free(vus->node_name);
+    }
+
+    vus->node_name = g_strdup(value);
+}
+
+static char *vu_get_node_name(Object *obj, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    return g_strdup(vus->node_name);
+}
+
+
+static void vu_set_unix_socket(Object *obj, const char *value,
+                               Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    if (vus->addr) {
+        if (!vu_prop_modificable(vus, errp)) {
+            return;
+        }
+        g_free(vus->addr->u.q_unix.path);
+        g_free(vus->addr);
+    }
+
+    SocketAddress *addr = g_new0(SocketAddress, 1);
+    addr->type = SOCKET_ADDRESS_TYPE_UNIX;
+    addr->u.q_unix.path = g_strdup(value);
+    vus->addr = addr;
+}
+
+static char *vu_get_unix_socket(Object *obj, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    return g_strdup(vus->addr->u.q_unix.path);
+}
+
+static bool vu_get_block_writable(Object *obj, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    return vus->writable;
+}
+
+static void vu_set_block_writable(Object *obj, bool value, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    if (!vu_prop_modificable(vus, errp)) {
+            return;
+    }
+
+    vus->writable = value;
+}
+
+static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    uint32_t value = vus->blk_size;
+
+    visit_type_uint32(v, name, &value, errp);
+}
+
+static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    Error *local_err = NULL;
+    uint32_t value;
+
+    if (!vu_prop_modificable(vus, errp)) {
+            return;
+    }
+
+    visit_type_uint32(v, name, &value, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    check_logical_block_size(object_get_typename(obj), name, value, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    vus->blk_size = value;
+
+out:
+    error_propagate(errp, local_err);
+    vus->blk_size = value;
+}
+
+
+static void vhost_user_blk_server_instance_finalize(Object *obj)
+{
+    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
+
+    vhost_user_blk_server_stop(vub);
+}
+
+static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
+{
+    Error *local_error = NULL;
+    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
+
+    vhost_user_blk_server_start(vub, &local_error);
+
+    if (local_error) {
+        error_propagate(errp, local_error);
+        return;
+    }
+}
+
+static void vhost_user_blk_server_class_init(ObjectClass *klass,
+                                             void *class_data)
+{
+    UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
+    ucc->complete = vhost_user_blk_server_complete;
+
+    object_class_property_add_bool(klass, "writable",
+                                   vu_get_block_writable,
+                                   vu_set_block_writable);
+
+    object_class_property_add_str(klass, "node-name",
+                                  vu_get_node_name,
+                                  vu_set_node_name);
+
+    object_class_property_add_str(klass, "unix-socket",
+                                  vu_get_unix_socket,
+                                  vu_set_unix_socket);
+
+    object_class_property_add(klass, "logical-block-size", "uint32",
+                              vu_get_blk_size, vu_set_blk_size,
+                              NULL, NULL);
+}
+
+static const TypeInfo vhost_user_blk_server_info = {
+    .name = TYPE_VHOST_USER_BLK_SERVER,
+    .parent = TYPE_OBJECT,
+    .instance_size = sizeof(VuBlockDev),
+    .instance_finalize = vhost_user_blk_server_instance_finalize,
+    .class_init = vhost_user_blk_server_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        {TYPE_USER_CREATABLE},
+        {}
+    },
+};
+
+static void vhost_user_blk_server_register_types(void)
+{
+    type_register_static(&vhost_user_blk_server_info);
+}
+
+type_init(vhost_user_blk_server_register_types)
diff --git a/block/export/vhost-user-blk-server.h b/block/export/vhost-user-blk-server.h
new file mode 100644
index 0000000000..5398e5d352
--- /dev/null
+++ b/block/export/vhost-user-blk-server.h
@@ -0,0 +1,35 @@
+/*
+ * Sharing QEMU block devices via vhost-user protocal
+ *
+ * Author: Coiby Xu <coiby.xu@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+
+#ifndef VHOST_USER_BLK_SERVER_H
+#define VHOST_USER_BLK_SERVER_H
+#include "util/vhost-user-server.h"
+
+typedef struct VuBlockDev VuBlockDev;
+#define TYPE_VHOST_USER_BLK_SERVER "vhost-user-blk-server"
+#define VHOST_USER_BLK_SERVER(obj) \
+   OBJECT_CHECK(VuBlockDev, obj, TYPE_VHOST_USER_BLK_SERVER)
+
+/* vhost user block device */
+struct VuBlockDev {
+    Object parent_obj;
+    char *node_name;
+    SocketAddress *addr;
+    AioContext *ctx;
+    VuServer vu_server;
+    bool running;
+    uint32_t blk_size;
+    BlockBackend *backend;
+    QIOChannelSocket *sioc;
+    QTAILQ_ENTRY(VuBlockDev) next;
+    struct virtio_blk_config blkcfg;
+    bool writable;
+};
+
+#endif /* VHOST_USER_BLK_SERVER_H */
diff --git a/softmmu/vl.c b/softmmu/vl.c
index 05d1a4cb6b..838df3e57a 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -2520,6 +2520,10 @@ static bool object_create_initial(const char *type, QemuOpts *opts)
     }
 #endif
 
+    /* Reason: vhost-user-blk-server property "node-name" */
+    if (g_str_equal(type, "vhost-user-blk-server")) {
+        return false;
+    }
     /*
      * Reason: filter-* property "netdev" etc.
      */
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
                   ` (3 preceding siblings ...)
  2020-06-14 18:39 ` [PATCH v9 4/5] vhost-user block device backend server Coiby Xu
@ 2020-06-14 18:39 ` Coiby Xu
  2020-06-18 15:17   ` Stefan Hajnoczi
  2020-06-24 15:14   ` Thomas Huth
  2020-06-14 19:12 ` [PATCH v9 0/5] vhost-user block device backend implementation no-reply
                   ` (4 subsequent siblings)
  9 siblings, 2 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-14 18:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, Laurent Vivier, Thomas Huth, Coiby Xu, bharatlkmlkvm,
	stefanha, Paolo Bonzini

This test case has the same tests as tests/virtio-blk-test.c except for
tests have block_resize. Since vhost-user server can only server one
client one time, two instances of qemu-storage-daemon are launched
for the hotplug test.

In order to not block scripts/tap-driver.pl, vhost-user-blk-server will
send "quit" command to qemu-storage-daemon's QMP monitor. So a function
is added to libqtest.c to establish socket connection with socket
server.

Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
---
 tests/Makefile.include              |   3 +-
 tests/qtest/Makefile.include        |   2 +
 tests/qtest/libqos/vhost-user-blk.c | 130 +++++
 tests/qtest/libqos/vhost-user-blk.h |  48 ++
 tests/qtest/libqtest.c              |  35 +-
 tests/qtest/libqtest.h              |  17 +
 tests/qtest/vhost-user-blk-test.c   | 739 ++++++++++++++++++++++++++++
 7 files changed, 971 insertions(+), 3 deletions(-)
 create mode 100644 tests/qtest/libqos/vhost-user-blk.c
 create mode 100644 tests/qtest/libqos/vhost-user-blk.h
 create mode 100644 tests/qtest/vhost-user-blk-test.c

diff --git a/tests/Makefile.include b/tests/Makefile.include
index c2397de8ed..303235b40f 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -638,7 +638,8 @@ endef
 $(patsubst %, check-qtest-%, $(QTEST_TARGETS)): check-qtest-%: %-softmmu/all $(check-qtest-y)
 	$(call do_test_human,$(check-qtest-$*-y:%=tests/qtest/%$(EXESUF)) $(check-qtest-generic-y:%=tests/qtest/%$(EXESUF)), \
 	  QTEST_QEMU_BINARY=$*-softmmu/qemu-system-$* \
-	  QTEST_QEMU_IMG=qemu-img$(EXESUF))
+	  QTEST_QEMU_IMG=./qemu-img$(EXESUF) \
+	  QTEST_QEMU_STORAGE_DAEMON_BINARY=./qemu-storage-daemon$(EXESUF))
 
 check-unit: $(check-unit-y)
 	$(call do_test_human, $^)
diff --git a/tests/qtest/Makefile.include b/tests/qtest/Makefile.include
index 9e5a51d033..b6f081cb26 100644
--- a/tests/qtest/Makefile.include
+++ b/tests/qtest/Makefile.include
@@ -186,6 +186,7 @@ libqos-obj-y += tests/qtest/libqos/virtio.o
 libqos-obj-$(CONFIG_VIRTFS) += tests/qtest/libqos/virtio-9p.o
 libqos-obj-y += tests/qtest/libqos/virtio-balloon.o
 libqos-obj-y += tests/qtest/libqos/virtio-blk.o
+libqos-obj-$(CONFIG_LINUX) += tests/qtest/libqos/vhost-user-blk.o
 libqos-obj-y += tests/qtest/libqos/virtio-mmio.o
 libqos-obj-y += tests/qtest/libqos/virtio-net.o
 libqos-obj-y += tests/qtest/libqos/virtio-pci.o
@@ -230,6 +231,7 @@ qos-test-obj-$(CONFIG_VHOST_NET_USER) += tests/qtest/vhost-user-test.o $(chardev
 qos-test-obj-y += tests/qtest/virtio-test.o
 qos-test-obj-$(CONFIG_VIRTFS) += tests/qtest/virtio-9p-test.o
 qos-test-obj-y += tests/qtest/virtio-blk-test.o
+qos-test-obj-$(CONFIG_LINUX) += tests/qtest/vhost-user-blk-test.o
 qos-test-obj-y += tests/qtest/virtio-net-test.o
 qos-test-obj-y += tests/qtest/virtio-rng-test.o
 qos-test-obj-y += tests/qtest/virtio-scsi-test.o
diff --git a/tests/qtest/libqos/vhost-user-blk.c b/tests/qtest/libqos/vhost-user-blk.c
new file mode 100644
index 0000000000..3de9c59194
--- /dev/null
+++ b/tests/qtest/libqos/vhost-user-blk.c
@@ -0,0 +1,130 @@
+/*
+ * libqos driver framework
+ *
+ * Based on tests/qtest/libqos/virtio-blk.c
+ *
+ * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
+ *
+ * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License version 2.1 as published by the Free Software Foundation.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest.h"
+#include "qemu/module.h"
+#include "standard-headers/linux/virtio_blk.h"
+#include "libqos/qgraph.h"
+#include "libqos/vhost-user-blk.h"
+
+#define PCI_SLOT                0x04
+#define PCI_FN                  0x00
+
+/* virtio-blk-device */
+static void *qvhost_user_blk_get_driver(QVhostUserBlk *v_blk,
+                                    const char *interface)
+{
+    if (!g_strcmp0(interface, "vhost-user-blk")) {
+        return v_blk;
+    }
+    if (!g_strcmp0(interface, "virtio")) {
+        return v_blk->vdev;
+    }
+
+    fprintf(stderr, "%s not present in vhost-user-blk-device\n", interface);
+    g_assert_not_reached();
+}
+
+static void *qvhost_user_blk_device_get_driver(void *object,
+                                           const char *interface)
+{
+    QVhostUserBlkDevice *v_blk = object;
+    return qvhost_user_blk_get_driver(&v_blk->blk, interface);
+}
+
+static void *vhost_user_blk_device_create(void *virtio_dev,
+                                      QGuestAllocator *t_alloc,
+                                      void *addr)
+{
+    QVhostUserBlkDevice *vhost_user_blk = g_new0(QVhostUserBlkDevice, 1);
+    QVhostUserBlk *interface = &vhost_user_blk->blk;
+
+    interface->vdev = virtio_dev;
+
+    vhost_user_blk->obj.get_driver = qvhost_user_blk_device_get_driver;
+
+    return &vhost_user_blk->obj;
+}
+
+/* virtio-blk-pci */
+static void *qvhost_user_blk_pci_get_driver(void *object, const char *interface)
+{
+    QVhostUserBlkPCI *v_blk = object;
+    if (!g_strcmp0(interface, "pci-device")) {
+        return v_blk->pci_vdev.pdev;
+    }
+    return qvhost_user_blk_get_driver(&v_blk->blk, interface);
+}
+
+static void *vhost_user_blk_pci_create(void *pci_bus, QGuestAllocator *t_alloc,
+                                      void *addr)
+{
+    QVhostUserBlkPCI *vhost_user_blk = g_new0(QVhostUserBlkPCI, 1);
+    QVhostUserBlk *interface = &vhost_user_blk->blk;
+    QOSGraphObject *obj = &vhost_user_blk->pci_vdev.obj;
+
+    virtio_pci_init(&vhost_user_blk->pci_vdev, pci_bus, addr);
+    interface->vdev = &vhost_user_blk->pci_vdev.vdev;
+
+    g_assert_cmphex(interface->vdev->device_type, ==, VIRTIO_ID_BLOCK);
+
+    obj->get_driver = qvhost_user_blk_pci_get_driver;
+
+    return obj;
+}
+
+static void vhost_user_blk_register_nodes(void)
+{
+    /*
+     * FIXME: every test using these two nodes needs to setup a
+     * -drive,id=drive0 otherwise QEMU is not going to start.
+     * Therefore, we do not include "produces" edge for virtio
+     * and pci-device yet.
+     */
+
+    char *arg = g_strdup_printf("id=drv0,chardev=char1,addr=%x.%x",
+                                PCI_SLOT, PCI_FN);
+
+    QPCIAddress addr = {
+        .devfn = QPCI_DEVFN(PCI_SLOT, PCI_FN),
+    };
+
+    QOSGraphEdgeOptions opts = { };
+
+    /* virtio-blk-device */
+    /** opts.extra_device_opts = "drive=drive0"; */
+    qos_node_create_driver("vhost-user-blk-device", vhost_user_blk_device_create);
+    qos_node_consumes("vhost-user-blk-device", "virtio-bus", &opts);
+    qos_node_produces("vhost-user-blk-device", "vhost-user-blk");
+
+    /* virtio-blk-pci */
+    opts.extra_device_opts = arg;
+    add_qpci_address(&opts, &addr);
+    qos_node_create_driver("vhost-user-blk-pci", vhost_user_blk_pci_create);
+    qos_node_consumes("vhost-user-blk-pci", "pci-bus", &opts);
+    qos_node_produces("vhost-user-blk-pci", "vhost-user-blk");
+
+    g_free(arg);
+}
+
+libqos_init(vhost_user_blk_register_nodes);
diff --git a/tests/qtest/libqos/vhost-user-blk.h b/tests/qtest/libqos/vhost-user-blk.h
new file mode 100644
index 0000000000..40a85d808d
--- /dev/null
+++ b/tests/qtest/libqos/vhost-user-blk.h
@@ -0,0 +1,48 @@
+/*
+ * libqos driver framework
+ *
+ * Based on tests/qtest/libqos/virtio-blk.c
+ *
+ * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
+ *
+ * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License version 2 as published by the Free Software Foundation.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+
+#ifndef TESTS_LIBQOS_VHOST_USER_BLK_H
+#define TESTS_LIBQOS_VHOST_USER_BLK_H
+
+#include "libqos/qgraph.h"
+#include "libqos/virtio.h"
+#include "libqos/virtio-pci.h"
+
+typedef struct QVhostUserBlk QVhostUserBlk;
+typedef struct QVhostUserBlkPCI QVhostUserBlkPCI;
+typedef struct QVhostUserBlkDevice QVhostUserBlkDevice;
+
+struct QVhostUserBlk {
+    QVirtioDevice *vdev;
+};
+
+struct QVhostUserBlkPCI {
+    QVirtioPCIDevice pci_vdev;
+    QVhostUserBlk blk;
+};
+
+struct QVhostUserBlkDevice {
+    QOSGraphObject obj;
+    QVhostUserBlk blk;
+};
+
+#endif
diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
index 49075b55a1..02cc09f893 100644
--- a/tests/qtest/libqtest.c
+++ b/tests/qtest/libqtest.c
@@ -52,8 +52,7 @@ typedef struct QTestClientTransportOps {
     QTestRecvFn     recv_line; /* for receiving qtest command responses */
 } QTestTransportOps;
 
-struct QTestState
-{
+struct QTestState {
     int fd;
     int qmp_fd;
     pid_t qemu_pid;  /* our child QEMU process */
@@ -608,6 +607,38 @@ QDict *qtest_qmp_receive(QTestState *s)
     return qmp_fd_receive(s->qmp_fd);
 }
 
+QTestState *qtest_create_state_with_qmp_fd(int fd)
+{
+    QTestState *qmp_test_state = g_new0(QTestState, 1);
+    qmp_test_state->qmp_fd = fd;
+    return qmp_test_state;
+}
+
+int qtest_socket_client(char *server_socket_path)
+{
+    struct sockaddr_un serv_addr;
+    int sock;
+    int ret;
+    int retries = 0;
+    sock = socket(PF_UNIX, SOCK_STREAM, 0);
+    g_assert_cmpint(sock, !=, -1);
+    serv_addr.sun_family = AF_UNIX;
+    snprintf(serv_addr.sun_path, sizeof(serv_addr.sun_path), "%s",
+             server_socket_path);
+
+    do {
+        ret = connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
+        if (ret == 0) {
+            break;
+        }
+        retries += 1;
+        g_usleep(G_USEC_PER_SEC);
+    } while (retries < 3);
+
+    g_assert_cmpint(ret, ==, 0);
+    return sock;
+}
+
 /**
  * Allow users to send a message without waiting for the reply,
  * in the case that they choose to discard all replies up until
diff --git a/tests/qtest/libqtest.h b/tests/qtest/libqtest.h
index f5cf93c386..c73c0a9bbe 100644
--- a/tests/qtest/libqtest.h
+++ b/tests/qtest/libqtest.h
@@ -132,6 +132,23 @@ void qtest_qmp_send(QTestState *s, const char *fmt, ...)
 void qtest_qmp_send_raw(QTestState *s, const char *fmt, ...)
     GCC_FMT_ATTR(2, 3);
 
+/**
+ * qtest_socket_client:
+ * @server_socket_path: the socket server's path
+ *
+ * Connect to a socket server.
+ */
+int qtest_socket_client(char *server_socket_path);
+
+/**
+ * qtest_create_state_with_qmp_fd:
+ * @fd: socket fd
+ *
+ * Wrap socket fd in QTestState to make use of qtest_qmp*
+ * functions
+ */
+QTestState *qtest_create_state_with_qmp_fd(int fd);
+
 /**
  * qtest_vqmp_fds:
  * @s: #QTestState instance to operate on.
diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
new file mode 100644
index 0000000000..56e3d8f338
--- /dev/null
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -0,0 +1,739 @@
+/*
+ * QTest testcase for VirtIO Block Device
+ *
+ * Copyright (c) 2014 SUSE LINUX Products GmbH
+ * Copyright (c) 2014 Marc Marí
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest-single.h"
+#include "qemu/bswap.h"
+#include "qemu/module.h"
+#include "standard-headers/linux/virtio_blk.h"
+#include "standard-headers/linux/virtio_pci.h"
+#include "libqos/qgraph.h"
+#include "libqos/vhost-user-blk.h"
+#include "libqos/libqos-pc.h"
+
+/* TODO actually test the results and get rid of this */
+#define qmp_discard_response(...) qobject_unref(qmp(__VA_ARGS__))
+
+#define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
+#define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
+#define PCI_SLOT_HP             0x06
+
+typedef struct QVirtioBlkReq {
+    uint32_t type;
+    uint32_t ioprio;
+    uint64_t sector;
+    char *data;
+    uint8_t status;
+} QVirtioBlkReq;
+
+
+#ifdef HOST_WORDS_BIGENDIAN
+static const bool host_is_big_endian = true;
+#else
+static const bool host_is_big_endian; /* false */
+#endif
+
+static inline void virtio_blk_fix_request(QVirtioDevice *d, QVirtioBlkReq *req)
+{
+    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
+        req->type = bswap32(req->type);
+        req->ioprio = bswap32(req->ioprio);
+        req->sector = bswap64(req->sector);
+    }
+}
+
+
+static inline void virtio_blk_fix_dwz_hdr(QVirtioDevice *d,
+    struct virtio_blk_discard_write_zeroes *dwz_hdr)
+{
+    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
+        dwz_hdr->sector = bswap64(dwz_hdr->sector);
+        dwz_hdr->num_sectors = bswap32(dwz_hdr->num_sectors);
+        dwz_hdr->flags = bswap32(dwz_hdr->flags);
+    }
+}
+
+static uint64_t virtio_blk_request(QGuestAllocator *alloc, QVirtioDevice *d,
+                                   QVirtioBlkReq *req, uint64_t data_size)
+{
+    uint64_t addr;
+    uint8_t status = 0xFF;
+
+    switch (req->type) {
+    case VIRTIO_BLK_T_IN:
+    case VIRTIO_BLK_T_OUT:
+        g_assert_cmpuint(data_size % 512, ==, 0);
+        break;
+    case VIRTIO_BLK_T_DISCARD:
+    case VIRTIO_BLK_T_WRITE_ZEROES:
+        g_assert_cmpuint(data_size %
+                         sizeof(struct virtio_blk_discard_write_zeroes), ==, 0);
+        break;
+    default:
+        g_assert_cmpuint(data_size, ==, 0);
+    }
+
+    addr = guest_alloc(alloc, sizeof(*req) + data_size);
+
+    virtio_blk_fix_request(d, req);
+
+    memwrite(addr, req, 16);
+    memwrite(addr + 16, req->data, data_size);
+    memwrite(addr + 16 + data_size, &status, sizeof(status));
+
+    return addr;
+}
+
+/* Returns the request virtqueue so the caller can perform further tests */
+static QVirtQueue *test_basic(QVirtioDevice *dev, QGuestAllocator *alloc)
+{
+    QVirtioBlkReq req;
+    uint64_t req_addr;
+    uint64_t capacity;
+    uint64_t features;
+    uint32_t free_head;
+    uint8_t status;
+    char *data;
+    QTestState *qts = global_qtest;
+    QVirtQueue *vq;
+
+    features = qvirtio_get_features(dev);
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                    (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+                    (1u << VIRTIO_RING_F_EVENT_IDX) |
+                    (1u << VIRTIO_BLK_F_SCSI));
+    qvirtio_set_features(dev, features);
+
+    capacity = qvirtio_config_readq(dev, 0);
+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
+
+    vq = qvirtqueue_setup(dev, alloc, 0);
+
+    qvirtio_set_driver_ok(dev);
+
+    /* Write and read with 3 descriptor layout */
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    guest_free(alloc, req_addr);
+
+    /* Read request */
+    req.type = VIRTIO_BLK_T_IN;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+
+    req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    data = g_malloc0(512);
+    memread(req_addr + 16, data, 512);
+    g_assert_cmpstr(data, ==, "TEST");
+    g_free(data);
+
+    guest_free(alloc, req_addr);
+
+    if (features & (1u << VIRTIO_BLK_F_WRITE_ZEROES)) {
+        struct virtio_blk_discard_write_zeroes dwz_hdr;
+        void *expected;
+
+        /*
+         * WRITE_ZEROES request on the same sector of previous test where
+         * we wrote "TEST".
+         */
+        req.type = VIRTIO_BLK_T_WRITE_ZEROES;
+        req.data = (char *) &dwz_hdr;
+        dwz_hdr.sector = 0;
+        dwz_hdr.num_sectors = 1;
+        dwz_hdr.flags = 0;
+
+        virtio_blk_fix_dwz_hdr(dev, &dwz_hdr);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr));
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr), false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr), 1, true,
+                       false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 16 + sizeof(dwz_hdr));
+        g_assert_cmpint(status, ==, 0);
+
+        guest_free(alloc, req_addr);
+
+        /* Read request to check if the sector contains all zeroes */
+        req.type = VIRTIO_BLK_T_IN;
+        req.ioprio = 1;
+        req.sector = 0;
+        req.data = g_malloc0(512);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+        g_free(req.data);
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
+        qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 528);
+        g_assert_cmpint(status, ==, 0);
+
+        data = g_malloc(512);
+        expected = g_malloc0(512);
+        memread(req_addr + 16, data, 512);
+        g_assert_cmpmem(data, 512, expected, 512);
+        g_free(expected);
+        g_free(data);
+
+        guest_free(alloc, req_addr);
+    }
+
+    if (features & (1u << VIRTIO_BLK_F_DISCARD)) {
+        struct virtio_blk_discard_write_zeroes dwz_hdr;
+
+        req.type = VIRTIO_BLK_T_DISCARD;
+        req.data = (char *) &dwz_hdr;
+        dwz_hdr.sector = 0;
+        dwz_hdr.num_sectors = 1;
+        dwz_hdr.flags = 0;
+
+        virtio_blk_fix_dwz_hdr(dev, &dwz_hdr);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr));
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr), false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr),
+                       1, true, false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 16 + sizeof(dwz_hdr));
+        g_assert_cmpint(status, ==, 0);
+
+        guest_free(alloc, req_addr);
+    }
+
+    if (features & (1u << VIRTIO_F_ANY_LAYOUT)) {
+        /* Write and read with 2 descriptor layout */
+        /* Write request */
+        req.type = VIRTIO_BLK_T_OUT;
+        req.ioprio = 1;
+        req.sector = 1;
+        req.data = g_malloc0(512);
+        strcpy(req.data, "TEST");
+
+        req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+        g_free(req.data);
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 528, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 528);
+        g_assert_cmpint(status, ==, 0);
+
+        guest_free(alloc, req_addr);
+
+        /* Read request */
+        req.type = VIRTIO_BLK_T_IN;
+        req.ioprio = 1;
+        req.sector = 1;
+        req.data = g_malloc0(512);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+        g_free(req.data);
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, 513, true, false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 528);
+        g_assert_cmpint(status, ==, 0);
+
+        data = g_malloc0(512);
+        memread(req_addr + 16, data, 512);
+        g_assert_cmpstr(data, ==, "TEST");
+        g_free(data);
+
+        guest_free(alloc, req_addr);
+    }
+
+    return vq;
+}
+
+static void basic(void *obj, void *data, QGuestAllocator *t_alloc)
+{
+    QVhostUserBlk *blk_if = obj;
+    QVirtQueue *vq;
+
+    vq = test_basic(blk_if->vdev, t_alloc);
+    qvirtqueue_cleanup(blk_if->vdev->bus, vq, t_alloc);
+
+}
+
+static void indirect(void *obj, void *u_data, QGuestAllocator *t_alloc)
+{
+    QVirtQueue *vq;
+    QVhostUserBlk *blk_if = obj;
+    QVirtioDevice *dev = blk_if->vdev;
+    QVirtioBlkReq req;
+    QVRingIndirectDesc *indirect;
+    uint64_t req_addr;
+    uint64_t capacity;
+    uint64_t features;
+    uint32_t free_head;
+    uint8_t status;
+    char *data;
+    QTestState *qts = global_qtest;
+
+    features = qvirtio_get_features(dev);
+    g_assert_cmphex(features & (1u << VIRTIO_RING_F_INDIRECT_DESC), !=, 0);
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                            (1u << VIRTIO_RING_F_EVENT_IDX) |
+                            (1u << VIRTIO_BLK_F_SCSI));
+    qvirtio_set_features(dev, features);
+
+    capacity = qvirtio_config_readq(dev, 0);
+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
+
+    vq = qvirtqueue_setup(dev, t_alloc, 0);
+    qvirtio_set_driver_ok(dev);
+
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 528, false);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 528, 1, true);
+    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    g_free(indirect);
+    guest_free(t_alloc, req_addr);
+
+    /* Read request */
+    req.type = VIRTIO_BLK_T_IN;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 16, false);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 16, 513, true);
+    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    data = g_malloc0(512);
+    memread(req_addr + 16, data, 512);
+    g_assert_cmpstr(data, ==, "TEST");
+    g_free(data);
+
+    g_free(indirect);
+    guest_free(t_alloc, req_addr);
+    qvirtqueue_cleanup(dev->bus, vq, t_alloc);
+}
+
+
+static void idx(void *obj, void *u_data, QGuestAllocator *t_alloc)
+{
+    QVirtQueue *vq;
+    QVhostUserBlkPCI *blk = obj;
+    QVirtioPCIDevice *pdev = &blk->pci_vdev;
+    QVirtioDevice *dev = &pdev->vdev;
+    QVirtioBlkReq req;
+    uint64_t req_addr;
+    uint64_t capacity;
+    uint64_t features;
+    uint32_t free_head;
+    uint32_t write_head;
+    uint32_t desc_idx;
+    uint8_t status;
+    char *data;
+    QOSGraphObject *blk_object = obj;
+    QPCIDevice *pci_dev = blk_object->get_driver(blk_object, "pci-device");
+    QTestState *qts = global_qtest;
+
+    if (qpci_check_buggy_msi(pci_dev)) {
+        return;
+    }
+
+    qpci_msix_enable(pdev->pdev);
+    qvirtio_pci_set_msix_configuration_vector(pdev, t_alloc, 0);
+
+    features = qvirtio_get_features(dev);
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                            (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+                            (1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+                            (1u << VIRTIO_BLK_F_SCSI));
+    qvirtio_set_features(dev, features);
+
+    capacity = qvirtio_config_readq(dev, 0);
+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
+
+    vq = qvirtqueue_setup(dev, t_alloc, 0);
+    qvirtqueue_pci_msix_setup(pdev, (QVirtQueuePCI *)vq, t_alloc, 1);
+
+    qvirtio_set_driver_ok(dev);
+
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 1;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    /* Notify after processing the third request */
+    qvirtqueue_set_used_event(qts, vq, 2);
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+    write_head = free_head;
+
+    /* No notification expected */
+    status = qvirtio_wait_status_byte_no_isr(qts, dev,
+                                             vq, req_addr + 528,
+                                             QVIRTIO_BLK_TIMEOUT_US);
+    g_assert_cmpint(status, ==, 0);
+
+    guest_free(t_alloc, req_addr);
+
+    /* Read request */
+    req.type = VIRTIO_BLK_T_IN;
+    req.ioprio = 1;
+    req.sector = 1;
+    req.data = g_malloc0(512);
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    /* We get just one notification for both requests */
+    qvirtio_wait_used_elem(qts, dev, vq, write_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    g_assert(qvirtqueue_get_buf(qts, vq, &desc_idx, NULL));
+    g_assert_cmpint(desc_idx, ==, free_head);
+
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    data = g_malloc0(512);
+    memread(req_addr + 16, data, 512);
+    g_assert_cmpstr(data, ==, "TEST");
+    g_free(data);
+
+    guest_free(t_alloc, req_addr);
+
+    /* End test */
+    qpci_msix_disable(pdev->pdev);
+
+    qvirtqueue_cleanup(dev->bus, vq, t_alloc);
+}
+
+static void pci_hotplug(void *obj, void *data, QGuestAllocator *t_alloc)
+{
+    QVirtioPCIDevice *dev1 = obj;
+    QVirtioPCIDevice *dev;
+    QTestState *qts = dev1->pdev->bus->qts;
+
+    /* plug secondary disk */
+    qtest_qmp_device_add(qts, "vhost-user-blk-pci", "drv1",
+                         "{'addr': %s, 'chardev': 'char2'}",
+                         stringify(PCI_SLOT_HP) ".0");
+
+    dev = virtio_pci_new(dev1->pdev->bus,
+                         &(QPCIAddress) { .devfn = QPCI_DEVFN(PCI_SLOT_HP, 0)
+                                        });
+    g_assert_nonnull(dev);
+    g_assert_cmpint(dev->vdev.device_type, ==, VIRTIO_ID_BLOCK);
+    qvirtio_pci_device_disable(dev);
+    qos_object_destroy((QOSGraphObject *)dev);
+
+    /* unplug secondary disk */
+    qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
+}
+
+/*
+ * Check that setting the vring addr on a non-existent virtqueue does
+ * not crash.
+ */
+static void test_nonexistent_virtqueue(void *obj, void *data,
+                                       QGuestAllocator *t_alloc)
+{
+    QVhostUserBlkPCI *blk = obj;
+    QVirtioPCIDevice *pdev = &blk->pci_vdev;
+    QPCIBar bar0;
+    QPCIDevice *dev;
+
+    dev = qpci_device_find(pdev->pdev->bus, QPCI_DEVFN(4, 0));
+    g_assert(dev != NULL);
+    qpci_device_enable(dev);
+
+    bar0 = qpci_iomap(dev, 0, NULL);
+
+    qpci_io_writeb(dev, bar0, VIRTIO_PCI_QUEUE_SEL, 2);
+    qpci_io_writel(dev, bar0, VIRTIO_PCI_QUEUE_PFN, 1);
+
+    g_free(dev);
+}
+
+static const char *qtest_qemu_storage_daemon_binary(void)
+{
+    const char *qemu_storage_daemon_bin;
+
+    qemu_storage_daemon_bin = getenv("QTEST_QEMU_STORAGE_DAEMON_BINARY");
+    if (!qemu_storage_daemon_bin) {
+        fprintf(stderr, "Environment variable "
+                        "QTEST_QEMU_STORAGE_DAEMON_BINARY required\n");
+        exit(0);
+    }
+
+    return qemu_storage_daemon_bin;
+}
+
+static void drive_destroy(void *path)
+{
+    unlink(path);
+    g_free(path);
+    qos_invalidate_command_line();
+}
+
+
+static char *drive_create(void)
+{
+    int fd, ret;
+    /** vhost-user-blk won't recognize drive located in /tmp */
+    char *t_path = g_strdup("qtest.XXXXXX");
+
+    /** Create a temporary raw image */
+    fd = mkstemp(t_path);
+    g_assert_cmpint(fd, >=, 0);
+    ret = ftruncate(fd, TEST_IMAGE_SIZE);
+    g_assert_cmpint(ret, ==, 0);
+    close(fd);
+
+    g_test_queue_destroy(drive_destroy, t_path);
+    return t_path;
+}
+
+static char sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.XXXXXX";
+static char qmp_sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.qmp.XXXXXX";
+
+
+static void quit_storage_daemon(void *qmp_test_state)
+{
+    qobject_unref(qtest_qmp((QTestState *)qmp_test_state, "{ 'execute': 'quit' }"));
+    g_free(qmp_test_state);
+}
+
+static char *start_vhost_user_blk(void)
+{
+    int fd, qmp_fd;
+    char *sock_path = g_strdup(sock_path_tempate);
+    char *qmp_sock_path = g_strdup(qmp_sock_path_tempate);
+    QTestState *qmp_test_state;
+    fd = mkstemp(sock_path);
+    g_assert_cmpint(fd, >=, 0);
+    g_test_queue_destroy(drive_destroy, sock_path);
+
+
+    qmp_fd = mkstemp(qmp_sock_path);
+    g_assert_cmpint(qmp_fd, >=, 0);
+    g_test_queue_destroy(drive_destroy, qmp_sock_path);
+
+    /* create image file */
+    const char *img_path = drive_create();
+
+    const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
+    gchar *command = g_strdup_printf(
+            "exec %s "
+            "--blockdev driver=file,node-name=disk,filename=%s "
+            "--object vhost-user-blk-server,id=disk,unix-socket=%s,"
+            "node-name=disk,writable=on "
+            "--chardev socket,id=qmp,path=%s,server,nowait --monitor chardev=qmp",
+            vhost_user_blk_bin, img_path, sock_path, qmp_sock_path);
+
+
+    g_test_message("starting vhost-user backend: %s", command);
+    pid_t pid = fork();
+    if (pid == 0) {
+        execlp("/bin/sh", "sh", "-c", command, NULL);
+        exit(1);
+    }
+    g_free(command);
+
+    qmp_test_state = qtest_create_state_with_qmp_fd(
+                             qtest_socket_client(qmp_sock_path));
+    /*
+     * Ask qemu-storage-daemon to quit so it
+     * will not block scripts/tap-driver.pl.
+     */
+    g_test_queue_destroy(quit_storage_daemon, qmp_test_state);
+
+    qobject_unref(qtest_qmp(qmp_test_state,
+                  "{ 'execute': 'qmp_capabilities' }"));
+    return sock_path;
+}
+
+
+static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
+{
+    char *sock_path1 = start_vhost_user_blk();
+    g_string_append_printf(cmd_line,
+                           " -object memory-backend-memfd,id=mem,size=128M,share=on -numa node,memdev=mem "
+                           "-chardev socket,id=char1,path=%s ", sock_path1);
+    return arg;
+}
+
+
+/*
+ * Setup for hotplug.
+ *
+ * Since vhost-user server only serves one vhost-user client one time,
+ * another exprot
+ *
+ */
+static void *vhost_user_blk_hotplug_test_setup(GString *cmd_line, void *arg)
+{
+    vhost_user_blk_test_setup(cmd_line, arg);
+    char *sock_path2 = start_vhost_user_blk();
+    /* "-chardev socket,id=char2" is used for pci_hotplug*/
+    g_string_append_printf(cmd_line, "-chardev socket,id=char2,path=%s",
+                           sock_path2);
+    return arg;
+}
+
+static void register_vhost_user_blk_test(void)
+{
+    QOSGraphTestOptions opts = {
+        .before = vhost_user_blk_test_setup,
+    };
+
+    /*
+     * tests for vhost-user-blk and vhost-user-blk-pci
+     * The tests are borrowed from tests/virtio-blk-test.c. But some tests
+     * regarding block_resize don't work for vhost-user-blk.
+     * vhost-user-blk device doesn't have -drive, so tests containing
+     * block_resize are also abandoned,
+     *  - config
+     *  - resize
+     */
+    qos_add_test("basic", "vhost-user-blk", basic, &opts);
+    qos_add_test("indirect", "vhost-user-blk", indirect, &opts);
+    qos_add_test("idx", "vhost-user-blk-pci", idx, &opts);
+    qos_add_test("nxvirtq", "vhost-user-blk-pci",
+                 test_nonexistent_virtqueue, &opts);
+
+    opts.before = vhost_user_blk_hotplug_test_setup;
+    qos_add_test("hotplug", "vhost-user-blk-pci", pci_hotplug, &opts);
+}
+
+libqos_init(register_vhost_user_blk_test);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
                   ` (4 preceding siblings ...)
  2020-06-14 18:39 ` [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server Coiby Xu
@ 2020-06-14 19:12 ` no-reply
  2020-06-14 19:16 ` no-reply
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 51+ messages in thread
From: no-reply @ 2020-06-14 19:12 UTC (permalink / raw)
  To: coiby.xu; +Cc: kwolf, bharatlkmlkvm, stefanha, qemu-devel, coiby.xu

Patchew URL: https://patchew.org/QEMU/20200614183907.514282-1-coiby.xu@gmail.com/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC      hw/gpio/trace.o
  CC      hw/riscv/trace.o
/tmp/qemu-test/src/util/vhost-user-server.c: In function 'vu_message_read':
/tmp/qemu-test/src/util/vhost-user-server.c:142:30: error: 'VHOST_MEMORY_MAX_NREGIONS' undeclared (first use in this function)
                              VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
                              ^
/tmp/qemu-test/src/util/vhost-user-server.c:142:30: note: each undeclared identifier is reported only once for each function it appears in
make: *** [util/vhost-user-server.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 669, in <module>
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=408a16efe57d44399707cea73c0c46cb', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-hyuz9qap/src/docker-src.2020-06-14-15.10.05.3902:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=408a16efe57d44399707cea73c0c46cb
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-hyuz9qap/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    2m3.838s
user    0m8.880s


The full log is available at
http://patchew.org/logs/20200614183907.514282-1-coiby.xu@gmail.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
                   ` (5 preceding siblings ...)
  2020-06-14 19:12 ` [PATCH v9 0/5] vhost-user block device backend implementation no-reply
@ 2020-06-14 19:16 ` no-reply
  2020-06-16  6:52   ` Coiby Xu
  2020-06-19 12:07 ` Stefan Hajnoczi
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 51+ messages in thread
From: no-reply @ 2020-06-14 19:16 UTC (permalink / raw)
  To: coiby.xu; +Cc: kwolf, bharatlkmlkvm, stefanha, qemu-devel, coiby.xu

Patchew URL: https://patchew.org/QEMU/20200614183907.514282-1-coiby.xu@gmail.com/



Hi,

This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC      stubs/vm-stop.o
  CC      ui/input-keymap.o
  CC      qemu-keymap.o
/tmp/qemu-test/src/util/vhost-user-server.c:142:30: error: use of undeclared identifier 'VHOST_MEMORY_MAX_NREGIONS'
                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
                             ^
1 error generated.
make: *** [/tmp/qemu-test/src/rules.mak:69: util/vhost-user-server.o] Error 1
make: *** Waiting for unfinished jobs....
  CC      contrib/elf2dmp/main.o
Traceback (most recent call last):
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=99a7788ed7f54df1b0ebb7f91beb522e', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=x86_64-softmmu', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-9goaol7z/src/docker-src.2020-06-14-15.12.59.10439:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-debug']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=99a7788ed7f54df1b0ebb7f91beb522e
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-9goaol7z/src'
make: *** [docker-run-test-debug@fedora] Error 2

real    3m29.335s
user    0m8.313s


The full log is available at
http://patchew.org/logs/20200614183907.514282-1-coiby.xu@gmail.com/testing.asan/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-14 19:16 ` no-reply
@ 2020-06-16  6:52   ` Coiby Xu
  2020-06-18  8:27     ` Stefan Hajnoczi
  2020-06-18  8:28     ` Stefan Hajnoczi
  0 siblings, 2 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-16  6:52 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, bharatlkmlkvm, stefanha

On Sun, Jun 14, 2020 at 12:16:28PM -0700, no-reply@patchew.org wrote:
>Patchew URL: https://patchew.org/QEMU/20200614183907.514282-1-coiby.xu@gmail.com/
>
>
>
>Hi,
>
>This series failed the asan build test. Please find the testing commands and
>their output below. If you have Docker installed, you can probably reproduce it
>locally.
>
>=== TEST SCRIPT BEGIN ===
>#!/bin/bash
>export ARCH=x86_64
>make docker-image-fedora V=1 NETWORK=1
>time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
>=== TEST SCRIPT END ===
>
>  CC      stubs/vm-stop.o
>  CC      ui/input-keymap.o
>  CC      qemu-keymap.o
>/tmp/qemu-test/src/util/vhost-user-server.c:142:30: error: use of undeclared identifier 'VHOST_MEMORY_MAX_NREGIONS'
>                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
>                             ^
>
>The full log is available at
>http://patchew.org/logs/20200614183907.514282-1-coiby.xu@gmail.com/testing.asan/?type=message.

I couldn't re-produce this error locally for both docker-test-quick@centos7
and this docker test. And I can't see any reason for this error to occur since
VHOST_MEMORY_MAX_NREGIONS is defined in contrib/libvhost-user/libvhost-user.h
which has been included by util/vhost-user-server.h.

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-16  6:52   ` Coiby Xu
@ 2020-06-18  8:27     ` Stefan Hajnoczi
  2020-06-24  4:00       ` Coiby Xu
  2020-06-18  8:28     ` Stefan Hajnoczi
  1 sibling, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-18  8:27 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1710 bytes --]

On Tue, Jun 16, 2020 at 02:52:16PM +0800, Coiby Xu wrote:
> On Sun, Jun 14, 2020 at 12:16:28PM -0700, no-reply@patchew.org wrote:
> > Patchew URL: https://patchew.org/QEMU/20200614183907.514282-1-coiby.xu@gmail.com/
> > 
> > 
> > 
> > Hi,
> > 
> > This series failed the asan build test. Please find the testing commands and
> > their output below. If you have Docker installed, you can probably reproduce it
> > locally.
> > 
> > === TEST SCRIPT BEGIN ===
> > #!/bin/bash
> > export ARCH=x86_64
> > make docker-image-fedora V=1 NETWORK=1
> > time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
> > === TEST SCRIPT END ===
> > 
> >  CC      stubs/vm-stop.o
> >  CC      ui/input-keymap.o
> >  CC      qemu-keymap.o
> > /tmp/qemu-test/src/util/vhost-user-server.c:142:30: error: use of undeclared identifier 'VHOST_MEMORY_MAX_NREGIONS'
> >                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
> >                             ^
> > 
> > The full log is available at
> > http://patchew.org/logs/20200614183907.514282-1-coiby.xu@gmail.com/testing.asan/?type=message.
> 
> I couldn't re-produce this error locally for both docker-test-quick@centos7
> and this docker test. And I can't see any reason for this error to occur since
> VHOST_MEMORY_MAX_NREGIONS is defined in contrib/libvhost-user/libvhost-user.h
> which has been included by util/vhost-user-server.h.

Please see the recent change in commit
b650d5f4b1cd3f9f8c4fdb319838c5c1e0695e41 ("Lift max ram slots limit in
libvhost-user").

The error can be solved by replacing VHOST_MEMORY_MAX_NREGIONS with
VHOST_MEMORY_BASELINE_NREGIONS in util/vhost-user-server.c.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-16  6:52   ` Coiby Xu
  2020-06-18  8:27     ` Stefan Hajnoczi
@ 2020-06-18  8:28     ` Stefan Hajnoczi
  2020-08-17  8:23       ` Coiby Xu
  1 sibling, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-18  8:28 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1541 bytes --]

On Tue, Jun 16, 2020 at 02:52:16PM +0800, Coiby Xu wrote:
> On Sun, Jun 14, 2020 at 12:16:28PM -0700, no-reply@patchew.org wrote:
> > Patchew URL: https://patchew.org/QEMU/20200614183907.514282-1-coiby.xu@gmail.com/
> > 
> > 
> > 
> > Hi,
> > 
> > This series failed the asan build test. Please find the testing commands and
> > their output below. If you have Docker installed, you can probably reproduce it
> > locally.
> > 
> > === TEST SCRIPT BEGIN ===
> > #!/bin/bash
> > export ARCH=x86_64
> > make docker-image-fedora V=1 NETWORK=1
> > time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
> > === TEST SCRIPT END ===
> > 
> >  CC      stubs/vm-stop.o
> >  CC      ui/input-keymap.o
> >  CC      qemu-keymap.o
> > /tmp/qemu-test/src/util/vhost-user-server.c:142:30: error: use of undeclared identifier 'VHOST_MEMORY_MAX_NREGIONS'
> >                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
> >                             ^
> > 
> > The full log is available at
> > http://patchew.org/logs/20200614183907.514282-1-coiby.xu@gmail.com/testing.asan/?type=message.
> 
> I couldn't re-produce this error locally for both docker-test-quick@centos7
> and this docker test. And I can't see any reason for this error to occur since
> VHOST_MEMORY_MAX_NREGIONS is defined in contrib/libvhost-user/libvhost-user.h
> which has been included by util/vhost-user-server.h.

Using G_N_ELEMENTS(vmsg->fds) instead of VHOST_MEMORY_MAX_NREGIONS is an
even cleaner fix.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 1/5] Allow vu_message_read to be replaced
  2020-06-14 18:39 ` [PATCH v9 1/5] Allow vu_message_read to be replaced Coiby Xu
@ 2020-06-18 10:43   ` Kevin Wolf
  2020-06-24  3:36     ` Coiby Xu
  0 siblings, 1 reply; 51+ messages in thread
From: Kevin Wolf @ 2020-06-18 10:43 UTC (permalink / raw)
  To: Coiby Xu; +Cc: bharatlkmlkvm, qemu-devel, stefanha, Dr. David Alan Gilbert

Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
> Allow vu_message_read to be replaced by one which will make use of the
> QIOChannel functions. Thus reading vhost-user message won't stall the
> guest.
> 
> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>

_vu_queue_notify() still has a direct call of vu_message_read() instead
of using the pointer. Is this intentional?

Renaming the function would make sure that such semantic merge conflicts
don't stay unnoticed.

> @@ -1704,6 +1702,7 @@ vu_deinit(VuDev *dev)
>          }
>  
>          if (vq->kick_fd != -1) {
> +            dev->remove_watch(dev, vq->kick_fd);
>              close(vq->kick_fd);
>              vq->kick_fd = -1;
>          }

This hunk looks unrelated.

Kevin



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 2/5] generic vhost user server
  2020-06-14 18:39 ` [PATCH v9 2/5] generic vhost user server Coiby Xu
@ 2020-06-18 13:29   ` Kevin Wolf
  2020-08-17  8:59     ` Coiby Xu
  2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
  2020-06-19 12:13   ` [PATCH v9 2/5] generic vhost user server Stefan Hajnoczi
  2 siblings, 1 reply; 51+ messages in thread
From: Kevin Wolf @ 2020-06-18 13:29 UTC (permalink / raw)
  To: Coiby Xu; +Cc: bharatlkmlkvm, qemu-devel, stefanha

Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
> Sharing QEMU devices via vhost-user protocol.
> 
> Only one vhost-user client can connect to the server one time.
> 
> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
> ---
>  util/Makefile.objs       |   1 +
>  util/vhost-user-server.c | 400 +++++++++++++++++++++++++++++++++++++++
>  util/vhost-user-server.h |  61 ++++++
>  3 files changed, 462 insertions(+)
>  create mode 100644 util/vhost-user-server.c
>  create mode 100644 util/vhost-user-server.h
> 
> diff --git a/util/Makefile.objs b/util/Makefile.objs
> index cc5e37177a..b4d4af06dc 100644
> --- a/util/Makefile.objs
> +++ b/util/Makefile.objs
> @@ -66,6 +66,7 @@ util-obj-y += hbitmap.o
>  util-obj-y += main-loop.o
>  util-obj-y += nvdimm-utils.o
>  util-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
> +util-obj-$(CONFIG_LINUX) += vhost-user-server.o
>  util-obj-y += qemu-coroutine-sleep.o
>  util-obj-y += qemu-co-shared-resource.o
>  util-obj-y += qemu-sockets.o
> diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
> new file mode 100644
> index 0000000000..393beeb6b9
> --- /dev/null
> +++ b/util/vhost-user-server.c
> @@ -0,0 +1,400 @@
> +/*
> + * Sharing QEMU devices via vhost-user protocol
> + *
> + * Author: Coiby Xu <coiby.xu@gmail.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later.  See the COPYING file in the top-level directory.
> + */
> +#include "qemu/osdep.h"
> +#include <sys/eventfd.h>
> +#include "qemu/main-loop.h"
> +#include "vhost-user-server.h"
> +
> +static void vmsg_close_fds(VhostUserMsg *vmsg)
> +{
> +    int i;
> +    for (i = 0; i < vmsg->fd_num; i++) {
> +        close(vmsg->fds[i]);
> +    }
> +}
> +
> +static void vmsg_unblock_fds(VhostUserMsg *vmsg)
> +{
> +    int i;
> +    for (i = 0; i < vmsg->fd_num; i++) {
> +        qemu_set_nonblock(vmsg->fds[i]);
> +    }
> +}
> +
> +static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
> +                      gpointer opaque);
> +
> +static void close_client(VuServer *server)
> +{
> +    vu_deinit(&server->vu_dev);
> +    object_unref(OBJECT(server->sioc));
> +    object_unref(OBJECT(server->ioc));
> +    server->sioc_slave = NULL;

Where is sioc_slave closed/freed?

> +    object_unref(OBJECT(server->ioc_slave));
> +    /*
> +     * Set the callback function for network listener so another
> +     * vhost-user client can connect to this server
> +     */
> +    qio_net_listener_set_client_func(server->listener,
> +                                     vu_accept,
> +                                     server,
> +                                     NULL);

If connecting another client to the server should work, don't we have to
set at least server->sioc = NULL so that vu_accept() won't error out?

> +}
> +
> +static void panic_cb(VuDev *vu_dev, const char *buf)
> +{
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +
> +    if (buf) {
> +        error_report("vu_panic: %s", buf);
> +    }
> +
> +    if (server->sioc) {
> +        close_client(server);
> +        server->sioc = NULL;
> +    }
> +
> +    if (server->device_panic_notifier) {
> +        server->device_panic_notifier(server);
> +    }
> +}
> +
> +static QIOChannel *slave_io_channel(VuServer *server, int fd,
> +                                    Error **local_err)
> +{
> +    if (server->sioc_slave) {
> +        if (fd == server->sioc_slave->fd) {
> +            return server->ioc_slave;
> +        }
> +    } else {
> +        server->sioc_slave = qio_channel_socket_new_fd(fd, local_err);
> +        if (!*local_err) {
> +            server->ioc_slave = QIO_CHANNEL(server->sioc_slave);
> +            return server->ioc_slave;
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +static bool coroutine_fn
> +vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
> +{
> +    struct iovec iov = {
> +        .iov_base = (char *)vmsg,
> +        .iov_len = VHOST_USER_HDR_SIZE,
> +    };
> +    int rc, read_bytes = 0;
> +    Error *local_err = NULL;
> +    /*
> +     * Store fds/nfds returned from qio_channel_readv_full into
> +     * temporary variables.
> +     *
> +     * VhostUserMsg is a packed structure, gcc will complain about passing
> +     * pointer to a packed structure member if we pass &VhostUserMsg.fd_num
> +     * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
> +     * thus two temporary variables nfds and fds are used here.
> +     */
> +    size_t nfds = 0, nfds_t = 0;
> +    int *fds_t = NULL;
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +    QIOChannel *ioc = NULL;
> +
> +    if (conn_fd == server->sioc->fd) {
> +        ioc = server->ioc;
> +    } else {
> +        /* Slave communication will also use this function to read msg */
> +        ioc = slave_io_channel(server, conn_fd, &local_err);
> +    }
> +
> +    if (!ioc) {
> +        error_report_err(local_err);
> +        goto fail;
> +    }
> +
> +    assert(qemu_in_coroutine());
> +    do {
> +        /*
> +         * qio_channel_readv_full may have short reads, keeping calling it
> +         * until getting VHOST_USER_HDR_SIZE or 0 bytes in total
> +         */
> +        rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
> +        if (rc < 0) {
> +            if (rc == QIO_CHANNEL_ERR_BLOCK) {
> +                qio_channel_yield(ioc, G_IO_IN);
> +                continue;
> +            } else {
> +                error_report_err(local_err);
> +                return false;
> +            }
> +        }
> +        read_bytes += rc;
> +        if (nfds_t > 0) {
> +            if (nfds + nfds_t > G_N_ELEMENTS(vmsg->fds)) {
> +                error_report("A maximum of %d fds are allowed, "
> +                             "however got %lu fds now",
> +                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
> +                goto fail;
> +            }
> +            memcpy(vmsg->fds + nfds, fds_t,
> +                   nfds_t *sizeof(vmsg->fds[0]));
> +            nfds += nfds_t;
> +            g_free(fds_t);
> +        }
> +        if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
> +            break;
> +        }
> +        iov.iov_base = (char *)vmsg + read_bytes;
> +        iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
> +    } while (true);
> +
> +    vmsg->fd_num = nfds;
> +    /* qio_channel_readv_full will make socket fds blocking, unblock them */
> +    vmsg_unblock_fds(vmsg);
> +    if (vmsg->size > sizeof(vmsg->payload)) {
> +        error_report("Error: too big message request: %d, "
> +                     "size: vmsg->size: %u, "
> +                     "while sizeof(vmsg->payload) = %zu",
> +                     vmsg->request, vmsg->size, sizeof(vmsg->payload));
> +        goto fail;
> +    }
> +
> +    struct iovec iov_payload = {
> +        .iov_base = (char *)&vmsg->payload,
> +        .iov_len = vmsg->size,
> +    };
> +    if (vmsg->size) {
> +        rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
> +        if (rc == -1) {
> +            error_report_err(local_err);
> +            goto fail;
> +        }
> +    }
> +
> +    return true;
> +
> +fail:
> +    vmsg_close_fds(vmsg);
> +
> +    return false;
> +}
> +
> +
> +static void vu_client_start(VuServer *server);
> +static coroutine_fn void vu_client_trip(void *opaque)
> +{
> +    VuServer *server = opaque;
> +
> +    while (!server->aio_context_changed && server->sioc) {
> +        vu_dispatch(&server->vu_dev);
> +    }
> +
> +    if (server->aio_context_changed && server->sioc) {
> +        server->aio_context_changed = false;
> +        vu_client_start(server);
> +    }
> +}

This is somewhat convoluted, but ok. As soon as my patch "util/async:
Add aio_co_reschedule_self()" is merged, we can use it to simplify this
a bit.

> +static void vu_client_start(VuServer *server)
> +{
> +    server->co_trip = qemu_coroutine_create(vu_client_trip, server);
> +    aio_co_enter(server->ctx, server->co_trip);
> +}
> +
> +/*
> + * a wrapper for vu_kick_cb
> + *
> + * since aio_dispatch can only pass one user data pointer to the
> + * callback function, pack VuDev and pvt into a struct. Then unpack it
> + * and pass them to vu_kick_cb
> + */
> +static void kick_handler(void *opaque)
> +{
> +    KickInfo *kick_info = opaque;
> +    kick_info->cb(kick_info->vu_dev, 0, (void *) kick_info->index);
> +}
> +
> +
> +static void
> +set_watch(VuDev *vu_dev, int fd, int vu_evt,
> +          vu_watch_cb cb, void *pvt)
> +{
> +
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +    g_assert(vu_dev);
> +    g_assert(fd >= 0);
> +    long index = (intptr_t) pvt;
> +    g_assert(cb);
> +    KickInfo *kick_info = &server->kick_info[index];
> +    if (!kick_info->cb) {
> +        kick_info->fd = fd;
> +        kick_info->cb = cb;
> +        qemu_set_nonblock(fd);
> +        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
> +                           NULL, NULL, kick_info);
> +        kick_info->vu_dev = vu_dev;
> +    }
> +}
> +
> +
> +static void remove_watch(VuDev *vu_dev, int fd)
> +{
> +    VuServer *server;
> +    int i;
> +    int index = -1;
> +    g_assert(vu_dev);
> +    g_assert(fd >= 0);
> +
> +    server = container_of(vu_dev, VuServer, vu_dev);
> +    for (i = 0; i < vu_dev->max_queues; i++) {
> +        if (server->kick_info[i].fd == fd) {
> +            index = i;
> +            break;
> +        }
> +    }
> +
> +    if (index == -1) {
> +        return;
> +    }
> +    server->kick_info[i].cb = NULL;
> +    aio_set_fd_handler(server->ioc->ctx, fd, false, NULL, NULL, NULL, NULL);
> +}
> +
> +
> +static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
> +                      gpointer opaque)
> +{
> +    VuServer *server = opaque;
> +
> +    if (server->sioc) {
> +        warn_report("Only one vhost-user client is allowed to "
> +                    "connect the server one time");
> +        return;
> +    }
> +
> +    if (!vu_init(&server->vu_dev, server->max_queues, sioc->fd, panic_cb,
> +                 vu_message_read, set_watch, remove_watch, server->vu_iface)) {
> +        error_report("Failed to initialized libvhost-user");
> +        return;
> +    }
> +
> +    /*
> +     * Unset the callback function for network listener to make another
> +     * vhost-user client keeping waiting until this client disconnects
> +     */
> +    qio_net_listener_set_client_func(server->listener,
> +                                     NULL,
> +                                     NULL,
> +                                     NULL);
> +    server->sioc = sioc;
> +    server->kick_info = g_new0(KickInfo, server->max_queues);
> +    /*
> +     * Increase the object reference, so sioc will not freed by
> +     * qio_net_listener_channel_func which will call object_unref(OBJECT(sioc))
> +     */
> +    object_ref(OBJECT(server->sioc));
> +    qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
> +    server->ioc = QIO_CHANNEL(sioc);
> +    object_ref(OBJECT(server->ioc));
> +    qio_channel_attach_aio_context(server->ioc, server->ctx);
> +    qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
> +    vu_client_start(server);
> +}
> +
> +
> +void vhost_user_server_stop(VuServer *server)
> +{
> +    if (!server) {
> +        return;
> +    }

There is no reason why the caller should even pass NULL.

> +    if (server->sioc) {
> +        close_client(server);
> +        object_unref(OBJECT(server->sioc));

close_client() already unrefs it. Do we really hold two references? If
so, why?

I can see that vu_accept() takes an extra reference, but the comment
there says this is because QIOChannel takes ownership.

> +    }
> +
> +    if (server->listener) {
> +        qio_net_listener_disconnect(server->listener);
> +        object_unref(OBJECT(server->listener));
> +    }
> +
> +    g_free(server->kick_info);

Don't we need to wait for co_trip to terminate somewhere? Probably
before freeing any objects because it could still use them.

I assume vhost_user_server_stop() is always called from the main thread
whereas co_trip runs in the server AioContext, so extra care is
necessary.

> +}
> +
> +static void detach_context(VuServer *server)
> +{
> +    int i;
> +    AioContext *ctx = server->ioc->ctx;
> +    qio_channel_detach_aio_context(server->ioc);
> +    for (i = 0; i < server->vu_dev.max_queues; i++) {
> +        if (server->kick_info[i].cb) {
> +            aio_set_fd_handler(ctx, server->kick_info[i].fd, false, NULL,
> +                               NULL, NULL, NULL);
> +        }
> +    }
> +}
> +
> +static void attach_context(VuServer *server, AioContext *ctx)
> +{
> +    int i;
> +    qio_channel_attach_aio_context(server->ioc, ctx);
> +    server->aio_context_changed = true;
> +    if (server->co_trip) {
> +        aio_co_schedule(ctx, server->co_trip);
> +    }
> +    for (i = 0; i < server->vu_dev.max_queues; i++) {
> +        if (server->kick_info[i].cb) {
> +            aio_set_fd_handler(ctx, server->kick_info[i].fd, false,
> +                               kick_handler, NULL, NULL,
> +                               &server->kick_info[i]);
> +        }
> +    }
> +}

There is a lot of duplication between detach_context() and
attach_context(). I think implementing this directly in
vhost_user_server_set_aio_context() for both cases at once would result
in simpler code.

> +void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server)
> +{
> +    server->ctx = ctx ? ctx : qemu_get_aio_context();
> +    if (!server->sioc) {
> +        return;
> +    }
> +    if (ctx) {
> +        attach_context(server, ctx);
> +    } else {
> +        detach_context(server);
> +    }
> +}

What happens if the VuServer is already attached to an AioContext and
you change it to another AioContext? Shouldn't it be detached from the
old context and attached to the new one instead of only doing the
latter?

> +
> +bool vhost_user_server_start(VuServer *server,
> +                             SocketAddress *socket_addr,
> +                             AioContext *ctx,
> +                             uint16_t max_queues,
> +                             DevicePanicNotifierFn *device_panic_notifier,
> +                             const VuDevIface *vu_iface,
> +                             Error **errp)
> +{

I think this is the function that is supposed to initialise the VuServer
object, so would it be better to first zero it out completely?

Or alternatively assign it completely like this (which automatically
zeroes any unspecified field):

    *server = (VuServer) {
        .vu_iface       = vu_iface,
        .max_queues     = max_queues,
        ...
    }

> +    server->listener = qio_net_listener_new();
> +    if (qio_net_listener_open_sync(server->listener, socket_addr, 1,
> +                                   errp) < 0) {
> +        return false;
> +    }
> +
> +    qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
> +
> +    server->vu_iface = vu_iface;
> +    server->max_queues = max_queues;
> +    server->ctx = ctx;
> +    server->device_panic_notifier = device_panic_notifier;
> +    qio_net_listener_set_client_func(server->listener,
> +                                     vu_accept,
> +                                     server,
> +                                     NULL);
> +
> +    return true;
> +}
> diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
> new file mode 100644
> index 0000000000..5baf58f96a
> --- /dev/null
> +++ b/util/vhost-user-server.h
> @@ -0,0 +1,61 @@
> +/*
> + * Sharing QEMU devices via vhost-user protocol
> + *
> + * Author: Coiby Xu <coiby.xu@gmail.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later.  See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef VHOST_USER_SERVER_H
> +#define VHOST_USER_SERVER_H
> +
> +#include "contrib/libvhost-user/libvhost-user.h"
> +#include "io/channel-socket.h"
> +#include "io/channel-file.h"
> +#include "io/net-listener.h"
> +#include "qemu/error-report.h"
> +#include "qapi/error.h"
> +#include "standard-headers/linux/virtio_blk.h"
> +
> +typedef struct KickInfo {
> +    VuDev *vu_dev;
> +    int fd; /*kick fd*/
> +    long index; /*queue index*/
> +    vu_watch_cb cb;
> +} KickInfo;
> +
> +typedef struct VuServer {
> +    QIONetListener *listener;
> +    AioContext *ctx;
> +    void (*device_panic_notifier)(struct VuServer *server) ;

Extra space before the semicolon.

> +    int max_queues;
> +    const VuDevIface *vu_iface;
> +    VuDev vu_dev;
> +    QIOChannel *ioc; /* The I/O channel with the client */
> +    QIOChannelSocket *sioc; /* The underlying data channel with the client */
> +    /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
> +    QIOChannel *ioc_slave;
> +    QIOChannelSocket *sioc_slave;
> +    Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
> +    KickInfo *kick_info; /* an array with the length of the queue number */

"an array with @max_queues elements"?

> +    /* restart coroutine co_trip if AIOContext is changed */
> +    bool aio_context_changed;
> +} VuServer;
> +
> +
> +typedef void DevicePanicNotifierFn(struct VuServer *server);
> +
> +bool vhost_user_server_start(VuServer *server,
> +                             SocketAddress *unix_socket,
> +                             AioContext *ctx,
> +                             uint16_t max_queues,
> +                             DevicePanicNotifierFn *device_panic_notifier,
> +                             const VuDevIface *vu_iface,
> +                             Error **errp);
> +
> +void vhost_user_server_stop(VuServer *server);
> +
> +void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server);
> +
> +#endif /* VHOST_USER_SERVER_H */

Kevin



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 3/5] move logical block size check function to a common utility function
  2020-06-14 18:39 ` [PATCH v9 3/5] move logical block size check function to a common utility function Coiby Xu
@ 2020-06-18 13:44   ` Kevin Wolf
  2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
  1 sibling, 0 replies; 51+ messages in thread
From: Kevin Wolf @ 2020-06-18 13:44 UTC (permalink / raw)
  To: Coiby Xu
  Cc: Daniel P. Berrangé,
	Eduardo Habkost, qemu-devel, bharatlkmlkvm, stefanha,
	Paolo Bonzini

Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
> Move logical block size check function in hw/core/qdev-properties.c:set_blocksize() to util/block-helpers.c
> 
> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>

Just a heads-up that you'll probably need to rebase this after the pull
request I sent yesterday is merged because it changes the block size
properties.

Kevin



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server
  2020-06-14 18:39 ` [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server Coiby Xu
@ 2020-06-18 15:17   ` Stefan Hajnoczi
  2020-06-24  4:35     ` Coiby Xu
  2020-06-24 15:14   ` Thomas Huth
  1 sibling, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-18 15:17 UTC (permalink / raw)
  To: Coiby Xu
  Cc: kwolf, Laurent Vivier, Thomas Huth, qemu-devel, bharatlkmlkvm,
	Paolo Bonzini

[-- Attachment #1: Type: text/plain, Size: 1867 bytes --]

On Mon, Jun 15, 2020 at 02:39:07AM +0800, Coiby Xu wrote:
> This test case has the same tests as tests/virtio-blk-test.c except for
> tests have block_resize. Since vhost-user server can only server one
> client one time, two instances of qemu-storage-daemon are launched
> for the hotplug test.
> 
> In order to not block scripts/tap-driver.pl, vhost-user-blk-server will
> send "quit" command to qemu-storage-daemon's QMP monitor. So a function
> is added to libqtest.c to establish socket connection with socket
> server.
> 
> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
> ---
>  tests/Makefile.include              |   3 +-
>  tests/qtest/Makefile.include        |   2 +
>  tests/qtest/libqos/vhost-user-blk.c | 130 +++++
>  tests/qtest/libqos/vhost-user-blk.h |  48 ++
>  tests/qtest/libqtest.c              |  35 +-
>  tests/qtest/libqtest.h              |  17 +
>  tests/qtest/vhost-user-blk-test.c   | 739 ++++++++++++++++++++++++++++
>  7 files changed, 971 insertions(+), 3 deletions(-)
>  create mode 100644 tests/qtest/libqos/vhost-user-blk.c
>  create mode 100644 tests/qtest/libqos/vhost-user-blk.h
>  create mode 100644 tests/qtest/vhost-user-blk-test.c

This test case fails for me:

qemu-system-x86_64: Failed to read from slave.
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Success (0)
qemu-system-x86_64: Failed to read from slave.
qemu-system-x86_64: Failed to read from slave.
qemu-system-x86_64: Failed to read from slave.
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Success (0)
qemu-system-x86_64: Failed to read msg header. Read -1 instead of 12. Original request 11.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Input/output error (5)

Does "make -j4 check" pass for you?

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 4/5] vhost-user block device backend server
  2020-06-14 18:39 ` [PATCH v9 4/5] vhost-user block device backend server Coiby Xu
@ 2020-06-18 15:57   ` Kevin Wolf
  2020-08-17 12:30     ` Coiby Xu
  2020-06-19 12:03   ` [PATCH 1/2] vhost-user-blk-server: adjust vhost_user_server_set_aio_context() arguments Stefan Hajnoczi
  1 sibling, 1 reply; 51+ messages in thread
From: Kevin Wolf @ 2020-06-18 15:57 UTC (permalink / raw)
  To: Coiby Xu
  Cc: open list:Block layer core, qemu-devel, Max Reitz, bharatlkmlkvm,
	stefanha, Paolo Bonzini

Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
> By making use of libvhost-user, block device drive can be shared to
> the connected vhost-user client. Only one client can connect to the
> server one time.
> 
> Since vhost-user-server needs a block drive to be created first, delay
> the creation of this object.
> 
> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
> ---
>  block/Makefile.objs                  |   1 +
>  block/export/vhost-user-blk-server.c | 669 +++++++++++++++++++++++++++
>  block/export/vhost-user-blk-server.h |  35 ++
>  softmmu/vl.c                         |   4 +
>  4 files changed, 709 insertions(+)
>  create mode 100644 block/export/vhost-user-blk-server.c
>  create mode 100644 block/export/vhost-user-blk-server.h
> 
> diff --git a/block/Makefile.objs b/block/Makefile.objs
> index 3635b6b4c1..0eb7eff470 100644
> --- a/block/Makefile.objs
> +++ b/block/Makefile.objs
> @@ -24,6 +24,7 @@ block-obj-y += throttle-groups.o
>  block-obj-$(CONFIG_LINUX) += nvme.o
>  
>  block-obj-y += nbd.o
> +block-obj-$(CONFIG_LINUX) += export/vhost-user-blk-server.o ../contrib/libvhost-user/libvhost-user.o
>  block-obj-$(CONFIG_SHEEPDOG) += sheepdog.o
>  block-obj-$(CONFIG_LIBISCSI) += iscsi.o
>  block-obj-$(if $(CONFIG_LIBISCSI),y,n) += iscsi-opts.o
> diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> new file mode 100644
> index 0000000000..bbf2ceaa9b
> --- /dev/null
> +++ b/block/export/vhost-user-blk-server.c
> @@ -0,0 +1,669 @@
> +/*
> + * Sharing QEMU block devices via vhost-user protocal
> + *
> + * Author: Coiby Xu <coiby.xu@gmail.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later.  See the COPYING file in the top-level directory.
> + */
> +#include "qemu/osdep.h"
> +#include "block/block.h"
> +#include "vhost-user-blk-server.h"
> +#include "qapi/error.h"
> +#include "qom/object_interfaces.h"
> +#include "sysemu/block-backend.h"
> +#include "util/block-helpers.h"
> +
> +enum {
> +    VHOST_USER_BLK_MAX_QUEUES = 1,
> +};
> +struct virtio_blk_inhdr {
> +    unsigned char status;
> +};
> +
> +
> +typedef struct VuBlockReq {
> +    VuVirtqElement *elem;
> +    int64_t sector_num;
> +    size_t size;
> +    struct virtio_blk_inhdr *in;
> +    struct virtio_blk_outhdr out;
> +    VuServer *server;
> +    struct VuVirtq *vq;
> +} VuBlockReq;
> +
> +
> +static void vu_block_req_complete(VuBlockReq *req)
> +{
> +    VuDev *vu_dev = &req->server->vu_dev;
> +
> +    /* IO size with 1 extra status byte */
> +    vu_queue_push(vu_dev, req->vq, req->elem, req->size + 1);
> +    vu_queue_notify(vu_dev, req->vq);
> +
> +    if (req->elem) {
> +        free(req->elem);
> +    }
> +
> +    g_free(req);
> +}
> +
> +static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
> +{
> +    return container_of(server, VuBlockDev, vu_server);
> +}
> +
> +static int coroutine_fn
> +vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
> +                              uint32_t iovcnt, uint32_t type)
> +{
> +    struct virtio_blk_discard_write_zeroes desc;
> +    ssize_t size = iov_to_buf(iov, iovcnt, 0, &desc, sizeof(desc));
> +    if (unlikely(size != sizeof(desc))) {
> +        error_report("Invalid size %ld, expect %ld", size, sizeof(desc));
> +        return -EINVAL;
> +    }
> +
> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
> +    uint64_t range[2] = { le64_to_cpu(desc.sector) << 9,
> +                          le32_to_cpu(desc.num_sectors) << 9 };
> +    if (type == VIRTIO_BLK_T_DISCARD) {
> +        if (blk_co_pdiscard(vdev_blk->backend, range[0], range[1]) == 0) {
> +            return 0;
> +        }
> +    } else if (type == VIRTIO_BLK_T_WRITE_ZEROES) {
> +        if (blk_co_pwrite_zeroes(vdev_blk->backend,
> +                                 range[0], range[1], 0) == 0) {
> +            return 0;
> +        }
> +    }
> +
> +    return -EINVAL;
> +}
> +
> +
> +static void coroutine_fn vu_block_flush(VuBlockReq *req)
> +{
> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
> +    BlockBackend *backend = vdev_blk->backend;
> +    blk_co_flush(backend);
> +}
> +
> +
> +struct req_data {
> +    VuServer *server;
> +    VuVirtq *vq;
> +    VuVirtqElement *elem;
> +};
> +
> +static void coroutine_fn vu_block_virtio_process_req(void *opaque)
> +{
> +    struct req_data *data = opaque;
> +    VuServer *server = data->server;
> +    VuVirtq *vq = data->vq;
> +    VuVirtqElement *elem = data->elem;
> +    uint32_t type;
> +    VuBlockReq *req;
> +
> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
> +    BlockBackend *backend = vdev_blk->backend;
> +
> +    struct iovec *in_iov = elem->in_sg;
> +    struct iovec *out_iov = elem->out_sg;
> +    unsigned in_num = elem->in_num;
> +    unsigned out_num = elem->out_num;
> +    /* refer to hw/block/virtio_blk.c */
> +    if (elem->out_num < 1 || elem->in_num < 1) {
> +        error_report("virtio-blk request missing headers");
> +        free(elem);
> +        return;
> +    }
> +
> +    req = g_new0(VuBlockReq, 1);
> +    req->server = server;
> +    req->vq = vq;
> +    req->elem = elem;
> +
> +    if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
> +                            sizeof(req->out)) != sizeof(req->out))) {
> +        error_report("virtio-blk request outhdr too short");
> +        goto err;
> +    }
> +
> +    iov_discard_front(&out_iov, &out_num, sizeof(req->out));
> +
> +    if (in_iov[in_num - 1].iov_len < sizeof(struct virtio_blk_inhdr)) {
> +        error_report("virtio-blk request inhdr too short");
> +        goto err;
> +    }
> +
> +    /* We always touch the last byte, so just see how big in_iov is.  */
> +    req->in = (void *)in_iov[in_num - 1].iov_base
> +              + in_iov[in_num - 1].iov_len
> +              - sizeof(struct virtio_blk_inhdr);
> +    iov_discard_back(in_iov, &in_num, sizeof(struct virtio_blk_inhdr));
> +
> +
> +    type = le32_to_cpu(req->out.type);
> +    switch (type & ~VIRTIO_BLK_T_BARRIER) {
> +    case VIRTIO_BLK_T_IN:
> +    case VIRTIO_BLK_T_OUT: {
> +        ssize_t ret = 0;
> +        bool is_write = type & VIRTIO_BLK_T_OUT;
> +        req->sector_num = le64_to_cpu(req->out.sector);
> +
> +        int64_t offset = req->sector_num * vdev_blk->blk_size;
> +        QEMUIOVector qiov;
> +        if (is_write) {
> +            qemu_iovec_init_external(&qiov, out_iov, out_num);
> +            ret = blk_co_pwritev(backend, offset, qiov.size,
> +                                 &qiov, 0);
> +        } else {
> +            qemu_iovec_init_external(&qiov, in_iov, in_num);
> +            ret = blk_co_preadv(backend, offset, qiov.size,
> +                                &qiov, 0);
> +        }
> +        if (ret >= 0) {
> +            req->in->status = VIRTIO_BLK_S_OK;
> +        } else {
> +            req->in->status = VIRTIO_BLK_S_IOERR;
> +        }
> +        break;
> +    }
> +    case VIRTIO_BLK_T_FLUSH:
> +        vu_block_flush(req);
> +        req->in->status = VIRTIO_BLK_S_OK;
> +        break;
> +    case VIRTIO_BLK_T_GET_ID: {
> +        size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
> +                          VIRTIO_BLK_ID_BYTES);
> +        snprintf(elem->in_sg[0].iov_base, size, "%s", "vhost_user_blk_server");
> +        req->in->status = VIRTIO_BLK_S_OK;
> +        req->size = elem->in_sg[0].iov_len;
> +        break;
> +    }
> +    case VIRTIO_BLK_T_DISCARD:
> +    case VIRTIO_BLK_T_WRITE_ZEROES: {
> +        int rc;
> +        rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
> +                                           out_num, type);
> +        if (rc == 0) {
> +            req->in->status = VIRTIO_BLK_S_OK;
> +        } else {
> +            req->in->status = VIRTIO_BLK_S_IOERR;
> +        }
> +        break;
> +    }
> +    default:
> +        req->in->status = VIRTIO_BLK_S_UNSUPP;
> +        break;
> +    }
> +
> +    vu_block_req_complete(req);
> +    return;
> +
> +err:
> +    free(elem);
> +    g_free(req);
> +    return;
> +}
> +
> +
> +
> +static void vu_block_process_vq(VuDev *vu_dev, int idx)
> +{
> +    VuServer *server;
> +    VuVirtq *vq;
> +
> +    server = container_of(vu_dev, VuServer, vu_dev);
> +    assert(server);
> +
> +    vq = vu_get_queue(vu_dev, idx);
> +    assert(vq);
> +    VuVirtqElement *elem;
> +    while (1) {
> +        elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
> +                                    sizeof(VuBlockReq));
> +        if (elem) {
> +            struct req_data req_data = {
> +                .server = server,
> +                .vq = vq,
> +                .elem = elem
> +            };

This is on the stack of the function.

> +            Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
> +                                                  &req_data);
> +            aio_co_enter(server->ioc->ctx, co);

Therefore, this code is only correct, if co accesses the data only while
the function has not returned yet.

This function is called in the context of vu_dispatch(), which in turn
is called from vu_client_trip(). So we already run in a coroutine. In
this case, aio_co_enter() only schedules co to run after the current
coroutine yields or terminates. In other words, this looks wrong to me
because req_data will be accessed when it's long out of scope.

I think we need to malloc it.

> +        } else {
> +            break;
> +        }
> +    }
> +}
> +
> +static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
> +{
> +    VuVirtq *vq;
> +
> +    assert(vu_dev);
> +
> +    vq = vu_get_queue(vu_dev, idx);
> +    vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
> +}
> +
> +static uint64_t vu_block_get_features(VuDev *dev)
> +{
> +    uint64_t features;
> +    VuServer *server = container_of(dev, VuServer, vu_dev);
> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
> +    features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
> +               1ull << VIRTIO_BLK_F_SEG_MAX |
> +               1ull << VIRTIO_BLK_F_TOPOLOGY |
> +               1ull << VIRTIO_BLK_F_BLK_SIZE |
> +               1ull << VIRTIO_BLK_F_FLUSH |
> +               1ull << VIRTIO_BLK_F_DISCARD |
> +               1ull << VIRTIO_BLK_F_WRITE_ZEROES |
> +               1ull << VIRTIO_BLK_F_CONFIG_WCE |
> +               1ull << VIRTIO_F_VERSION_1 |
> +               1ull << VIRTIO_RING_F_INDIRECT_DESC |
> +               1ull << VIRTIO_RING_F_EVENT_IDX |
> +               1ull << VHOST_USER_F_PROTOCOL_FEATURES;
> +
> +    if (!vdev_blk->writable) {
> +        features |= 1ull << VIRTIO_BLK_F_RO;
> +    }
> +
> +    return features;
> +}
> +
> +static uint64_t vu_block_get_protocol_features(VuDev *dev)
> +{
> +    return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
> +           1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
> +}
> +
> +static int
> +vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
> +{
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
> +    memcpy(config, &vdev_blk->blkcfg, len);
> +
> +    return 0;
> +}
> +
> +static int
> +vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
> +                    uint32_t offset, uint32_t size, uint32_t flags)
> +{
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
> +    uint8_t wce;
> +
> +    /* don't support live migration */
> +    if (flags != VHOST_SET_CONFIG_TYPE_MASTER) {
> +        return -EINVAL;
> +    }
> +
> +
> +    if (offset != offsetof(struct virtio_blk_config, wce) ||
> +        size != 1) {
> +        return -EINVAL;
> +    }
> +
> +    wce = *data;
> +    if (wce == vdev_blk->blkcfg.wce) {
> +        /* Do nothing as same with old configuration */
> +        return 0;
> +    }

This check is unnecessary. Nothing bad happens if you set the same value
again.

> +    vdev_blk->blkcfg.wce = wce;
> +    blk_set_enable_write_cache(vdev_blk->backend, wce);
> +    return 0;
> +}
> +
> +
> +/*
> + * When the client disconnects, it sends a VHOST_USER_NONE request
> + * and vu_process_message will simple call exit which cause the VM
> + * to exit abruptly.
> + * To avoid this issue,  process VHOST_USER_NONE request ahead
> + * of vu_process_message.
> + *
> + */
> +static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
> +{
> +    if (vmsg->request == VHOST_USER_NONE) {
> +        dev->panic(dev, "disconnect");
> +        return true;
> +    }
> +    return false;
> +}
> +
> +
> +static const VuDevIface vu_block_iface = {
> +    .get_features          = vu_block_get_features,
> +    .queue_set_started     = vu_block_queue_set_started,
> +    .get_protocol_features = vu_block_get_protocol_features,
> +    .get_config            = vu_block_get_config,
> +    .set_config            = vu_block_set_config,
> +    .process_msg           = vu_block_process_msg,
> +};
> +
> +static void blk_aio_attached(AioContext *ctx, void *opaque)
> +{
> +    VuBlockDev *vub_dev = opaque;
> +    aio_context_acquire(ctx);
> +    vhost_user_server_set_aio_context(ctx, &vub_dev->vu_server);
> +    aio_context_release(ctx);
> +}
> +
> +static void blk_aio_detach(void *opaque)
> +{
> +    VuBlockDev *vub_dev = opaque;
> +    AioContext *ctx = vub_dev->vu_server.ctx;
> +    aio_context_acquire(ctx);
> +    vhost_user_server_set_aio_context(NULL, &vub_dev->vu_server);
> +    aio_context_release(ctx);
> +}
> +
> +
> +static void
> +vu_block_initialize_config(BlockDriverState *bs,
> +                           struct virtio_blk_config *config, uint32_t blk_size)
> +{
> +    config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
> +    config->blk_size = blk_size;
> +    config->size_max = 0;
> +    config->seg_max = 128 - 2;
> +    config->min_io_size = 1;
> +    config->opt_io_size = 1;
> +    config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
> +    config->max_discard_sectors = 32768;
> +    config->max_discard_seg = 1;
> +    config->discard_sector_alignment = config->blk_size >> 9;
> +    config->max_write_zeroes_sectors = 32768;
> +    config->max_write_zeroes_seg = 1;
> +}
> +
> +
> +static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
> +{
> +
> +    BlockBackend *blk;
> +    Error *local_error = NULL;
> +    const char *node_name = vu_block_device->node_name;
> +    bool writable = vu_block_device->writable;
> +    /*
> +     * Don't allow resize while the vhost user server is running,
> +     * otherwise we don't care what happens with the node.
> +     */

I think this comment belong to the blk_new() below where the shared
permissions are specified.

> +    uint64_t perm = BLK_PERM_CONSISTENT_READ;
> +    int ret;
> +
> +    AioContext *ctx;
> +
> +    BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
> +
> +    if (!bs) {
> +        error_propagate(errp, local_error);
> +        return NULL;
> +    }
> +
> +    if (bdrv_is_read_only(bs)) {
> +        writable = false;
> +    }
> +
> +    if (writable) {
> +        perm |= BLK_PERM_WRITE;
> +    }
> +
> +    ctx = bdrv_get_aio_context(bs);
> +    aio_context_acquire(ctx);
> +    bdrv_invalidate_cache(bs, NULL);
> +    aio_context_release(ctx);
> +
> +    blk = blk_new(bdrv_get_aio_context(bs), perm,
> +                  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
> +                  BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
> +    ret = blk_insert_bs(blk, bs, errp);
> +
> +    if (ret < 0) {
> +        goto fail;
> +    }
> +
> +    blk_set_enable_write_cache(blk, false);
> +
> +    blk_set_allow_aio_context_change(blk, true);
> +
> +    vu_block_device->blkcfg.wce = 0;
> +    vu_block_device->backend = blk;
> +    if (!vu_block_device->blk_size) {
> +        vu_block_device->blk_size = BDRV_SECTOR_SIZE;
> +    }
> +    vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
> +    blk_set_guest_block_size(blk, vu_block_device->blk_size);
> +    vu_block_initialize_config(bs, &vu_block_device->blkcfg,
> +                                   vu_block_device->blk_size);
> +    return vu_block_device;
> +
> +fail:
> +    blk_unref(blk);
> +    return NULL;
> +}
> +
> +static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
> +{
> +    if (!vu_block_device) {
> +        return;
> +    }
> +
> +    vhost_user_server_stop(&vu_block_device->vu_server);
> +
> +    if (vu_block_device->backend) {
> +        blk_remove_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
> +                                        blk_aio_detach, vu_block_device);
> +    }
> +
> +    blk_unref(vu_block_device->backend);
> +
> +}
> +
> +
> +static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
> +                                        Error **errp)
> +{
> +    SocketAddress *addr = vu_block_device->addr;
> +
> +    if (!vu_block_init(vu_block_device, errp)) {
> +        return;
> +    }
> +
> +    AioContext *ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));

Please move declarations to the top of the function.

> +    if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
> +                                 VHOST_USER_BLK_MAX_QUEUES,
> +                                 NULL, &vu_block_iface,
> +                                 errp)) {
> +        goto error;
> +    }
> +
> +    blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
> +                                 blk_aio_detach, vu_block_device);
> +    vu_block_device->running = true;
> +    return;
> +
> + error:
> +    vhost_user_blk_server_stop(vu_block_device);

vu_block_device hasn't been fully set up. You need to undo only
vu_block_init(). You must not call vhost_user_server_stop().

> +}
> +
> +static bool vu_prop_modificable(VuBlockDev *vus, Error **errp)

The word is "modifiable".

> +{
> +    if (vus->running) {
> +            error_setg(errp, "The property can't be modified "
> +                    "while the server is running");
> +            return false;

The indentation is off here.

> +    }
> +    return true;
> +}
> +static void vu_set_node_name(Object *obj, const char *value, Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +
> +    if (vus->node_name) {
> +        if (!vu_prop_modificable(vus, errp)) {
> +            return;
> +        }

Why don't we need to check vu_prop_modificable() when the property isn't
set yet? I assume it's because the server can't even be started without
a node name, but it would be more obviously correct if the check were
done unconditionally.

> +        g_free(vus->node_name);
> +    }
> +
> +    vus->node_name = g_strdup(value);
> +}
> +
> +static char *vu_get_node_name(Object *obj, Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +    return g_strdup(vus->node_name);
> +}
> +
> +
> +static void vu_set_unix_socket(Object *obj, const char *value,
> +                               Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +
> +    if (vus->addr) {
> +        if (!vu_prop_modificable(vus, errp)) {
> +            return;
> +        }

Same here.

> +        g_free(vus->addr->u.q_unix.path);
> +        g_free(vus->addr);
> +    }
> +
> +    SocketAddress *addr = g_new0(SocketAddress, 1);
> +    addr->type = SOCKET_ADDRESS_TYPE_UNIX;
> +    addr->u.q_unix.path = g_strdup(value);
> +    vus->addr = addr;
> +}
> +
> +static char *vu_get_unix_socket(Object *obj, Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +    return g_strdup(vus->addr->u.q_unix.path);
> +}
> +
> +static bool vu_get_block_writable(Object *obj, Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +    return vus->writable;
> +}
> +
> +static void vu_set_block_writable(Object *obj, bool value, Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +
> +    if (!vu_prop_modificable(vus, errp)) {
> +            return;
> +    }
> +
> +    vus->writable = value;
> +}
> +
> +static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
> +                            void *opaque, Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +    uint32_t value = vus->blk_size;
> +
> +    visit_type_uint32(v, name, &value, errp);
> +}
> +
> +static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
> +                            void *opaque, Error **errp)
> +{
> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
> +
> +    Error *local_err = NULL;
> +    uint32_t value;
> +
> +    if (!vu_prop_modificable(vus, errp)) {
> +            return;
> +    }
> +
> +    visit_type_uint32(v, name, &value, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
> +    check_logical_block_size(object_get_typename(obj), name, value, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +
> +    vus->blk_size = value;
> +
> +out:
> +    error_propagate(errp, local_err);
> +    vus->blk_size = value;

Surely you don't want to set the value here, when some check failed?

> +}
> +
> +
> +static void vhost_user_blk_server_instance_finalize(Object *obj)
> +{
> +    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
> +
> +    vhost_user_blk_server_stop(vub);
> +}
> +
> +static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
> +{
> +    Error *local_error = NULL;
> +    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
> +
> +    vhost_user_blk_server_start(vub, &local_error);
> +
> +    if (local_error) {
> +        error_propagate(errp, local_error);
> +        return;
> +    }

If you don't do anything with local_error (which is named inconsistently
with local_err used above), you can just directly pass errp to
vhost_user_blk_server_start().

> +}
> +
> +static void vhost_user_blk_server_class_init(ObjectClass *klass,
> +                                             void *class_data)
> +{
> +    UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
> +    ucc->complete = vhost_user_blk_server_complete;
> +
> +    object_class_property_add_bool(klass, "writable",
> +                                   vu_get_block_writable,
> +                                   vu_set_block_writable);
> +
> +    object_class_property_add_str(klass, "node-name",
> +                                  vu_get_node_name,
> +                                  vu_set_node_name);
> +
> +    object_class_property_add_str(klass, "unix-socket",
> +                                  vu_get_unix_socket,
> +                                  vu_set_unix_socket);
> +
> +    object_class_property_add(klass, "logical-block-size", "uint32",
> +                              vu_get_blk_size, vu_set_blk_size,
> +                              NULL, NULL);
> +}
> +
> +static const TypeInfo vhost_user_blk_server_info = {
> +    .name = TYPE_VHOST_USER_BLK_SERVER,
> +    .parent = TYPE_OBJECT,
> +    .instance_size = sizeof(VuBlockDev),
> +    .instance_finalize = vhost_user_blk_server_instance_finalize,
> +    .class_init = vhost_user_blk_server_class_init,
> +    .interfaces = (InterfaceInfo[]) {
> +        {TYPE_USER_CREATABLE},
> +        {}
> +    },
> +};
> +
> +static void vhost_user_blk_server_register_types(void)
> +{
> +    type_register_static(&vhost_user_blk_server_info);
> +}
> +

Please remove the trailing empty line.

Compared to the last version that I reviewed, this seems to get the
architecture for concurrent requests right, which is an important
improvement. I feel we're getting quite close to mergable now.

Kevin



^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error
  2020-06-14 18:39 ` [PATCH v9 2/5] generic vhost user server Coiby Xu
  2020-06-18 13:29   ` Kevin Wolf
@ 2020-06-19 12:00   ` Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h> Stefan Hajnoczi
                       ` (4 more replies)
  2020-06-19 12:13   ` [PATCH v9 2/5] generic vhost user server Stefan Hajnoczi
  2 siblings, 5 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:00 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

Commit b650d5f4b1cd3f9f8c4fdb319838c5c1e0695e41 ("Lift max ram slots
limit in libvhost-user") renamed this constant. Use the array size
instead of hard-coding a particular constant in the error message.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 393beeb6b9..e94a8d8a83 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -137,9 +137,9 @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
         read_bytes += rc;
         if (nfds_t > 0) {
             if (nfds + nfds_t > G_N_ELEMENTS(vmsg->fds)) {
-                error_report("A maximum of %d fds are allowed, "
+                error_report("A maximum of %zu fds are allowed, "
                              "however got %lu fds now",
-                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
+                             G_N_ELEMENTS(vmsg->fds), nfds + nfds_t);
                 goto fail;
             }
             memcpy(vmsg->fds + nfds, fds_t,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h>
  2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
@ 2020-06-19 12:00     ` Stefan Hajnoczi
  2020-08-17 12:49       ` Coiby Xu
  2020-06-19 12:00     ` [PATCH 3/6] vhost-user-server: adjust vhost_user_server_set_aio_context() arguments Stefan Hajnoczi
                       ` (3 subsequent siblings)
  4 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:00 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index e94a8d8a83..49ada8bc78 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -7,7 +7,6 @@
  * later.  See the COPYING file in the top-level directory.
  */
 #include "qemu/osdep.h"
-#include <sys/eventfd.h>
 #include "qemu/main-loop.h"
 #include "vhost-user-server.h"
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 3/6] vhost-user-server: adjust vhost_user_server_set_aio_context() arguments
  2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h> Stefan Hajnoczi
@ 2020-06-19 12:00     ` Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 4/6] vhost-user-server: mark fd handlers "external" Stefan Hajnoczi
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:00 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

vhost_user_server_set_aio_context() operates on a VuServer object. Make
that the first argument of the function since it is conventional to
define functions with the object they act on as the first argument. In
other words, obj_action(obj, args...) is commonly used and not
obj_action(arg1, ..., obj, ...).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.h | 2 +-
 util/vhost-user-server.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
index 5baf58f96a..584aab3da5 100644
--- a/util/vhost-user-server.h
+++ b/util/vhost-user-server.h
@@ -56,6 +56,6 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server);
+void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx);
 
 #endif /* VHOST_USER_SERVER_H */
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 49ada8bc78..5230ba3883 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -356,7 +356,7 @@ static void attach_context(VuServer *server, AioContext *ctx)
     }
 }
 
-void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server)
+void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx)
 {
     server->ctx = ctx ? ctx : qemu_get_aio_context();
     if (!server->sioc) {
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 4/6] vhost-user-server: mark fd handlers "external"
  2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h> Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 3/6] vhost-user-server: adjust vhost_user_server_set_aio_context() arguments Stefan Hajnoczi
@ 2020-06-19 12:00     ` Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 5/6] vhost-user-server: fix s/initialized/initialize/ typo Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 6/6] vhost-user-server: use DevicePanicNotifierFn everywhere Stefan Hajnoczi
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:00 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

The event loop has the concept of "external" fd handlers that process
requests from outside clients such as the guest. External fd handlers
are disabled during critical sections where new requests are not
allowed.

The vhost-user-server seems like an "external" client to me and
therefore should mark its file descriptors "external".

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5230ba3883..a5785cbf86 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -235,7 +235,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         kick_info->fd = fd;
         kick_info->cb = cb;
         qemu_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
                            NULL, NULL, kick_info);
         kick_info->vu_dev = vu_dev;
     }
@@ -262,7 +262,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
         return;
     }
     server->kick_info[i].cb = NULL;
-    aio_set_fd_handler(server->ioc->ctx, fd, false, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(server->ioc->ctx, fd, true, NULL, NULL, NULL, NULL);
 }
 
 
@@ -333,7 +333,7 @@ static void detach_context(VuServer *server)
     qio_channel_detach_aio_context(server->ioc);
     for (i = 0; i < server->vu_dev.max_queues; i++) {
         if (server->kick_info[i].cb) {
-            aio_set_fd_handler(ctx, server->kick_info[i].fd, false, NULL,
+            aio_set_fd_handler(ctx, server->kick_info[i].fd, true, NULL,
                                NULL, NULL, NULL);
         }
     }
@@ -349,7 +349,7 @@ static void attach_context(VuServer *server, AioContext *ctx)
     }
     for (i = 0; i < server->vu_dev.max_queues; i++) {
         if (server->kick_info[i].cb) {
-            aio_set_fd_handler(ctx, server->kick_info[i].fd, false,
+            aio_set_fd_handler(ctx, server->kick_info[i].fd, true,
                                kick_handler, NULL, NULL,
                                &server->kick_info[i]);
         }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 5/6] vhost-user-server: fix s/initialized/initialize/ typo
  2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
                       ` (2 preceding siblings ...)
  2020-06-19 12:00     ` [PATCH 4/6] vhost-user-server: mark fd handlers "external" Stefan Hajnoczi
@ 2020-06-19 12:00     ` Stefan Hajnoczi
  2020-06-19 12:00     ` [PATCH 6/6] vhost-user-server: use DevicePanicNotifierFn everywhere Stefan Hajnoczi
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:00 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

The present tense is used here.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index a5785cbf86..42a51d190c 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -279,7 +279,7 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
 
     if (!vu_init(&server->vu_dev, server->max_queues, sioc->fd, panic_cb,
                  vu_message_read, set_watch, remove_watch, server->vu_iface)) {
-        error_report("Failed to initialized libvhost-user");
+        error_report("Failed to initialize libvhost-user");
         return;
     }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 6/6] vhost-user-server: use DevicePanicNotifierFn everywhere
  2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
                       ` (3 preceding siblings ...)
  2020-06-19 12:00     ` [PATCH 5/6] vhost-user-server: fix s/initialized/initialize/ typo Stefan Hajnoczi
@ 2020-06-19 12:00     ` Stefan Hajnoczi
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:00 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

Use the DevicePanicNotifierFn typedef instead of open-coding the
function pointer. Writing the code this way avoids duplicating the
function prototype.

Also use the VuServer typedef instead of struct VuServer as required by
QEMU's coding style.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
index 584aab3da5..37aca1e5aa 100644
--- a/util/vhost-user-server.h
+++ b/util/vhost-user-server.h
@@ -25,10 +25,13 @@ typedef struct KickInfo {
     vu_watch_cb cb;
 } KickInfo;
 
-typedef struct VuServer {
+typedef struct VuServer VuServer;
+typedef void DevicePanicNotifierFn(VuServer *server);
+
+struct VuServer {
     QIONetListener *listener;
     AioContext *ctx;
-    void (*device_panic_notifier)(struct VuServer *server) ;
+    DevicePanicNotifierFn *device_panic_notifier;
     int max_queues;
     const VuDevIface *vu_iface;
     VuDev vu_dev;
@@ -41,10 +44,7 @@ typedef struct VuServer {
     KickInfo *kick_info; /* an array with the length of the queue number */
     /* restart coroutine co_trip if AIOContext is changed */
     bool aio_context_changed;
-} VuServer;
-
-
-typedef void DevicePanicNotifierFn(struct VuServer *server);
+};
 
 bool vhost_user_server_start(VuServer *server,
                              SocketAddress *unix_socket,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file
  2020-06-14 18:39 ` [PATCH v9 3/5] move logical block size check function to a common utility function Coiby Xu
  2020-06-18 13:44   ` Kevin Wolf
@ 2020-06-19 12:01   ` Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 2/6] block-helpers: switch to int64_t block size values Stefan Hajnoczi
                       ` (4 more replies)
  1 sibling, 5 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:01 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

Move the constants from hw/core/qdev-properties.c to
util/block-helpers.h so that knowledge of the min/max values is
encapsulated in block-helpers code.

Callers should not assume specific min/max values. In fact, the values
in hw/core/qdev-properties.c and util/block-helpers.c did not match. Use
the hw/core/qdev-properties.c values since that's what existing code
expects.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/block-helpers.h      | 12 ++++++++++++
 hw/core/qdev-properties.c | 11 -----------
 util/block-helpers.c      |  7 ++-----
 3 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/util/block-helpers.h b/util/block-helpers.h
index f06be282a1..46975ca7af 100644
--- a/util/block-helpers.h
+++ b/util/block-helpers.h
@@ -1,6 +1,18 @@
 #ifndef BLOCK_HELPERS_H
 #define BLOCK_HELPERS_H
 
+#include "qemu/units.h"
+
+/* lower limit is sector size */
+#define MIN_BLOCK_SIZE          INT64_C(512)
+#define MIN_BLOCK_SIZE_STR      "512 B"
+/*
+ * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
+ * matches qcow2 cluster size limit
+ */
+#define MAX_BLOCK_SIZE          (2 * MiB)
+#define MAX_BLOCK_SIZE_STR      "2 MiB"
+
 void check_logical_block_size(const char *id, const char *name, uint16_t value,
                      Error **errp);
 
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index b478f100af..03981feb02 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -14,7 +14,6 @@
 #include "qapi/visitor.h"
 #include "chardev/char.h"
 #include "qemu/uuid.h"
-#include "qemu/units.h"
 #include "util/block-helpers.h"
 
 void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
@@ -782,16 +781,6 @@ const PropertyInfo qdev_prop_size32 = {
 
 /* --- blocksize --- */
 
-/* lower limit is sector size */
-#define MIN_BLOCK_SIZE          512
-#define MIN_BLOCK_SIZE_STR      "512 B"
-/*
- * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
- * matches qcow2 cluster size limit
- */
-#define MAX_BLOCK_SIZE          (2 * MiB)
-#define MAX_BLOCK_SIZE_STR      "2 MiB"
-
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
diff --git a/util/block-helpers.c b/util/block-helpers.c
index d31309cc0e..089fe3401d 100644
--- a/util/block-helpers.c
+++ b/util/block-helpers.c
@@ -25,13 +25,10 @@
 void check_logical_block_size(const char *id, const char *name, uint16_t value,
                      Error **errp)
 {
-    const int64_t min = 512;
-    const int64_t max = 32768;
-
     /* value of 0 means "unset" */
-    if (value && (value < min || value > max)) {
+    if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
         error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
-                   id, name, (int64_t)value, min, max);
+                   id, name, value, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
         return;
     }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 2/6] block-helpers: switch to int64_t block size values
  2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
@ 2020-06-19 12:01     ` Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 3/6] block-helpers: rename check_logical_block_size() to check_block_size() Stefan Hajnoczi
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:01 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

The uint16_t type is too small for MAX_BLOCK_SIZE (2 MiB). The int64_t
type is widely used in QEMU as a type for disk offsets and sizes, so
it's an appropriate type to use here. It will work for all callers.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/block-helpers.h | 2 +-
 util/block-helpers.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/util/block-helpers.h b/util/block-helpers.h
index 46975ca7af..ec6421560c 100644
--- a/util/block-helpers.h
+++ b/util/block-helpers.h
@@ -13,7 +13,7 @@
 #define MAX_BLOCK_SIZE          (2 * MiB)
 #define MAX_BLOCK_SIZE_STR      "2 MiB"
 
-void check_logical_block_size(const char *id, const char *name, uint16_t value,
+void check_logical_block_size(const char *id, const char *name, int64_t value,
                      Error **errp);
 
 #endif /* BLOCK_HELPERS_H */
diff --git a/util/block-helpers.c b/util/block-helpers.c
index 089fe3401d..9e68954c46 100644
--- a/util/block-helpers.c
+++ b/util/block-helpers.c
@@ -22,7 +22,7 @@
  *
  *  Moved from hw/core/qdev-properties.c:set_blocksize()
  */
-void check_logical_block_size(const char *id, const char *name, uint16_t value,
+void check_logical_block_size(const char *id, const char *name, int64_t value,
                      Error **errp)
 {
     /* value of 0 means "unset" */
@@ -37,7 +37,7 @@ void check_logical_block_size(const char *id, const char *name, uint16_t value,
         error_setg(errp,
                    "Property %s.%s doesn't take value '%" PRId64
                    "', it's not a power of 2",
-                   id, name, (int64_t)value);
+                   id, name, value);
         return;
     }
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 3/6] block-helpers: rename check_logical_block_size() to check_block_size()
  2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 2/6] block-helpers: switch to int64_t block size values Stefan Hajnoczi
@ 2020-06-19 12:01     ` Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 4/6] block-helpers: use local_err in case errp is NULL Stefan Hajnoczi
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:01 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

Block size checking is the same whether it's a physical, logical, or
other block size value. Use a more general name to show this function
can be used in other cases too (just like the qdev property that this
code originally comes from).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/block-helpers.h      | 4 ++--
 hw/core/qdev-properties.c | 2 +-
 util/block-helpers.c      | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/util/block-helpers.h b/util/block-helpers.h
index ec6421560c..b53295a529 100644
--- a/util/block-helpers.h
+++ b/util/block-helpers.h
@@ -13,7 +13,7 @@
 #define MAX_BLOCK_SIZE          (2 * MiB)
 #define MAX_BLOCK_SIZE_STR      "2 MiB"
 
-void check_logical_block_size(const char *id, const char *name, int64_t value,
-                     Error **errp);
+void check_block_size(const char *id, const char *name, int64_t value,
+                      Error **errp);
 
 #endif /* BLOCK_HELPERS_H */
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 03981feb02..28a6d8b2ee 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -800,7 +800,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         error_propagate(errp, local_err);
         return;
     }
-    check_logical_block_size(dev->id ? : "", name, value, errp);
+    check_block_size(dev->id ? : "", name, value, errp);
     if (errp) {
         return;
     }
diff --git a/util/block-helpers.c b/util/block-helpers.c
index 9e68954c46..51d9d02c43 100644
--- a/util/block-helpers.c
+++ b/util/block-helpers.c
@@ -22,8 +22,8 @@
  *
  *  Moved from hw/core/qdev-properties.c:set_blocksize()
  */
-void check_logical_block_size(const char *id, const char *name, int64_t value,
-                     Error **errp)
+void check_block_size(const char *id, const char *name, int64_t value,
+                      Error **errp)
 {
     /* value of 0 means "unset" */
     if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 4/6] block-helpers: use local_err in case errp is NULL
  2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 2/6] block-helpers: switch to int64_t block size values Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 3/6] block-helpers: rename check_logical_block_size() to check_block_size() Stefan Hajnoczi
@ 2020-06-19 12:01     ` Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 5/6] block-helpers: keep the copyright line from the original file Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 6/6] block-helpers: update doc comment in gtkdoc style Stefan Hajnoczi
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:01 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

If we use errp then we won't know when check_block_size() fails when
errp is NULL.

The purpose of local_err is to detect an error has occurred even if the
caller doesn't care about the specific error and has passed a NULL errp.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/core/qdev-properties.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 28a6d8b2ee..0a651c7d32 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -800,8 +800,9 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         error_propagate(errp, local_err);
         return;
     }
-    check_block_size(dev->id ? : "", name, value, errp);
-    if (errp) {
+    check_block_size(dev->id ? : "", name, value, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
         return;
     }
     *ptr = value;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 5/6] block-helpers: keep the copyright line from the original file
  2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
                       ` (2 preceding siblings ...)
  2020-06-19 12:01     ` [PATCH 4/6] block-helpers: use local_err in case errp is NULL Stefan Hajnoczi
@ 2020-06-19 12:01     ` Stefan Hajnoczi
  2020-06-19 12:01     ` [PATCH 6/6] block-helpers: update doc comment in gtkdoc style Stefan Hajnoczi
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:01 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

The check_block_size() code comes from hw/core/qdev-properties.c. Keep
the copyright.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/block-helpers.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/util/block-helpers.c b/util/block-helpers.c
index 51d9d02c43..9d12368032 100644
--- a/util/block-helpers.c
+++ b/util/block-helpers.c
@@ -1,6 +1,7 @@
 /*
  * Block utility functions
  *
+ * Copyright IBM, Corp. 2011
  * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
  *
  * This work is licensed under the terms of the GNU GPL, version 2 or later.
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 6/6] block-helpers: update doc comment in gtkdoc style
  2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
                       ` (3 preceding siblings ...)
  2020-06-19 12:01     ` [PATCH 5/6] block-helpers: keep the copyright line from the original file Stefan Hajnoczi
@ 2020-06-19 12:01     ` Stefan Hajnoczi
  4 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:01 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

QEMU uses the gtkdoc style for API doc comments. For examples, see
include/qom/object.h.

Fully document the function with up-to-date information (the min/max
values were outdated).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/block-helpers.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/util/block-helpers.c b/util/block-helpers.c
index 9d12368032..c4851432f5 100644
--- a/util/block-helpers.c
+++ b/util/block-helpers.c
@@ -13,15 +13,17 @@
 #include "qapi/qmp/qerror.h"
 #include "block-helpers.h"
 
-/*
- * Logical block size input validation
+/**
+ * check_block_size:
+ * @id: The unique ID of the object
+ * @name: The name of the property being validated
+ * @value: The block size in bytes
+ * @errp: A pointer to an area to store an error
  *
- * The size should meet the following conditions:
- * 1. min=512
- * 2. max=32768
- * 3. a power of 2
- *
- *  Moved from hw/core/qdev-properties.c:set_blocksize()
+ * This function checks that the block size meets the following conditions:
+ * 1. At least MIN_BLOCK_SIZE
+ * 2. No larger than MAX_BLOCK_SIZE
+ * 3. A power of 2
  */
 void check_block_size(const char *id, const char *name, int64_t value,
                       Error **errp)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 1/2] vhost-user-blk-server: adjust vhost_user_server_set_aio_context() arguments
  2020-06-14 18:39 ` [PATCH v9 4/5] vhost-user block device backend server Coiby Xu
  2020-06-18 15:57   ` Kevin Wolf
@ 2020-06-19 12:03   ` Stefan Hajnoczi
  2020-06-19 12:03     ` [PATCH 2/2] vhost-user-blk-server: rename check_logical_block_size() to check_block_size() Stefan Hajnoczi
  1 sibling, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:03 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

The function arguments were reordered in a previous patch. Use the new
ordering.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index bbf2ceaa9b..bed3c43737 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -352,7 +352,7 @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
 {
     VuBlockDev *vub_dev = opaque;
     aio_context_acquire(ctx);
-    vhost_user_server_set_aio_context(ctx, &vub_dev->vu_server);
+    vhost_user_server_set_aio_context(&vub_dev->vu_server, ctx);
     aio_context_release(ctx);
 }
 
@@ -361,7 +361,7 @@ static void blk_aio_detach(void *opaque)
     VuBlockDev *vub_dev = opaque;
     AioContext *ctx = vub_dev->vu_server.ctx;
     aio_context_acquire(ctx);
-    vhost_user_server_set_aio_context(NULL, &vub_dev->vu_server);
+    vhost_user_server_set_aio_context(&vub_dev->vu_server, NULL);
     aio_context_release(ctx);
 }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 2/2] vhost-user-blk-server: rename check_logical_block_size() to check_block_size()
  2020-06-19 12:03   ` [PATCH 1/2] vhost-user-blk-server: adjust vhost_user_server_set_aio_context() arguments Stefan Hajnoczi
@ 2020-06-19 12:03     ` Stefan Hajnoczi
  0 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:03 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, Stefan Hajnoczi

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index bed3c43737..f3fada5b37 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -593,7 +593,7 @@ static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
         goto out;
     }
 
-    check_logical_block_size(object_get_typename(obj), name, value, &local_err);
+    check_block_size(object_get_typename(obj), name, value, &local_err);
     if (local_err) {
         goto out;
     }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
                   ` (6 preceding siblings ...)
  2020-06-14 19:16 ` no-reply
@ 2020-06-19 12:07 ` Stefan Hajnoczi
  2020-06-24  4:48   ` Coiby Xu
  2020-06-25 12:46   ` Coiby Xu
  2020-08-18 15:13 ` Stefan Hajnoczi
  2020-09-15 15:35 ` Stefan Hajnoczi
  9 siblings, 2 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:07 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 802 bytes --]

On Mon, Jun 15, 2020 at 02:39:02AM +0800, Coiby Xu wrote:
> v9
>  - move logical block size check function to a utility function
>  - fix issues regarding license, coding style, memory deallocation, etc.

I have replied with patches that you can consider squashing into your
series. I was testing this patch series and decided it was easier to
send code than to go back and write review comments since I was already
on a git branch.

My patches can be combined into your original patches using "git rebase
-i" and the "fixup" or "squash" directive.

Please add my Signed-off-by: line to affected patches when squashing
patches so that the git log records that I have confirmed that I have
permission to contribute this code.

If you have questions about any of the patches, please let me know.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 2/5] generic vhost user server
  2020-06-14 18:39 ` [PATCH v9 2/5] generic vhost user server Coiby Xu
  2020-06-18 13:29   ` Kevin Wolf
  2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
@ 2020-06-19 12:13   ` Stefan Hajnoczi
  2020-08-17  8:24     ` Coiby Xu
  2 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-19 12:13 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1464 bytes --]

On Mon, Jun 15, 2020 at 02:39:04AM +0800, Coiby Xu wrote:
> +/*
> + * a wrapper for vu_kick_cb
> + *
> + * since aio_dispatch can only pass one user data pointer to the
> + * callback function, pack VuDev and pvt into a struct. Then unpack it
> + * and pass them to vu_kick_cb
> + */
> +static void kick_handler(void *opaque)
> +{
> +    KickInfo *kick_info = opaque;
> +    kick_info->cb(kick_info->vu_dev, 0, (void *) kick_info->index);

Where is kick_info->index assigned? It appears to be NULL in all cases.

> +}
> +
> +
> +static void
> +set_watch(VuDev *vu_dev, int fd, int vu_evt,
> +          vu_watch_cb cb, void *pvt)
> +{
> +
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +    g_assert(vu_dev);
> +    g_assert(fd >= 0);
> +    long index = (intptr_t) pvt;

The meaning of the pvt argument is not defined in the library interface.
set_watch() callbacks shouldn't interpret pvt.

You could modify libvhost-user to explicitly pass the virtqueue index
(or -1 if the fd is not associated with a virtqueue), but it's nice to
avoid libvhost-user API changes so that existing libvhost-user
applications don't require modifications.

What I would do here is to change the ->kick_info[] data struct. How
about a linked list of VuFdWatch objects? That way the code can handle
any number of fd watches and doesn't make assumptions about virtqueues.
set_watch() is a generic fd monitoring interface and doesn't need to be
tied to virtqueues.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 1/5] Allow vu_message_read to be replaced
  2020-06-18 10:43   ` Kevin Wolf
@ 2020-06-24  3:36     ` Coiby Xu
  2020-06-24 12:24       ` Kevin Wolf
  0 siblings, 1 reply; 51+ messages in thread
From: Coiby Xu @ 2020-06-24  3:36 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: bharatlkmlkvm, qemu-devel, stefanha, Dr. David Alan Gilbert

On Thu, Jun 18, 2020 at 12:43:47PM +0200, Kevin Wolf wrote:
>Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
>> Allow vu_message_read to be replaced by one which will make use of the
>> QIOChannel functions. Thus reading vhost-user message won't stall the
>> guest.
>>
>> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
>
>_vu_queue_notify() still has a direct call of vu_message_read() instead
>of using the pointer. Is this intentional?

This is a mistake. Thank you for reminding me!

>Renaming the function would make sure that such semantic merge conflicts
>don't stay unnoticed.

Thank you for this tip! Do you suggest renaming the function only for
the purpose of testing or should I adopt a name when submitting the
patch? For the latter case, I will change it to vu_message_read_default
then.

>> @@ -1704,6 +1702,7 @@ vu_deinit(VuDev *dev)
>>          }
>>
>>          if (vq->kick_fd != -1) {
>> +            dev->remove_watch(dev, vq->kick_fd);
>>              close(vq->kick_fd);
>>              vq->kick_fd = -1;
>>          }
>
>This hunk looks unrelated.

In v4, I made the comment to explain why it's needed. But libvhost-user
is supposed to be independent from QEMU, so Stefan suggested to remove it,

> > @@ -1627,6 +1647,12 @@ vu_deinit(VuDev *dev)
> >          }
> >
> >          if (vq->kick_fd != -1) {
> > +            /* remove watch for kick_fd
> > +             * When client process is running in gdb and
> > +             * quit command is run in gdb, QEMU will still dispatch the event
> > +             * which will cause segment fault in the callback function
> > +             */
>
> Code and comments in libvhost-user should not refer to QEMU specifics.
> Removing the watch is a good idea regardless of the application or event
> loop implementation.  No comment is needed here.
> +            /* remove watch for kick_fd
> +             * When client process is running in gdb and
> +             * quit command is run in gdb, QEMU will still dispatch the event
> +             * which will cause segment fault in the callback function
> +             */
> +            dev->remove_watch(dev, vq->kick_fd);

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-18  8:27     ` Stefan Hajnoczi
@ 2020-06-24  4:00       ` Coiby Xu
  0 siblings, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-24  4:00 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kwolf, bharatlkmlkvm, qemu-devel

On Thu, Jun 18, 2020 at 09:27:48AM +0100, Stefan Hajnoczi wrote:
>On Tue, Jun 16, 2020 at 02:52:16PM +0800, Coiby Xu wrote:
>> On Sun, Jun 14, 2020 at 12:16:28PM -0700, no-reply@patchew.org wrote:
>> > Patchew URL: https://patchew.org/QEMU/20200614183907.514282-1-coiby.xu@gmail.com/
>> >
>> >
>> >
>> > Hi,
>> >
>> > This series failed the asan build test. Please find the testing commands and
>> > their output below. If you have Docker installed, you can probably reproduce it
>> > locally.
>> >
>> > === TEST SCRIPT BEGIN ===
>> > #!/bin/bash
>> > export ARCH=x86_64
>> > make docker-image-fedora V=1 NETWORK=1
>> > time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
>> > === TEST SCRIPT END ===
>> >
>> >  CC      stubs/vm-stop.o
>> >  CC      ui/input-keymap.o
>> >  CC      qemu-keymap.o
>> > /tmp/qemu-test/src/util/vhost-user-server.c:142:30: error: use of undeclared identifier 'VHOST_MEMORY_MAX_NREGIONS'
>> >                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
>> >                             ^
>> >
>> > The full log is available at
>> > http://patchew.org/logs/20200614183907.514282-1-coiby.xu@gmail.com/testing.asan/?type=message.
>>
>> I couldn't re-produce this error locally for both docker-test-quick@centos7
>> and this docker test. And I can't see any reason for this error to occur since
>> VHOST_MEMORY_MAX_NREGIONS is defined in contrib/libvhost-user/libvhost-user.h
>> which has been included by util/vhost-user-server.h.
>
>Please see the recent change in commit
>b650d5f4b1cd3f9f8c4fdb319838c5c1e0695e41 ("Lift max ram slots limit in
>libvhost-user").
>
>The error can be solved by replacing VHOST_MEMORY_MAX_NREGIONS with
>VHOST_MEMORY_BASELINE_NREGIONS in util/vhost-user-server.c.

Thank you for the clarification! I did run "git pull" when checking this error.
It seems there is a delay when pulling updates from git://git.qemu.org/qemu.git.


--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server
  2020-06-18 15:17   ` Stefan Hajnoczi
@ 2020-06-24  4:35     ` Coiby Xu
  2020-06-24 10:49       ` Stefan Hajnoczi
  0 siblings, 1 reply; 51+ messages in thread
From: Coiby Xu @ 2020-06-24  4:35 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kwolf, Laurent Vivier, Thomas Huth, qemu-devel, bharatlkmlkvm,
	Paolo Bonzini

On Thu, Jun 18, 2020 at 04:17:51PM +0100, Stefan Hajnoczi wrote:
>On Mon, Jun 15, 2020 at 02:39:07AM +0800, Coiby Xu wrote:
>> This test case has the same tests as tests/virtio-blk-test.c except for
>> tests have block_resize. Since vhost-user server can only server one
>> client one time, two instances of qemu-storage-daemon are launched
>> for the hotplug test.
>>
>> In order to not block scripts/tap-driver.pl, vhost-user-blk-server will
>> send "quit" command to qemu-storage-daemon's QMP monitor. So a function
>> is added to libqtest.c to establish socket connection with socket
>> server.
>>
>> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
>> ---
>>  tests/Makefile.include              |   3 +-
>>  tests/qtest/Makefile.include        |   2 +
>>  tests/qtest/libqos/vhost-user-blk.c | 130 +++++
>>  tests/qtest/libqos/vhost-user-blk.h |  48 ++
>>  tests/qtest/libqtest.c              |  35 +-
>>  tests/qtest/libqtest.h              |  17 +
>>  tests/qtest/vhost-user-blk-test.c   | 739 ++++++++++++++++++++++++++++
>>  7 files changed, 971 insertions(+), 3 deletions(-)
>>  create mode 100644 tests/qtest/libqos/vhost-user-blk.c
>>  create mode 100644 tests/qtest/libqos/vhost-user-blk.h
>>  create mode 100644 tests/qtest/vhost-user-blk-test.c
>
>This test case fails for me:
>
>qemu-system-x86_64: Failed to read from slave.
>qemu-system-x86_64: Failed to set msg fds.
>qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Success (0)
>qemu-system-x86_64: Failed to read from slave.
>qemu-system-x86_64: Failed to read from slave.
>qemu-system-x86_64: Failed to read from slave.
>qemu-system-x86_64: Failed to set msg fds.
>qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Success (0)
>qemu-system-x86_64: Failed to read msg header. Read -1 instead of 12. Original request 11.
>qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Input/output error (5)
>
>Does "make -j4 check" pass for you?

Actually it's a success since it won't fail CI. The reason for the
occurrence of these dubious messages is after finishing the tests,
vhost-user-blk-server is stopped before qemu-system-x86_64 is destroyed.
I'll see if I can find a way to kill qemu-system-x86_64 first.

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-19 12:07 ` Stefan Hajnoczi
@ 2020-06-24  4:48   ` Coiby Xu
  2020-06-25 12:46   ` Coiby Xu
  1 sibling, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-06-24  4:48 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kwolf, bharatlkmlkvm, qemu-devel

On Fri, Jun 19, 2020 at 01:07:46PM +0100, Stefan Hajnoczi wrote:
>On Mon, Jun 15, 2020 at 02:39:02AM +0800, Coiby Xu wrote:
>> v9
>>  - move logical block size check function to a utility function
>>  - fix issues regarding license, coding style, memory deallocation, etc.
>
>I have replied with patches that you can consider squashing into your
>series. I was testing this patch series and decided it was easier to
>send code than to go back and write review comments since I was already
>on a git branch.
>
>My patches can be combined into your original patches using "git rebase
>-i" and the "fixup" or "squash" directive.
>
>Please add my Signed-off-by: line to affected patches when squashing
>patches so that the git log records that I have confirmed that I have
>permission to contribute this code.

I was thinking on how to incorporate your work while reading the emails.
You just provide the instruction! Thank you!


--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server
  2020-06-24  4:35     ` Coiby Xu
@ 2020-06-24 10:49       ` Stefan Hajnoczi
  0 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-24 10:49 UTC (permalink / raw)
  To: Coiby Xu
  Cc: kwolf, Laurent Vivier, Thomas Huth, qemu-devel, bharatlkmlkvm,
	Paolo Bonzini

[-- Attachment #1: Type: text/plain, Size: 2634 bytes --]

On Wed, Jun 24, 2020 at 12:35:10PM +0800, Coiby Xu wrote:
> On Thu, Jun 18, 2020 at 04:17:51PM +0100, Stefan Hajnoczi wrote:
> > On Mon, Jun 15, 2020 at 02:39:07AM +0800, Coiby Xu wrote:
> > > This test case has the same tests as tests/virtio-blk-test.c except for
> > > tests have block_resize. Since vhost-user server can only server one
> > > client one time, two instances of qemu-storage-daemon are launched
> > > for the hotplug test.
> > > 
> > > In order to not block scripts/tap-driver.pl, vhost-user-blk-server will
> > > send "quit" command to qemu-storage-daemon's QMP monitor. So a function
> > > is added to libqtest.c to establish socket connection with socket
> > > server.
> > > 
> > > Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
> > > ---
> > >  tests/Makefile.include              |   3 +-
> > >  tests/qtest/Makefile.include        |   2 +
> > >  tests/qtest/libqos/vhost-user-blk.c | 130 +++++
> > >  tests/qtest/libqos/vhost-user-blk.h |  48 ++
> > >  tests/qtest/libqtest.c              |  35 +-
> > >  tests/qtest/libqtest.h              |  17 +
> > >  tests/qtest/vhost-user-blk-test.c   | 739 ++++++++++++++++++++++++++++
> > >  7 files changed, 971 insertions(+), 3 deletions(-)
> > >  create mode 100644 tests/qtest/libqos/vhost-user-blk.c
> > >  create mode 100644 tests/qtest/libqos/vhost-user-blk.h
> > >  create mode 100644 tests/qtest/vhost-user-blk-test.c
> > 
> > This test case fails for me:
> > 
> > qemu-system-x86_64: Failed to read from slave.
> > qemu-system-x86_64: Failed to set msg fds.
> > qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Success (0)
> > qemu-system-x86_64: Failed to read from slave.
> > qemu-system-x86_64: Failed to read from slave.
> > qemu-system-x86_64: Failed to read from slave.
> > qemu-system-x86_64: Failed to set msg fds.
> > qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Success (0)
> > qemu-system-x86_64: Failed to read msg header. Read -1 instead of 12. Original request 11.
> > qemu-system-x86_64: vhost VQ 0 ring restore failed: -1: Input/output error (5)
> > 
> > Does "make -j4 check" pass for you?
> 
> Actually it's a success since it won't fail CI. The reason for the
> occurrence of these dubious messages is after finishing the tests,
> vhost-user-blk-server is stopped before qemu-system-x86_64 is destroyed.
> I'll see if I can find a way to kill qemu-system-x86_64 first.

Maybe I didn't even notice whether it was passing or failing and just
got scared by these messages! :)

Thanks for explaining. It would be good to terminate cleanly to avoid
confusing users.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 1/5] Allow vu_message_read to be replaced
  2020-06-24  3:36     ` Coiby Xu
@ 2020-06-24 12:24       ` Kevin Wolf
  0 siblings, 0 replies; 51+ messages in thread
From: Kevin Wolf @ 2020-06-24 12:24 UTC (permalink / raw)
  To: Coiby Xu; +Cc: bharatlkmlkvm, qemu-devel, stefanha, Dr. David Alan Gilbert

Am 24.06.2020 um 05:36 hat Coiby Xu geschrieben:
> On Thu, Jun 18, 2020 at 12:43:47PM +0200, Kevin Wolf wrote:
> > Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
> > > Allow vu_message_read to be replaced by one which will make use of the
> > > QIOChannel functions. Thus reading vhost-user message won't stall the
> > > guest.
> > > 
> > > Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
> > 
> > _vu_queue_notify() still has a direct call of vu_message_read() instead
> > of using the pointer. Is this intentional?
> 
> This is a mistake. Thank you for reminding me!
> 
> > Renaming the function would make sure that such semantic merge conflicts
> > don't stay unnoticed.
> 
> Thank you for this tip! Do you suggest renaming the function only for
> the purpose of testing or should I adopt a name when submitting the
> patch? For the latter case, I will change it to vu_message_read_default
> then.

I think I would rename it permanently and keep the new name for the
actual submission. The reason is that if someone else is working on a
series adding new references, the compiler would catch it there, too.

vu_message_read_default() sounds good to me.

> > > @@ -1704,6 +1702,7 @@ vu_deinit(VuDev *dev)
> > >          }
> > > 
> > >          if (vq->kick_fd != -1) {
> > > +            dev->remove_watch(dev, vq->kick_fd);
> > >              close(vq->kick_fd);
> > >              vq->kick_fd = -1;
> > >          }
> > 
> > This hunk looks unrelated.
> 
> In v4, I made the comment to explain why it's needed. But libvhost-user
> is supposed to be independent from QEMU, so Stefan suggested to remove it,

Yes, I saw the reason why you need it in later patches.

If you can remove it completely, that is even better, but otherwise I
would make the addition only later (either in the patch that actually
needs it or in a new separate patch) because it's not really part of
"allowing vu_message_read to be replaced", as the commit message says.

Kevin



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server
  2020-06-14 18:39 ` [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server Coiby Xu
  2020-06-18 15:17   ` Stefan Hajnoczi
@ 2020-06-24 15:14   ` Thomas Huth
  2020-08-17  8:16     ` Coiby Xu
  1 sibling, 1 reply; 51+ messages in thread
From: Thomas Huth @ 2020-06-24 15:14 UTC (permalink / raw)
  To: Coiby Xu, qemu-devel
  Cc: kwolf, bharatlkmlkvm, Laurent Vivier, stefanha, Paolo Bonzini

On 14/06/2020 20.39, Coiby Xu wrote:
> This test case has the same tests as tests/virtio-blk-test.c except for
> tests have block_resize. Since vhost-user server can only server one
> client one time, two instances of qemu-storage-daemon are launched
> for the hotplug test.
> 
> In order to not block scripts/tap-driver.pl, vhost-user-blk-server will
> send "quit" command to qemu-storage-daemon's QMP monitor. So a function
> is added to libqtest.c to establish socket connection with socket
> server.
> 
> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
> ---
[...]
> diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
> index 49075b55a1..02cc09f893 100644
> --- a/tests/qtest/libqtest.c
> +++ b/tests/qtest/libqtest.c
> @@ -52,8 +52,7 @@ typedef struct QTestClientTransportOps {
>       QTestRecvFn     recv_line; /* for receiving qtest command responses */
>   } QTestTransportOps;
>   
> -struct QTestState
> -{
> +struct QTestState {
>       int fd;
>       int qmp_fd;
>       pid_t qemu_pid;  /* our child QEMU process */
> @@ -608,6 +607,38 @@ QDict *qtest_qmp_receive(QTestState *s)
>       return qmp_fd_receive(s->qmp_fd);
>   }
>   
> +QTestState *qtest_create_state_with_qmp_fd(int fd)
> +{
> +    QTestState *qmp_test_state = g_new0(QTestState, 1);
> +    qmp_test_state->qmp_fd = fd;
> +    return qmp_test_state;
> +}
> +
> +int qtest_socket_client(char *server_socket_path)
> +{
> +    struct sockaddr_un serv_addr;
> +    int sock;
> +    int ret;
> +    int retries = 0;
> +    sock = socket(PF_UNIX, SOCK_STREAM, 0);
> +    g_assert_cmpint(sock, !=, -1);
> +    serv_addr.sun_family = AF_UNIX;
> +    snprintf(serv_addr.sun_path, sizeof(serv_addr.sun_path), "%s",
> +             server_socket_path);
> +
> +    do {

Why not simply:

  for (retries = 0; retries < 3; retries++)

?

> +        ret = connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
> +        if (ret == 0) {
> +            break;
> +        }
> +        retries += 1;
> +        g_usleep(G_USEC_PER_SEC);
> +    } while (retries < 3);
> +
> +    g_assert_cmpint(ret, ==, 0);
> +    return sock;
> +}
[...]
> diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> new file mode 100644
> index 0000000000..56e3d8f338
> --- /dev/null
> +++ b/tests/qtest/vhost-user-blk-test.c
> @@ -0,0 +1,739 @@
> +/*
> + * QTest testcase for VirtIO Block Device
> + *
> + * Copyright (c) 2014 SUSE LINUX Products GmbH
> + * Copyright (c) 2014 Marc Marí
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "libqtest-single.h"
> +#include "qemu/bswap.h"
> +#include "qemu/module.h"
> +#include "standard-headers/linux/virtio_blk.h"
> +#include "standard-headers/linux/virtio_pci.h"
> +#include "libqos/qgraph.h"
> +#include "libqos/vhost-user-blk.h"
> +#include "libqos/libqos-pc.h"
> +
> +/* TODO actually test the results and get rid of this */
> +#define qmp_discard_response(...) qobject_unref(qmp(__VA_ARGS__))

Please no more qmp_discard_response() in new code!

> +#define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
> +#define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
> +#define PCI_SLOT_HP             0x06
> +
> +typedef struct QVirtioBlkReq {
> +    uint32_t type;
> +    uint32_t ioprio;
> +    uint64_t sector;
> +    char *data;
> +    uint8_t status;
> +} QVirtioBlkReq;
> +
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +static const bool host_is_big_endian = true;
> +#else
> +static const bool host_is_big_endian; /* false */
> +#endif
> +
> +static inline void virtio_blk_fix_request(QVirtioDevice *d, QVirtioBlkReq *req)
> +{
> +    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
> +        req->type = bswap32(req->type);
> +        req->ioprio = bswap32(req->ioprio);
> +        req->sector = bswap64(req->sector);
> +    }
> +}
> +
> +

One empty line should be enough ;-)

> +static inline void virtio_blk_fix_dwz_hdr(QVirtioDevice *d,
> +    struct virtio_blk_discard_write_zeroes *dwz_hdr)
> +{
> +    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
> +        dwz_hdr->sector = bswap64(dwz_hdr->sector);
> +        dwz_hdr->num_sectors = bswap32(dwz_hdr->num_sectors);
> +        dwz_hdr->flags = bswap32(dwz_hdr->flags);
> +    }
> +}
> +
> +static uint64_t virtio_blk_request(QGuestAllocator *alloc, QVirtioDevice *d,
> +                                   QVirtioBlkReq *req, uint64_t data_size)
> +{
> +    uint64_t addr;
> +    uint8_t status = 0xFF;
> +
> +    switch (req->type) {
> +    case VIRTIO_BLK_T_IN:
> +    case VIRTIO_BLK_T_OUT:
> +        g_assert_cmpuint(data_size % 512, ==, 0);
> +        break;
> +    case VIRTIO_BLK_T_DISCARD:
> +    case VIRTIO_BLK_T_WRITE_ZEROES:
> +        g_assert_cmpuint(data_size %
> +                         sizeof(struct virtio_blk_discard_write_zeroes), ==, 0);
> +        break;
> +    default:
> +        g_assert_cmpuint(data_size, ==, 0);
> +    }
> +
> +    addr = guest_alloc(alloc, sizeof(*req) + data_size);
> +
> +    virtio_blk_fix_request(d, req);
> +
> +    memwrite(addr, req, 16);
> +    memwrite(addr + 16, req->data, data_size);
> +    memwrite(addr + 16 + data_size, &status, sizeof(status));
> +
> +    return addr;
> +}
> +
> +/* Returns the request virtqueue so the caller can perform further tests */
> +static QVirtQueue *test_basic(QVirtioDevice *dev, QGuestAllocator *alloc)
> +{
> +    QVirtioBlkReq req;
> +    uint64_t req_addr;
> +    uint64_t capacity;
> +    uint64_t features;
> +    uint32_t free_head;
> +    uint8_t status;
> +    char *data;
> +    QTestState *qts = global_qtest;
> +    QVirtQueue *vq;
> +
> +    features = qvirtio_get_features(dev);
> +    features = features & ~(QVIRTIO_F_BAD_FEATURE |
> +                    (1u << VIRTIO_RING_F_INDIRECT_DESC) |
> +                    (1u << VIRTIO_RING_F_EVENT_IDX) |
> +                    (1u << VIRTIO_BLK_F_SCSI));
> +    qvirtio_set_features(dev, features);
> +
> +    capacity = qvirtio_config_readq(dev, 0);
> +    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
> +
> +    vq = qvirtqueue_setup(dev, alloc, 0);
> +
> +    qvirtio_set_driver_ok(dev);
> +
> +    /* Write and read with 3 descriptor layout */
> +    /* Write request */
> +    req.type = VIRTIO_BLK_T_OUT;
> +    req.ioprio = 1;
> +    req.sector = 0;
> +    req.data = g_malloc0(512);
> +    strcpy(req.data, "TEST");
> +
> +    req_addr = virtio_blk_request(alloc, dev, &req, 512);
> +
> +    g_free(req.data);
> +
> +    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
> +    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
> +    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
> +
> +    qvirtqueue_kick(qts, dev, vq, free_head);
> +
> +    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
> +                           QVIRTIO_BLK_TIMEOUT_US);
> +    status = readb(req_addr + 528);
> +    g_assert_cmpint(status, ==, 0);
> +
> +    guest_free(alloc, req_addr);
> +
> +    /* Read request */
> +    req.type = VIRTIO_BLK_T_IN;
> +    req.ioprio = 1;
> +    req.sector = 0;
> +    req.data = g_malloc0(512);
> +
> +    req_addr = virtio_blk_request(alloc, dev, &req, 512);
> +
> +    g_free(req.data);
> +
> +    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
> +    qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
> +    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
> +
> +    qvirtqueue_kick(qts, dev, vq, free_head);
> +
> +    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
> +                           QVIRTIO_BLK_TIMEOUT_US);
> +    status = readb(req_addr + 528);
> +    g_assert_cmpint(status, ==, 0);
> +
> +    data = g_malloc0(512);
> +    memread(req_addr + 16, data, 512);

Since you have a "qts" variable here anyway, could you please use 
qtest_memread(qts, ...) here instead? (also in the other spots where you 
use memread and memwrite if possible) ... in case we ever have to 
introduce multiple test states later, we then don't have to rewrite the 
code anymore.
(generally, it's nice to avoid libqtest-single.h nowadays and only use 
libqtest.h if possible)

[...]
> +static void indirect(void *obj, void *u_data, QGuestAllocator *t_alloc)
> +{
> +    QVirtQueue *vq;
> +    QVhostUserBlk *blk_if = obj;
> +    QVirtioDevice *dev = blk_if->vdev;
> +    QVirtioBlkReq req;
> +    QVRingIndirectDesc *indirect;
> +    uint64_t req_addr;
> +    uint64_t capacity;
> +    uint64_t features;
> +    uint32_t free_head;
> +    uint8_t status;
> +    char *data;
> +    QTestState *qts = global_qtest;
> +
> +    features = qvirtio_get_features(dev);
> +    g_assert_cmphex(features & (1u << VIRTIO_RING_F_INDIRECT_DESC), !=, 0);
> +    features = features & ~(QVIRTIO_F_BAD_FEATURE |
> +                            (1u << VIRTIO_RING_F_EVENT_IDX) |
> +                            (1u << VIRTIO_BLK_F_SCSI));
> +    qvirtio_set_features(dev, features);
> +
> +    capacity = qvirtio_config_readq(dev, 0);
> +    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
> +
> +    vq = qvirtqueue_setup(dev, t_alloc, 0);
> +    qvirtio_set_driver_ok(dev);
> +
> +    /* Write request */
> +    req.type = VIRTIO_BLK_T_OUT;
> +    req.ioprio = 1;
> +    req.sector = 0;
> +    req.data = g_malloc0(512);
> +    strcpy(req.data, "TEST");
> +
> +    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
> +
> +    g_free(req.data);
> +
> +    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
> +    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 528, false);
> +    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 528, 1, true);
> +    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
> +    qvirtqueue_kick(qts, dev, vq, free_head);
> +
> +    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
> +                           QVIRTIO_BLK_TIMEOUT_US);
> +    status = readb(req_addr + 528);
> +    g_assert_cmpint(status, ==, 0);
> +
> +    g_free(indirect);
> +    guest_free(t_alloc, req_addr);
> +
> +    /* Read request */
> +    req.type = VIRTIO_BLK_T_IN;
> +    req.ioprio = 1;
> +    req.sector = 0;
> +    req.data = g_malloc0(512);
> +    strcpy(req.data, "TEST");
> +
> +    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
> +
> +    g_free(req.data);
> +
> +    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
> +    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 16, false);
> +    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 16, 513, true);
> +    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
> +    qvirtqueue_kick(qts, dev, vq, free_head);
> +
> +    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
> +                           QVIRTIO_BLK_TIMEOUT_US);
> +    status = readb(req_addr + 528);
> +    g_assert_cmpint(status, ==, 0);
> +
> +    data = g_malloc0(512);
> +    memread(req_addr + 16, data, 512);
> +    g_assert_cmpstr(data, ==, "TEST");
> +    g_free(data);
> +
> +    g_free(indirect);
> +    guest_free(t_alloc, req_addr);
> +    qvirtqueue_cleanup(dev->bus, vq, t_alloc);
> +}
> +
> +

One empty line is enough.

[...]
> +static void drive_destroy(void *path)
> +{
> +    unlink(path);
> +    g_free(path);
> +    qos_invalidate_command_line();
> +}
> +
> +

dito.

[...]
> +static char *drive_create(void)
> +{
> +    int fd, ret;
> +    /** vhost-user-blk won't recognize drive located in /tmp */
> +    char *t_path = g_strdup("qtest.XXXXXX");
> +
> +    /** Create a temporary raw image */
> +    fd = mkstemp(t_path);
> +    g_assert_cmpint(fd, >=, 0);
> +    ret = ftruncate(fd, TEST_IMAGE_SIZE);
> +    g_assert_cmpint(ret, ==, 0);
> +    close(fd);
> +
> +    g_test_queue_destroy(drive_destroy, t_path);
> +    return t_path;
> +}
> +
> +static char sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.XXXXXX";
> +static char qmp_sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.qmp.XXXXXX";
> +
> +

dito.

> +static void quit_storage_daemon(void *qmp_test_state)
> +{
> +    qobject_unref(qtest_qmp((QTestState *)qmp_test_state, "{ 'execute': 'quit' }"));
> +    g_free(qmp_test_state);
> +}
> +
> +static char *start_vhost_user_blk(void)
> +{
> +    int fd, qmp_fd;
> +    char *sock_path = g_strdup(sock_path_tempate);
> +    char *qmp_sock_path = g_strdup(qmp_sock_path_tempate);
> +    QTestState *qmp_test_state;
> +    fd = mkstemp(sock_path);
> +    g_assert_cmpint(fd, >=, 0);
> +    g_test_queue_destroy(drive_destroy, sock_path);
> +
> +

dito.

> +    qmp_fd = mkstemp(qmp_sock_path);
> +    g_assert_cmpint(qmp_fd, >=, 0);
> +    g_test_queue_destroy(drive_destroy, qmp_sock_path);
> +
> +    /* create image file */
> +    const char *img_path = drive_create();
> +
> +    const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> +    gchar *command = g_strdup_printf(
> +            "exec %s "
> +            "--blockdev driver=file,node-name=disk,filename=%s "
> +            "--object vhost-user-blk-server,id=disk,unix-socket=%s,"
> +            "node-name=disk,writable=on "
> +            "--chardev socket,id=qmp,path=%s,server,nowait --monitor chardev=qmp",
> +            vhost_user_blk_bin, img_path, sock_path, qmp_sock_path);
> +
> +

dito.

> +    g_test_message("starting vhost-user backend: %s", command);
> +    pid_t pid = fork();
> +    if (pid == 0) {
> +        execlp("/bin/sh", "sh", "-c", command, NULL);
> +        exit(1);
> +    }
> +    g_free(command);
> +
> +    qmp_test_state = qtest_create_state_with_qmp_fd(
> +                             qtest_socket_client(qmp_sock_path));
> +    /*
> +     * Ask qemu-storage-daemon to quit so it
> +     * will not block scripts/tap-driver.pl.
> +     */
> +    g_test_queue_destroy(quit_storage_daemon, qmp_test_state);
> +
> +    qobject_unref(qtest_qmp(qmp_test_state,
> +                  "{ 'execute': 'qmp_capabilities' }"));
> +    return sock_path;
> +}
> +
> +

dito

> +static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
> +{
> +    char *sock_path1 = start_vhost_user_blk();
> +    g_string_append_printf(cmd_line,
> +                           " -object memory-backend-memfd,id=mem,size=128M,share=on -numa node,memdev=mem "
> +                           "-chardev socket,id=char1,path=%s ", sock_path1);
> +    return arg;
> +}
> +
> +

dito

> +/*
> + * Setup for hotplug.
> + *
> + * Since vhost-user server only serves one vhost-user client one time,
> + * another exprot
> + *
> + */
> +static void *vhost_user_blk_hotplug_test_setup(GString *cmd_line, void *arg)
> +{
> +    vhost_user_blk_test_setup(cmd_line, arg);
> +    char *sock_path2 = start_vhost_user_blk();
> +    /* "-chardev socket,id=char2" is used for pci_hotplug*/
> +    g_string_append_printf(cmd_line, "-chardev socket,id=char2,path=%s",
> +                           sock_path2);
> +    return arg;
> +}
> +
> +static void register_vhost_user_blk_test(void)
> +{
> +    QOSGraphTestOptions opts = {
> +        .before = vhost_user_blk_test_setup,
> +    };
> +
> +    /*
> +     * tests for vhost-user-blk and vhost-user-blk-pci
> +     * The tests are borrowed from tests/virtio-blk-test.c. But some tests
> +     * regarding block_resize don't work for vhost-user-blk.
> +     * vhost-user-blk device doesn't have -drive, so tests containing
> +     * block_resize are also abandoned,
> +     *  - config
> +     *  - resize
> +     */
> +    qos_add_test("basic", "vhost-user-blk", basic, &opts);
> +    qos_add_test("indirect", "vhost-user-blk", indirect, &opts);
> +    qos_add_test("idx", "vhost-user-blk-pci", idx, &opts);
> +    qos_add_test("nxvirtq", "vhost-user-blk-pci",
> +                 test_nonexistent_virtqueue, &opts);
> +
> +    opts.before = vhost_user_blk_hotplug_test_setup;
> +    qos_add_test("hotplug", "vhost-user-blk-pci", pci_hotplug, &opts);
> +}
> +
> +libqos_init(register_vhost_user_blk_test);
> 

  Thomas



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-19 12:07 ` Stefan Hajnoczi
  2020-06-24  4:48   ` Coiby Xu
@ 2020-06-25 12:46   ` Coiby Xu
  2020-06-26 15:46     ` Stefan Hajnoczi
  1 sibling, 1 reply; 51+ messages in thread
From: Coiby Xu @ 2020-06-25 12:46 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kwolf, bharatlkmlkvm, qemu-devel

On Fri, Jun 19, 2020 at 01:07:46PM +0100, Stefan Hajnoczi wrote:
>On Mon, Jun 15, 2020 at 02:39:02AM +0800, Coiby Xu wrote:
>> v9
>>  - move logical block size check function to a utility function
>>  - fix issues regarding license, coding style, memory deallocation, etc.
>
>I have replied with patches that you can consider squashing into your
>series. I was testing this patch series and decided it was easier to
>send code than to go back and write review comments since I was already
>on a git branch.
>
>My patches can be combined into your original patches using "git rebase
>-i" and the "fixup" or "squash" directive.
>
>Please add my Signed-off-by: line to affected patches when squashing
>patches so that the git log records that I have confirmed that I have
>permission to contribute this code.
>
>If you have questions about any of the patches, please let me know.

Besides your Signed-off-by: line, shouldn't I also add copyright info to
the affected files as follows?

  * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
  *
  * Copyright (c) 2020 Red Hat, Inc., Stefan Hajnoczi <stefanha@redhat.com>
  *


--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-25 12:46   ` Coiby Xu
@ 2020-06-26 15:46     ` Stefan Hajnoczi
  0 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-06-26 15:46 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1358 bytes --]

On Thu, Jun 25, 2020 at 08:46:56PM +0800, Coiby Xu wrote:
> On Fri, Jun 19, 2020 at 01:07:46PM +0100, Stefan Hajnoczi wrote:
> > On Mon, Jun 15, 2020 at 02:39:02AM +0800, Coiby Xu wrote:
> > > v9
> > >  - move logical block size check function to a utility function
> > >  - fix issues regarding license, coding style, memory deallocation, etc.
> > 
> > I have replied with patches that you can consider squashing into your
> > series. I was testing this patch series and decided it was easier to
> > send code than to go back and write review comments since I was already
> > on a git branch.
> > 
> > My patches can be combined into your original patches using "git rebase
> > -i" and the "fixup" or "squash" directive.
> > 
> > Please add my Signed-off-by: line to affected patches when squashing
> > patches so that the git log records that I have confirmed that I have
> > permission to contribute this code.
> > 
> > If you have questions about any of the patches, please let me know.
> 
> Besides your Signed-off-by: line, shouldn't I also add copyright info to
> the affected files as follows?
> 
>  * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
>  *
>  * Copyright (c) 2020 Red Hat, Inc., Stefan Hajnoczi <stefanha@redhat.com>

The following would be good:

 * Copyright (c) 2020 Red Hat, Inc.

Thanks,
Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server
  2020-06-24 15:14   ` Thomas Huth
@ 2020-08-17  8:16     ` Coiby Xu
  0 siblings, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-08-17  8:16 UTC (permalink / raw)
  To: Thomas Huth
  Cc: kwolf, Laurent Vivier, qemu-devel, bharatlkmlkvm, stefanha,
	Paolo Bonzini

On Wed, Jun 24, 2020 at 05:14:22PM +0200, Thomas Huth wrote:
>On 14/06/2020 20.39, Coiby Xu wrote:
>>This test case has the same tests as tests/virtio-blk-test.c except for
>>tests have block_resize. Since vhost-user server can only server one
>>client one time, two instances of qemu-storage-daemon are launched
>>for the hotplug test.
>>
>>In order to not block scripts/tap-driver.pl, vhost-user-blk-server will
>>send "quit" command to qemu-storage-daemon's QMP monitor. So a function
>>is added to libqtest.c to establish socket connection with socket
>>server.
>>
>>Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
>>---
>[...]
>>diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
>>index 49075b55a1..02cc09f893 100644
>>--- a/tests/qtest/libqtest.c
>>+++ b/tests/qtest/libqtest.c
>>@@ -52,8 +52,7 @@ typedef struct QTestClientTransportOps {
>>      QTestRecvFn     recv_line; /* for receiving qtest command responses */
>>  } QTestTransportOps;
>>-struct QTestState
>>-{
>>+struct QTestState {
>>      int fd;
>>      int qmp_fd;
>>      pid_t qemu_pid;  /* our child QEMU process */
>>@@ -608,6 +607,38 @@ QDict *qtest_qmp_receive(QTestState *s)
>>      return qmp_fd_receive(s->qmp_fd);
>>  }
>>+QTestState *qtest_create_state_with_qmp_fd(int fd)
>>+{
>>+    QTestState *qmp_test_state = g_new0(QTestState, 1);
>>+    qmp_test_state->qmp_fd = fd;
>>+    return qmp_test_state;
>>+}
>>+
>>+int qtest_socket_client(char *server_socket_path)
>>+{
>>+    struct sockaddr_un serv_addr;
>>+    int sock;
>>+    int ret;
>>+    int retries = 0;
>>+    sock = socket(PF_UNIX, SOCK_STREAM, 0);
>>+    g_assert_cmpint(sock, !=, -1);
>>+    serv_addr.sun_family = AF_UNIX;
>>+    snprintf(serv_addr.sun_path, sizeof(serv_addr.sun_path), "%s",
>>+             server_socket_path);
>>+
>>+    do {
>
>Why not simply:
>
> for (retries = 0; retries < 3; retries++)
>
>?

Thank you for the advice which has been applied to v10.

>>+        ret = connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
>>+        if (ret == 0) {
>>+            break;
>>+        }
>>+        retries += 1;
>>+        g_usleep(G_USEC_PER_SEC);
>>+    } while (retries < 3);
>>+
>>+    g_assert_cmpint(ret, ==, 0);
>>+    return sock;
>>+}
>[...]
>>diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
>>new file mode 100644
>>index 0000000000..56e3d8f338
>>--- /dev/null
>>+++ b/tests/qtest/vhost-user-blk-test.c
>>@@ -0,0 +1,739 @@
>>+/*
>>+ * QTest testcase for VirtIO Block Device
>>+ *
>>+ * Copyright (c) 2014 SUSE LINUX Products GmbH
>>+ * Copyright (c) 2014 Marc Marí
>>+ *
>>+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
>>+ * See the COPYING file in the top-level directory.
>>+ */
>>+
>>+#include "qemu/osdep.h"
>>+#include "libqtest-single.h"
>>+#include "qemu/bswap.h"
>>+#include "qemu/module.h"
>>+#include "standard-headers/linux/virtio_blk.h"
>>+#include "standard-headers/linux/virtio_pci.h"
>>+#include "libqos/qgraph.h"
>>+#include "libqos/vhost-user-blk.h"
>>+#include "libqos/libqos-pc.h"
>>+
>>+/* TODO actually test the results and get rid of this */
>>+#define qmp_discard_response(...) qobject_unref(qmp(__VA_ARGS__))
>
>Please no more qmp_discard_response() in new code!

I planned to check if the response has the dict key "return" or "QMP".
But when sometimes the SHUTDOWN event is returned which doesn't has
the key "QMP". I'm not sure if QMI will change in the fure. So I use
qobject_unref instead which I don't think will affect the correctness
of the tests.

>>+#define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
>>+#define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
>>+#define PCI_SLOT_HP             0x06
>>+
>>+typedef struct QVirtioBlkReq {
>>+    uint32_t type;
>>+    uint32_t ioprio;
>>+    uint64_t sector;
>>+    char *data;
>>+    uint8_t status;
>>+} QVirtioBlkReq;
>>+
>>+
>>+#ifdef HOST_WORDS_BIGENDIAN
>>+static const bool host_is_big_endian = true;
>>+#else
>>+static const bool host_is_big_endian; /* false */
>>+#endif
>>+
>>+static inline void virtio_blk_fix_request(QVirtioDevice *d, QVirtioBlkReq *req)
>>+{
>>+    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
>>+        req->type = bswap32(req->type);
>>+        req->ioprio = bswap32(req->ioprio);
>>+        req->sector = bswap64(req->sector);
>>+    }
>>+}
>>+
>>+
>
>One empty line should be enough ;-)
>
>>+static inline void virtio_blk_fix_dwz_hdr(QVirtioDevice *d,
>>+    struct virtio_blk_discard_write_zeroes *dwz_hdr)
>>+{
>>+    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
>>+        dwz_hdr->sector = bswap64(dwz_hdr->sector);
>>+        dwz_hdr->num_sectors = bswap32(dwz_hdr->num_sectors);
>>+        dwz_hdr->flags = bswap32(dwz_hdr->flags);
>>+    }
>>+}
>>+
>>+static uint64_t virtio_blk_request(QGuestAllocator *alloc, QVirtioDevice *d,
>>+                                   QVirtioBlkReq *req, uint64_t data_size)
>>+{
>>+    uint64_t addr;
>>+    uint8_t status = 0xFF;
>>+
>>+    switch (req->type) {
>>+    case VIRTIO_BLK_T_IN:
>>+    case VIRTIO_BLK_T_OUT:
>>+        g_assert_cmpuint(data_size % 512, ==, 0);
>>+        break;
>>+    case VIRTIO_BLK_T_DISCARD:
>>+    case VIRTIO_BLK_T_WRITE_ZEROES:
>>+        g_assert_cmpuint(data_size %
>>+                         sizeof(struct virtio_blk_discard_write_zeroes), ==, 0);
>>+        break;
>>+    default:
>>+        g_assert_cmpuint(data_size, ==, 0);
>>+    }
>>+
>>+    addr = guest_alloc(alloc, sizeof(*req) + data_size);
>>+
>>+    virtio_blk_fix_request(d, req);
>>+
>>+    memwrite(addr, req, 16);
>>+    memwrite(addr + 16, req->data, data_size);
>>+    memwrite(addr + 16 + data_size, &status, sizeof(status));
>>+
>>+    return addr;
>>+}
>>+
>>+/* Returns the request virtqueue so the caller can perform further tests */
>>+static QVirtQueue *test_basic(QVirtioDevice *dev, QGuestAllocator *alloc)
>>+{
>>+    QVirtioBlkReq req;
>>+    uint64_t req_addr;
>>+    uint64_t capacity;
>>+    uint64_t features;
>>+    uint32_t free_head;
>>+    uint8_t status;
>>+    char *data;
>>+    QTestState *qts = global_qtest;
>>+    QVirtQueue *vq;
>>+
>>+    features = qvirtio_get_features(dev);
>>+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
>>+                    (1u << VIRTIO_RING_F_INDIRECT_DESC) |
>>+                    (1u << VIRTIO_RING_F_EVENT_IDX) |
>>+                    (1u << VIRTIO_BLK_F_SCSI));
>>+    qvirtio_set_features(dev, features);
>>+
>>+    capacity = qvirtio_config_readq(dev, 0);
>>+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
>>+
>>+    vq = qvirtqueue_setup(dev, alloc, 0);
>>+
>>+    qvirtio_set_driver_ok(dev);
>>+
>>+    /* Write and read with 3 descriptor layout */
>>+    /* Write request */
>>+    req.type = VIRTIO_BLK_T_OUT;
>>+    req.ioprio = 1;
>>+    req.sector = 0;
>>+    req.data = g_malloc0(512);
>>+    strcpy(req.data, "TEST");
>>+
>>+    req_addr = virtio_blk_request(alloc, dev, &req, 512);
>>+
>>+    g_free(req.data);
>>+
>>+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
>>+    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
>>+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
>>+
>>+    qvirtqueue_kick(qts, dev, vq, free_head);
>>+
>>+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
>>+                           QVIRTIO_BLK_TIMEOUT_US);
>>+    status = readb(req_addr + 528);
>>+    g_assert_cmpint(status, ==, 0);
>>+
>>+    guest_free(alloc, req_addr);
>>+
>>+    /* Read request */
>>+    req.type = VIRTIO_BLK_T_IN;
>>+    req.ioprio = 1;
>>+    req.sector = 0;
>>+    req.data = g_malloc0(512);
>>+
>>+    req_addr = virtio_blk_request(alloc, dev, &req, 512);
>>+
>>+    g_free(req.data);
>>+
>>+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
>>+    qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
>>+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
>>+
>>+    qvirtqueue_kick(qts, dev, vq, free_head);
>>+
>>+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
>>+                           QVIRTIO_BLK_TIMEOUT_US);
>>+    status = readb(req_addr + 528);
>>+    g_assert_cmpint(status, ==, 0);
>>+
>>+    data = g_malloc0(512);
>>+    memread(req_addr + 16, data, 512);
>
>Since you have a "qts" variable here anyway, could you please use
>qtest_memread(qts, ...) here instead? (also in the other spots where
>you use memread and memwrite if possible) ... in case we ever have to
>introduce multiple test states later, we then don't have to rewrite
>the code anymore.
>(generally, it's nice to avoid libqtest-single.h nowadays and only use
>libqtest.h if possible)

`qtest_memread(qts, ...)` has been used in v10. But libqtest-single.h
is still needed to access
`QTestState *global_qtest __attribute__((common, weak));`.

>[...]
>>+static void indirect(void *obj, void *u_data, QGuestAllocator *t_alloc)
>>+{
>>+    QVirtQueue *vq;
>>+    QVhostUserBlk *blk_if = obj;
>>+    QVirtioDevice *dev = blk_if->vdev;
>>+    QVirtioBlkReq req;
>>+    QVRingIndirectDesc *indirect;
>>+    uint64_t req_addr;
>>+    uint64_t capacity;
>>+    uint64_t features;
>>+    uint32_t free_head;
>>+    uint8_t status;
>>+    char *data;
>>+    QTestState *qts = global_qtest;
>>+
>>+    features = qvirtio_get_features(dev);
>>+    g_assert_cmphex(features & (1u << VIRTIO_RING_F_INDIRECT_DESC), !=, 0);
>>+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
>>+                            (1u << VIRTIO_RING_F_EVENT_IDX) |
>>+                            (1u << VIRTIO_BLK_F_SCSI));
>>+    qvirtio_set_features(dev, features);
>>+
>>+    capacity = qvirtio_config_readq(dev, 0);
>>+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
>>+
>>+    vq = qvirtqueue_setup(dev, t_alloc, 0);
>>+    qvirtio_set_driver_ok(dev);
>>+
>>+    /* Write request */
>>+    req.type = VIRTIO_BLK_T_OUT;
>>+    req.ioprio = 1;
>>+    req.sector = 0;
>>+    req.data = g_malloc0(512);
>>+    strcpy(req.data, "TEST");
>>+
>>+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
>>+
>>+    g_free(req.data);
>>+
>>+    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
>>+    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 528, false);
>>+    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 528, 1, true);
>>+    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
>>+    qvirtqueue_kick(qts, dev, vq, free_head);
>>+
>>+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
>>+                           QVIRTIO_BLK_TIMEOUT_US);
>>+    status = readb(req_addr + 528);
>>+    g_assert_cmpint(status, ==, 0);
>>+
>>+    g_free(indirect);
>>+    guest_free(t_alloc, req_addr);
>>+
>>+    /* Read request */
>>+    req.type = VIRTIO_BLK_T_IN;
>>+    req.ioprio = 1;
>>+    req.sector = 0;
>>+    req.data = g_malloc0(512);
>>+    strcpy(req.data, "TEST");
>>+
>>+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
>>+
>>+    g_free(req.data);
>>+
>>+    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
>>+    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 16, false);
>>+    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 16, 513, true);
>>+    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
>>+    qvirtqueue_kick(qts, dev, vq, free_head);
>>+
>>+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
>>+                           QVIRTIO_BLK_TIMEOUT_US);
>>+    status = readb(req_addr + 528);
>>+    g_assert_cmpint(status, ==, 0);
>>+
>>+    data = g_malloc0(512);
>>+    memread(req_addr + 16, data, 512);
>>+    g_assert_cmpstr(data, ==, "TEST");
>>+    g_free(data);
>>+
>>+    g_free(indirect);
>>+    guest_free(t_alloc, req_addr);
>>+    qvirtqueue_cleanup(dev->bus, vq, t_alloc);
>>+}
>>+
>>+
>
>One empty line is enough.
>
>[...]
>>+static void drive_destroy(void *path)
>>+{
>>+    unlink(path);
>>+    g_free(path);
>>+    qos_invalidate_command_line();
>>+}
>>+
>>+
>
>dito.
>
>[...]
>>+static char *drive_create(void)
>>+{
>>+    int fd, ret;
>>+    /** vhost-user-blk won't recognize drive located in /tmp */
>>+    char *t_path = g_strdup("qtest.XXXXXX");
>>+
>>+    /** Create a temporary raw image */
>>+    fd = mkstemp(t_path);
>>+    g_assert_cmpint(fd, >=, 0);
>>+    ret = ftruncate(fd, TEST_IMAGE_SIZE);
>>+    g_assert_cmpint(ret, ==, 0);
>>+    close(fd);
>>+
>>+    g_test_queue_destroy(drive_destroy, t_path);
>>+    return t_path;
>>+}
>>+
>>+static char sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.XXXXXX";
>>+static char qmp_sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.qmp.XXXXXX";
>>+
>>+
>
>dito.
>
>>+static void quit_storage_daemon(void *qmp_test_state)
>>+{
>>+    qobject_unref(qtest_qmp((QTestState *)qmp_test_state, "{ 'execute': 'quit' }"));
>>+    g_free(qmp_test_state);
>>+}
>>+
>>+static char *start_vhost_user_blk(void)
>>+{
>>+    int fd, qmp_fd;
>>+    char *sock_path = g_strdup(sock_path_tempate);
>>+    char *qmp_sock_path = g_strdup(qmp_sock_path_tempate);
>>+    QTestState *qmp_test_state;
>>+    fd = mkstemp(sock_path);
>>+    g_assert_cmpint(fd, >=, 0);
>>+    g_test_queue_destroy(drive_destroy, sock_path);
>>+
>>+
>
>dito.
>
>>+    qmp_fd = mkstemp(qmp_sock_path);
>>+    g_assert_cmpint(qmp_fd, >=, 0);
>>+    g_test_queue_destroy(drive_destroy, qmp_sock_path);
>>+
>>+    /* create image file */
>>+    const char *img_path = drive_create();
>>+
>>+    const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
>>+    gchar *command = g_strdup_printf(
>>+            "exec %s "
>>+            "--blockdev driver=file,node-name=disk,filename=%s "
>>+            "--object vhost-user-blk-server,id=disk,unix-socket=%s,"
>>+            "node-name=disk,writable=on "
>>+            "--chardev socket,id=qmp,path=%s,server,nowait --monitor chardev=qmp",
>>+            vhost_user_blk_bin, img_path, sock_path, qmp_sock_path);
>>+
>>+
>
>dito.
>
>>+    g_test_message("starting vhost-user backend: %s", command);
>>+    pid_t pid = fork();
>>+    if (pid == 0) {
>>+        execlp("/bin/sh", "sh", "-c", command, NULL);
>>+        exit(1);
>>+    }
>>+    g_free(command);
>>+
>>+    qmp_test_state = qtest_create_state_with_qmp_fd(
>>+                             qtest_socket_client(qmp_sock_path));
>>+    /*
>>+     * Ask qemu-storage-daemon to quit so it
>>+     * will not block scripts/tap-driver.pl.
>>+     */
>>+    g_test_queue_destroy(quit_storage_daemon, qmp_test_state);
>>+
>>+    qobject_unref(qtest_qmp(qmp_test_state,
>>+                  "{ 'execute': 'qmp_capabilities' }"));
>>+    return sock_path;
>>+}
>>+
>>+
>
>dito
>
>>+static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
>>+{
>>+    char *sock_path1 = start_vhost_user_blk();
>>+    g_string_append_printf(cmd_line,
>>+                           " -object memory-backend-memfd,id=mem,size=128M,share=on -numa node,memdev=mem "
>>+                           "-chardev socket,id=char1,path=%s ", sock_path1);
>>+    return arg;
>>+}
>>+
>>+
>
>dito
>
>>+/*
>>+ * Setup for hotplug.
>>+ *
>>+ * Since vhost-user server only serves one vhost-user client one time,
>>+ * another exprot
>>+ *
>>+ */
>>+static void *vhost_user_blk_hotplug_test_setup(GString *cmd_line, void *arg)
>>+{
>>+    vhost_user_blk_test_setup(cmd_line, arg);
>>+    char *sock_path2 = start_vhost_user_blk();
>>+    /* "-chardev socket,id=char2" is used for pci_hotplug*/
>>+    g_string_append_printf(cmd_line, "-chardev socket,id=char2,path=%s",
>>+                           sock_path2);
>>+    return arg;
>>+}
>>+
>>+static void register_vhost_user_blk_test(void)
>>+{
>>+    QOSGraphTestOptions opts = {
>>+        .before = vhost_user_blk_test_setup,
>>+    };
>>+
>>+    /*
>>+     * tests for vhost-user-blk and vhost-user-blk-pci
>>+     * The tests are borrowed from tests/virtio-blk-test.c. But some tests
>>+     * regarding block_resize don't work for vhost-user-blk.
>>+     * vhost-user-blk device doesn't have -drive, so tests containing
>>+     * block_resize are also abandoned,
>>+     *  - config
>>+     *  - resize
>>+     */
>>+    qos_add_test("basic", "vhost-user-blk", basic, &opts);
>>+    qos_add_test("indirect", "vhost-user-blk", indirect, &opts);
>>+    qos_add_test("idx", "vhost-user-blk-pci", idx, &opts);
>>+    qos_add_test("nxvirtq", "vhost-user-blk-pci",
>>+                 test_nonexistent_virtqueue, &opts);
>>+
>>+    opts.before = vhost_user_blk_hotplug_test_setup;
>>+    qos_add_test("hotplug", "vhost-user-blk-pci", pci_hotplug, &opts);
>>+}
>>+
>>+libqos_init(register_vhost_user_blk_test);
>>
>
> Thomas

All extra empty lines have been removed in v10:)

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-18  8:28     ` Stefan Hajnoczi
@ 2020-08-17  8:23       ` Coiby Xu
  0 siblings, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-08-17  8:23 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kwolf, bharatlkmlkvm, qemu-devel

On Thu, Jun 18, 2020 at 09:28:44AM +0100, Stefan Hajnoczi wrote:
>On Tue, Jun 16, 2020 at 02:52:16PM +0800, Coiby Xu wrote:
>> On Sun, Jun 14, 2020 at 12:16:28PM -0700, no-reply@patchew.org wrote:
>> > Patchew URL: https://patchew.org/QEMU/20200614183907.514282-1-coiby.xu@gmail.com/
>> >
>> >
>> >
>> > Hi,
>> >
>> > This series failed the asan build test. Please find the testing commands and
>> > their output below. If you have Docker installed, you can probably reproduce it
>> > locally.
>> >
>> > === TEST SCRIPT BEGIN ===
>> > #!/bin/bash
>> > export ARCH=x86_64
>> > make docker-image-fedora V=1 NETWORK=1
>> > time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
>> > === TEST SCRIPT END ===
>> >
>> >  CC      stubs/vm-stop.o
>> >  CC      ui/input-keymap.o
>> >  CC      qemu-keymap.o
>> > /tmp/qemu-test/src/util/vhost-user-server.c:142:30: error: use of undeclared identifier 'VHOST_MEMORY_MAX_NREGIONS'
>> >                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
>> >                             ^
>> >
>> > The full log is available at
>> > http://patchew.org/logs/20200614183907.514282-1-coiby.xu@gmail.com/testing.asan/?type=message.
>>
>> I couldn't re-produce this error locally for both docker-test-quick@centos7
>> and this docker test. And I can't see any reason for this error to occur since
>> VHOST_MEMORY_MAX_NREGIONS is defined in contrib/libvhost-user/libvhost-user.h
>> which has been included by util/vhost-user-server.h.
>
>Using G_N_ELEMENTS(vmsg->fds) instead of VHOST_MEMORY_MAX_NREGIONS is an
>even cleaner fix.
>
>Stefan

Thank you for this cleaner fix!

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 2/5] generic vhost user server
  2020-06-19 12:13   ` [PATCH v9 2/5] generic vhost user server Stefan Hajnoczi
@ 2020-08-17  8:24     ` Coiby Xu
  0 siblings, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-08-17  8:24 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kwolf, bharatlkmlkvm, qemu-devel

On Fri, Jun 19, 2020 at 01:13:00PM +0100, Stefan Hajnoczi wrote:
>On Mon, Jun 15, 2020 at 02:39:04AM +0800, Coiby Xu wrote:
>> +/*
>> + * a wrapper for vu_kick_cb
>> + *
>> + * since aio_dispatch can only pass one user data pointer to the
>> + * callback function, pack VuDev and pvt into a struct. Then unpack it
>> + * and pass them to vu_kick_cb
>> + */
>> +static void kick_handler(void *opaque)
>> +{
>> +    KickInfo *kick_info = opaque;
>> +    kick_info->cb(kick_info->vu_dev, 0, (void *) kick_info->index);
>
>Where is kick_info->index assigned? It appears to be NULL in all cases.
>
>> +}
>> +
>> +
>> +static void
>> +set_watch(VuDev *vu_dev, int fd, int vu_evt,
>> +          vu_watch_cb cb, void *pvt)
>> +{
>> +
>> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
>> +    g_assert(vu_dev);
>> +    g_assert(fd >= 0);
>> +    long index = (intptr_t) pvt;
>
>The meaning of the pvt argument is not defined in the library interface.
>set_watch() callbacks shouldn't interpret pvt.
>
>You could modify libvhost-user to explicitly pass the virtqueue index
>(or -1 if the fd is not associated with a virtqueue), but it's nice to
>avoid libvhost-user API changes so that existing libvhost-user
>applications don't require modifications.
>
>What I would do here is to change the ->kick_info[] data struct. How
>about a linked list of VuFdWatch objects? That way the code can handle
>any number of fd watches and doesn't make assumptions about virtqueues.
>set_watch() is a generic fd monitoring interface and doesn't need to be
>tied to virtqueues.

A linked list of VuFdWatch objects has been adopted in v10. Thank you!

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 2/5] generic vhost user server
  2020-06-18 13:29   ` Kevin Wolf
@ 2020-08-17  8:59     ` Coiby Xu
  0 siblings, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-08-17  8:59 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: bharatlkmlkvm, qemu-devel, stefanha

On Thu, Jun 18, 2020 at 03:29:26PM +0200, Kevin Wolf wrote:
>Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
>> Sharing QEMU devices via vhost-user protocol.
>>
>> Only one vhost-user client can connect to the server one time.
>>
>> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
>> ---
>>  util/Makefile.objs       |   1 +
>>  util/vhost-user-server.c | 400 +++++++++++++++++++++++++++++++++++++++
>>  util/vhost-user-server.h |  61 ++++++
>>  3 files changed, 462 insertions(+)
>>  create mode 100644 util/vhost-user-server.c
>>  create mode 100644 util/vhost-user-server.h
>>
>> diff --git a/util/Makefile.objs b/util/Makefile.objs
>> index cc5e37177a..b4d4af06dc 100644
>> --- a/util/Makefile.objs
>> +++ b/util/Makefile.objs
>> @@ -66,6 +66,7 @@ util-obj-y += hbitmap.o
>>  util-obj-y += main-loop.o
>>  util-obj-y += nvdimm-utils.o
>>  util-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
>> +util-obj-$(CONFIG_LINUX) += vhost-user-server.o
>>  util-obj-y += qemu-coroutine-sleep.o
>>  util-obj-y += qemu-co-shared-resource.o
>>  util-obj-y += qemu-sockets.o
>> diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
>> new file mode 100644
>> index 0000000000..393beeb6b9
>> --- /dev/null
>> +++ b/util/vhost-user-server.c
>> @@ -0,0 +1,400 @@
>> +/*
>> + * Sharing QEMU devices via vhost-user protocol
>> + *
>> + * Author: Coiby Xu <coiby.xu@gmail.com>
>> + *
>> + * This work is licensed under the terms of the GNU GPL, version 2 or
>> + * later.  See the COPYING file in the top-level directory.
>> + */
>> +#include "qemu/osdep.h"
>> +#include <sys/eventfd.h>
>> +#include "qemu/main-loop.h"
>> +#include "vhost-user-server.h"
>> +
>> +static void vmsg_close_fds(VhostUserMsg *vmsg)
>> +{
>> +    int i;
>> +    for (i = 0; i < vmsg->fd_num; i++) {
>> +        close(vmsg->fds[i]);
>> +    }
>> +}
>> +
>> +static void vmsg_unblock_fds(VhostUserMsg *vmsg)
>> +{
>> +    int i;
>> +    for (i = 0; i < vmsg->fd_num; i++) {
>> +        qemu_set_nonblock(vmsg->fds[i]);
>> +    }
>> +}
>> +
>> +static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
>> +                      gpointer opaque);
>> +
>> +static void close_client(VuServer *server)
>> +{
>> +    vu_deinit(&server->vu_dev);
>> +    object_unref(OBJECT(server->sioc));
>> +    object_unref(OBJECT(server->ioc));
>> +    server->sioc_slave = NULL;
>
>Where is sioc_slave closed/freed?

Thank you for pointing out my neglect! When working on v10, I realized
communication on the slave channel can't be done easily in a coroutine.
So I simply dropped the support.

>> +    object_unref(OBJECT(server->ioc_slave));
>> +    /*
>> +     * Set the callback function for network listener so another
>> +     * vhost-user client can connect to this server
>> +     */
>> +    qio_net_listener_set_client_func(server->listener,
>> +                                     vu_accept,
>> +                                     server,
>> +                                     NULL);
>
>If connecting another client to the server should work, don't we have to
>set at least server->sioc = NULL so that vu_accept() won't error out?

Previously I set `server->sioc = NULL` in the panic_cb, i.e. only when the
client disconnects, because I thought it's different from the case that the
server is shutdown. But this differentiating is not necessary. In v10, I
has moved `server->sioc = NULL` into `close_client`.

>
>> +}
>> +
>> +static void panic_cb(VuDev *vu_dev, const char *buf)
>> +{
>> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
>> +
>> +    if (buf) {
>> +        error_report("vu_panic: %s", buf);
>> +    }
>> +
>> +    if (server->sioc) {
>> +        close_client(server);
>> +        server->sioc = NULL;
>> +    }
>> +
>> +    if (server->device_panic_notifier) {
>> +        server->device_panic_notifier(server);
>> +    }
>> +}
>> +
>> +static QIOChannel *slave_io_channel(VuServer *server, int fd,
>> +                                    Error **local_err)
>> +{
>> +    if (server->sioc_slave) {
>> +        if (fd == server->sioc_slave->fd) {
>> +            return server->ioc_slave;
>> +        }
>> +    } else {
>> +        server->sioc_slave = qio_channel_socket_new_fd(fd, local_err);
>> +        if (!*local_err) {
>> +            server->ioc_slave = QIO_CHANNEL(server->sioc_slave);
>> +            return server->ioc_slave;
>> +        }
>> +    }
>> +
>> +    return NULL;
>> +}
>> +
>> +static bool coroutine_fn
>> +vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
>> +{
>> +    struct iovec iov = {
>> +        .iov_base = (char *)vmsg,
>> +        .iov_len = VHOST_USER_HDR_SIZE,
>> +    };
>> +    int rc, read_bytes = 0;
>> +    Error *local_err = NULL;
>> +    /*
>> +     * Store fds/nfds returned from qio_channel_readv_full into
>> +     * temporary variables.
>> +     *
>> +     * VhostUserMsg is a packed structure, gcc will complain about passing
>> +     * pointer to a packed structure member if we pass &VhostUserMsg.fd_num
>> +     * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
>> +     * thus two temporary variables nfds and fds are used here.
>> +     */
>> +    size_t nfds = 0, nfds_t = 0;
>> +    int *fds_t = NULL;
>> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
>> +    QIOChannel *ioc = NULL;
>> +
>> +    if (conn_fd == server->sioc->fd) {
>> +        ioc = server->ioc;
>> +    } else {
>> +        /* Slave communication will also use this function to read msg */
>> +        ioc = slave_io_channel(server, conn_fd, &local_err);
>> +    }
>> +
>> +    if (!ioc) {
>> +        error_report_err(local_err);
>> +        goto fail;
>> +    }
>> +
>> +    assert(qemu_in_coroutine());
>> +    do {
>> +        /*
>> +         * qio_channel_readv_full may have short reads, keeping calling it
>> +         * until getting VHOST_USER_HDR_SIZE or 0 bytes in total
>> +         */
>> +        rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
>> +        if (rc < 0) {
>> +            if (rc == QIO_CHANNEL_ERR_BLOCK) {
>> +                qio_channel_yield(ioc, G_IO_IN);
>> +                continue;
>> +            } else {
>> +                error_report_err(local_err);
>> +                return false;
>> +            }
>> +        }
>> +        read_bytes += rc;
>> +        if (nfds_t > 0) {
>> +            if (nfds + nfds_t > G_N_ELEMENTS(vmsg->fds)) {
>> +                error_report("A maximum of %d fds are allowed, "
>> +                             "however got %lu fds now",
>> +                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
>> +                goto fail;
>> +            }
>> +            memcpy(vmsg->fds + nfds, fds_t,
>> +                   nfds_t *sizeof(vmsg->fds[0]));
>> +            nfds += nfds_t;
>> +            g_free(fds_t);
>> +        }
>> +        if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
>> +            break;
>> +        }
>> +        iov.iov_base = (char *)vmsg + read_bytes;
>> +        iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
>> +    } while (true);
>> +
>> +    vmsg->fd_num = nfds;
>> +    /* qio_channel_readv_full will make socket fds blocking, unblock them */
>> +    vmsg_unblock_fds(vmsg);
>> +    if (vmsg->size > sizeof(vmsg->payload)) {
>> +        error_report("Error: too big message request: %d, "
>> +                     "size: vmsg->size: %u, "
>> +                     "while sizeof(vmsg->payload) = %zu",
>> +                     vmsg->request, vmsg->size, sizeof(vmsg->payload));
>> +        goto fail;
>> +    }
>> +
>> +    struct iovec iov_payload = {
>> +        .iov_base = (char *)&vmsg->payload,
>> +        .iov_len = vmsg->size,
>> +    };
>> +    if (vmsg->size) {
>> +        rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
>> +        if (rc == -1) {
>> +            error_report_err(local_err);
>> +            goto fail;
>> +        }
>> +    }
>> +
>> +    return true;
>> +
>> +fail:
>> +    vmsg_close_fds(vmsg);
>> +
>> +    return false;
>> +}
>> +
>> +
>> +static void vu_client_start(VuServer *server);
>> +static coroutine_fn void vu_client_trip(void *opaque)
>> +{
>> +    VuServer *server = opaque;
>> +
>> +    while (!server->aio_context_changed && server->sioc) {
>> +        vu_dispatch(&server->vu_dev);
>> +    }
>> +
>> +    if (server->aio_context_changed && server->sioc) {
>> +        server->aio_context_changed = false;
>> +        vu_client_start(server);
>> +    }
>> +}
>
>This is somewhat convoluted, but ok. As soon as my patch "util/async:
>Add aio_co_reschedule_self()" is merged, we can use it to simplify this
>a bit.

I will simplify this when your patch is merged.

>
>> +static void vu_client_start(VuServer *server)
>> +{
>> +    server->co_trip = qemu_coroutine_create(vu_client_trip, server);
>> +    aio_co_enter(server->ctx, server->co_trip);
>> +}
>> +
>> +/*
>> + * a wrapper for vu_kick_cb
>> + *
>> + * since aio_dispatch can only pass one user data pointer to the
>> + * callback function, pack VuDev and pvt into a struct. Then unpack it
>> + * and pass them to vu_kick_cb
>> + */
>> +static void kick_handler(void *opaque)
>> +{
>> +    KickInfo *kick_info = opaque;
>> +    kick_info->cb(kick_info->vu_dev, 0, (void *) kick_info->index);
>> +}
>> +
>> +
>> +static void
>> +set_watch(VuDev *vu_dev, int fd, int vu_evt,
>> +          vu_watch_cb cb, void *pvt)
>> +{
>> +
>> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
>> +    g_assert(vu_dev);
>> +    g_assert(fd >= 0);
>> +    long index = (intptr_t) pvt;
>> +    g_assert(cb);
>> +    KickInfo *kick_info = &server->kick_info[index];
>> +    if (!kick_info->cb) {
>> +        kick_info->fd = fd;
>> +        kick_info->cb = cb;
>> +        qemu_set_nonblock(fd);
>> +        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
>> +                           NULL, NULL, kick_info);
>> +        kick_info->vu_dev = vu_dev;
>> +    }
>> +}
>> +
>> +
>> +static void remove_watch(VuDev *vu_dev, int fd)
>> +{
>> +    VuServer *server;
>> +    int i;
>> +    int index = -1;
>> +    g_assert(vu_dev);
>> +    g_assert(fd >= 0);
>> +
>> +    server = container_of(vu_dev, VuServer, vu_dev);
>> +    for (i = 0; i < vu_dev->max_queues; i++) {
>> +        if (server->kick_info[i].fd == fd) {
>> +            index = i;
>> +            break;
>> +        }
>> +    }
>> +
>> +    if (index == -1) {
>> +        return;
>> +    }
>> +    server->kick_info[i].cb = NULL;
>> +    aio_set_fd_handler(server->ioc->ctx, fd, false, NULL, NULL, NULL, NULL);
>> +}
>> +
>> +
>> +static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
>> +                      gpointer opaque)
>> +{
>> +    VuServer *server = opaque;
>> +
>> +    if (server->sioc) {
>> +        warn_report("Only one vhost-user client is allowed to "
>> +                    "connect the server one time");
>> +        return;
>> +    }
>> +
>> +    if (!vu_init(&server->vu_dev, server->max_queues, sioc->fd, panic_cb,
>> +                 vu_message_read, set_watch, remove_watch, server->vu_iface)) {
>> +        error_report("Failed to initialized libvhost-user");
>> +        return;
>> +    }
>> +
>> +    /*
>> +     * Unset the callback function for network listener to make another
>> +     * vhost-user client keeping waiting until this client disconnects
>> +     */
>> +    qio_net_listener_set_client_func(server->listener,
>> +                                     NULL,
>> +                                     NULL,
>> +                                     NULL);
>> +    server->sioc = sioc;
>> +    server->kick_info = g_new0(KickInfo, server->max_queues);
>> +    /*
>> +     * Increase the object reference, so sioc will not freed by
>> +     * qio_net_listener_channel_func which will call object_unref(OBJECT(sioc))
>> +     */
>> +    object_ref(OBJECT(server->sioc));
>> +    qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
>> +    server->ioc = QIO_CHANNEL(sioc);
>> +    object_ref(OBJECT(server->ioc));
>> +    qio_channel_attach_aio_context(server->ioc, server->ctx);
>> +    qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
>> +    vu_client_start(server);
>> +}
>> +
>> +
>> +void vhost_user_server_stop(VuServer *server)
>> +{
>> +    if (!server) {
>> +        return;
>> +    }
>
>There is no reason why the caller should even pass NULL.

Removed in v10.

>> +    if (server->sioc) {
>> +        close_client(server);
>> +        object_unref(OBJECT(server->sioc));
>
>close_client() already unrefs it. Do we really hold two references? If
>so, why?
>
>I can see that vu_accept() takes an extra reference, but the comment
>there says this is because QIOChannel takes ownership.

This is my neglect! Thank you!

>> +    }
>> +
>> +    if (server->listener) {
>> +        qio_net_listener_disconnect(server->listener);
>> +        object_unref(OBJECT(server->listener));
>> +    }
>> +
>> +    g_free(server->kick_info);
>
>Don't we need to wait for co_trip to terminate somewhere? Probably
>before freeing any objects because it could still use them.
>
>I assume vhost_user_server_stop() is always called from the main thread
>whereas co_trip runs in the server AioContext, so extra care is
>necessary.
>
>> +}
>> +
>> +static void detach_context(VuServer *server)
>> +{
>> +    int i;
>> +    AioContext *ctx = server->ioc->ctx;
>> +    qio_channel_detach_aio_context(server->ioc);
>> +    for (i = 0; i < server->vu_dev.max_queues; i++) {
>> +        if (server->kick_info[i].cb) {
>> +            aio_set_fd_handler(ctx, server->kick_info[i].fd, false, NULL,
>> +                               NULL, NULL, NULL);
>> +        }
>> +    }
>> +}
>> +
>> +static void attach_context(VuServer *server, AioContext *ctx)
>> +{
>> +    int i;
>> +    qio_channel_attach_aio_context(server->ioc, ctx);
>> +    server->aio_context_changed = true;
>> +    if (server->co_trip) {
>> +        aio_co_schedule(ctx, server->co_trip);
>> +    }
>> +    for (i = 0; i < server->vu_dev.max_queues; i++) {
>> +        if (server->kick_info[i].cb) {
>> +            aio_set_fd_handler(ctx, server->kick_info[i].fd, false,
>> +                               kick_handler, NULL, NULL,
>> +                               &server->kick_info[i]);
>> +        }
>> +    }
>> +}
>
>There is a lot of duplication between detach_context() and
>attach_context(). I think implementing this directly in
>vhost_user_server_set_aio_context() for both cases at once would result
>in simpler code.

Thank you for the advice! In v10, both cases have been dealt with in
vhost_user_server_set_aio_context since in both cases we need to iterate
over the kick handlers.

>
>> +void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server)
>> +{
>> +    server->ctx = ctx ? ctx : qemu_get_aio_context();
>> +    if (!server->sioc) {
>> +        return;
>> +    }
>> +    if (ctx) {
>> +        attach_context(server, ctx);
>> +    } else {
>> +        detach_context(server);
>> +    }
>> +}
>
>What happens if the VuServer is already attached to an AioContext and
>you change it to another AioContext? Shouldn't it be detached from the
>old context and attached to the new one instead of only doing the
>latter?

Based on my understanding, when there's a change of a block drive's AioConext,
it will first call context detachment hook and then call context
attachment hook. So this is not an issue.

>
>> +
>> +bool vhost_user_server_start(VuServer *server,
>> +                             SocketAddress *socket_addr,
>> +                             AioContext *ctx,
>> +                             uint16_t max_queues,
>> +                             DevicePanicNotifierFn *device_panic_notifier,
>> +                             const VuDevIface *vu_iface,
>> +                             Error **errp)
>> +{
>
>I think this is the function that is supposed to initialise the VuServer
>object, so would it be better to first zero it out completely?
>
>Or alternatively assign it completely like this (which automatically
>zeroes any unspecified field):
>
>    *server = (VuServer) {
>        .vu_iface       = vu_iface,
>        .max_queues     = max_queues,
>        ...
>    }

Thank you for the suggestion!

>
>> +    server->listener = qio_net_listener_new();
>> +    if (qio_net_listener_open_sync(server->listener, socket_addr, 1,
>> +                                   errp) < 0) {
>> +        return false;
>> +    }
>> +
>> +    qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
>> +
>> +    server->vu_iface = vu_iface;
>> +    server->max_queues = max_queues;
>> +    server->ctx = ctx;
>> +    server->device_panic_notifier = device_panic_notifier;
>> +    qio_net_listener_set_client_func(server->listener,
>> +                                     vu_accept,
>> +                                     server,
>> +                                     NULL);
>> +
>> +    return true;
>> +}
>> diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
>> new file mode 100644
>> index 0000000000..5baf58f96a
>> --- /dev/null
>> +++ b/util/vhost-user-server.h
>> @@ -0,0 +1,61 @@
>> +/*
>> + * Sharing QEMU devices via vhost-user protocol
>> + *
>> + * Author: Coiby Xu <coiby.xu@gmail.com>
>> + *
>> + * This work is licensed under the terms of the GNU GPL, version 2 or
>> + * later.  See the COPYING file in the top-level directory.
>> + */
>> +
>> +#ifndef VHOST_USER_SERVER_H
>> +#define VHOST_USER_SERVER_H
>> +
>> +#include "contrib/libvhost-user/libvhost-user.h"
>> +#include "io/channel-socket.h"
>> +#include "io/channel-file.h"
>> +#include "io/net-listener.h"
>> +#include "qemu/error-report.h"
>> +#include "qapi/error.h"
>> +#include "standard-headers/linux/virtio_blk.h"
>> +
>> +typedef struct KickInfo {
>> +    VuDev *vu_dev;
>> +    int fd; /*kick fd*/
>> +    long index; /*queue index*/
>> +    vu_watch_cb cb;
>> +} KickInfo;
>> +
>> +typedef struct VuServer {
>> +    QIONetListener *listener;
>> +    AioContext *ctx;
>> +    void (*device_panic_notifier)(struct VuServer *server) ;
>
>Extra space before the semicolon.
>
>> +    int max_queues;
>> +    const VuDevIface *vu_iface;
>> +    VuDev vu_dev;
>> +    QIOChannel *ioc; /* The I/O channel with the client */
>> +    QIOChannelSocket *sioc; /* The underlying data channel with the client */
>> +    /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
>> +    QIOChannel *ioc_slave;
>> +    QIOChannelSocket *sioc_slave;
>> +    Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
>> +    KickInfo *kick_info; /* an array with the length of the queue number */
>
>"an array with @max_queues elements"?

As following Stefan's advice, a linked list is used. So this problem
disappears.

>> +    /* restart coroutine co_trip if AIOContext is changed */
>> +    bool aio_context_changed;
>> +} VuServer;
>> +
>> +
>> +typedef void DevicePanicNotifierFn(struct VuServer *server);
>> +
>> +bool vhost_user_server_start(VuServer *server,
>> +                             SocketAddress *unix_socket,
>> +                             AioContext *ctx,
>> +                             uint16_t max_queues,
>> +                             DevicePanicNotifierFn *device_panic_notifier,
>> +                             const VuDevIface *vu_iface,
>> +                             Error **errp);
>> +
>> +void vhost_user_server_stop(VuServer *server);
>> +
>> +void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server);
>> +
>> +#endif /* VHOST_USER_SERVER_H */
>
>Kevin
>

Thank you for reviewing the code!

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 4/5] vhost-user block device backend server
  2020-06-18 15:57   ` Kevin Wolf
@ 2020-08-17 12:30     ` Coiby Xu
  0 siblings, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-08-17 12:30 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: open list:Block layer core, qemu-devel, Max Reitz, bharatlkmlkvm,
	stefanha, Paolo Bonzini

On Thu, Jun 18, 2020 at 05:57:40PM +0200, Kevin Wolf wrote:
>Am 14.06.2020 um 20:39 hat Coiby Xu geschrieben:
>> By making use of libvhost-user, block device drive can be shared to
>> the connected vhost-user client. Only one client can connect to the
>> server one time.
>>
>> Since vhost-user-server needs a block drive to be created first, delay
>> the creation of this object.
>>
>> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
>> ---
>>  block/Makefile.objs                  |   1 +
>>  block/export/vhost-user-blk-server.c | 669 +++++++++++++++++++++++++++
>>  block/export/vhost-user-blk-server.h |  35 ++
>>  softmmu/vl.c                         |   4 +
>>  4 files changed, 709 insertions(+)
>>  create mode 100644 block/export/vhost-user-blk-server.c
>>  create mode 100644 block/export/vhost-user-blk-server.h
>>
>> diff --git a/block/Makefile.objs b/block/Makefile.objs
>> index 3635b6b4c1..0eb7eff470 100644
>> --- a/block/Makefile.objs
>> +++ b/block/Makefile.objs
>> @@ -24,6 +24,7 @@ block-obj-y += throttle-groups.o
>>  block-obj-$(CONFIG_LINUX) += nvme.o
>>
>>  block-obj-y += nbd.o
>> +block-obj-$(CONFIG_LINUX) += export/vhost-user-blk-server.o ../contrib/libvhost-user/libvhost-user.o
>>  block-obj-$(CONFIG_SHEEPDOG) += sheepdog.o
>>  block-obj-$(CONFIG_LIBISCSI) += iscsi.o
>>  block-obj-$(if $(CONFIG_LIBISCSI),y,n) += iscsi-opts.o
>> diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
>> new file mode 100644
>> index 0000000000..bbf2ceaa9b
>> --- /dev/null
>> +++ b/block/export/vhost-user-blk-server.c
>> @@ -0,0 +1,669 @@
>> +/*
>> + * Sharing QEMU block devices via vhost-user protocal
>> + *
>> + * Author: Coiby Xu <coiby.xu@gmail.com>
>> + *
>> + * This work is licensed under the terms of the GNU GPL, version 2 or
>> + * later.  See the COPYING file in the top-level directory.
>> + */
>> +#include "qemu/osdep.h"
>> +#include "block/block.h"
>> +#include "vhost-user-blk-server.h"
>> +#include "qapi/error.h"
>> +#include "qom/object_interfaces.h"
>> +#include "sysemu/block-backend.h"
>> +#include "util/block-helpers.h"
>> +
>> +enum {
>> +    VHOST_USER_BLK_MAX_QUEUES = 1,
>> +};
>> +struct virtio_blk_inhdr {
>> +    unsigned char status;
>> +};
>> +
>> +
>> +typedef struct VuBlockReq {
>> +    VuVirtqElement *elem;
>> +    int64_t sector_num;
>> +    size_t size;
>> +    struct virtio_blk_inhdr *in;
>> +    struct virtio_blk_outhdr out;
>> +    VuServer *server;
>> +    struct VuVirtq *vq;
>> +} VuBlockReq;
>> +
>> +
>> +static void vu_block_req_complete(VuBlockReq *req)
>> +{
>> +    VuDev *vu_dev = &req->server->vu_dev;
>> +
>> +    /* IO size with 1 extra status byte */
>> +    vu_queue_push(vu_dev, req->vq, req->elem, req->size + 1);
>> +    vu_queue_notify(vu_dev, req->vq);
>> +
>> +    if (req->elem) {
>> +        free(req->elem);
>> +    }
>> +
>> +    g_free(req);
>> +}
>> +
>> +static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
>> +{
>> +    return container_of(server, VuBlockDev, vu_server);
>> +}
>> +
>> +static int coroutine_fn
>> +vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
>> +                              uint32_t iovcnt, uint32_t type)
>> +{
>> +    struct virtio_blk_discard_write_zeroes desc;
>> +    ssize_t size = iov_to_buf(iov, iovcnt, 0, &desc, sizeof(desc));
>> +    if (unlikely(size != sizeof(desc))) {
>> +        error_report("Invalid size %ld, expect %ld", size, sizeof(desc));
>> +        return -EINVAL;
>> +    }
>> +
>> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
>> +    uint64_t range[2] = { le64_to_cpu(desc.sector) << 9,
>> +                          le32_to_cpu(desc.num_sectors) << 9 };
>> +    if (type == VIRTIO_BLK_T_DISCARD) {
>> +        if (blk_co_pdiscard(vdev_blk->backend, range[0], range[1]) == 0) {
>> +            return 0;
>> +        }
>> +    } else if (type == VIRTIO_BLK_T_WRITE_ZEROES) {
>> +        if (blk_co_pwrite_zeroes(vdev_blk->backend,
>> +                                 range[0], range[1], 0) == 0) {
>> +            return 0;
>> +        }
>> +    }
>> +
>> +    return -EINVAL;
>> +}
>> +
>> +
>> +static void coroutine_fn vu_block_flush(VuBlockReq *req)
>> +{
>> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
>> +    BlockBackend *backend = vdev_blk->backend;
>> +    blk_co_flush(backend);
>> +}
>> +
>> +
>> +struct req_data {
>> +    VuServer *server;
>> +    VuVirtq *vq;
>> +    VuVirtqElement *elem;
>> +};
>> +
>> +static void coroutine_fn vu_block_virtio_process_req(void *opaque)
>> +{
>> +    struct req_data *data = opaque;
>> +    VuServer *server = data->server;
>> +    VuVirtq *vq = data->vq;
>> +    VuVirtqElement *elem = data->elem;
>> +    uint32_t type;
>> +    VuBlockReq *req;
>> +
>> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
>> +    BlockBackend *backend = vdev_blk->backend;
>> +
>> +    struct iovec *in_iov = elem->in_sg;
>> +    struct iovec *out_iov = elem->out_sg;
>> +    unsigned in_num = elem->in_num;
>> +    unsigned out_num = elem->out_num;
>> +    /* refer to hw/block/virtio_blk.c */
>> +    if (elem->out_num < 1 || elem->in_num < 1) {
>> +        error_report("virtio-blk request missing headers");
>> +        free(elem);
>> +        return;
>> +    }
>> +
>> +    req = g_new0(VuBlockReq, 1);
>> +    req->server = server;
>> +    req->vq = vq;
>> +    req->elem = elem;
>> +
>> +    if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
>> +                            sizeof(req->out)) != sizeof(req->out))) {
>> +        error_report("virtio-blk request outhdr too short");
>> +        goto err;
>> +    }
>> +
>> +    iov_discard_front(&out_iov, &out_num, sizeof(req->out));
>> +
>> +    if (in_iov[in_num - 1].iov_len < sizeof(struct virtio_blk_inhdr)) {
>> +        error_report("virtio-blk request inhdr too short");
>> +        goto err;
>> +    }
>> +
>> +    /* We always touch the last byte, so just see how big in_iov is.  */
>> +    req->in = (void *)in_iov[in_num - 1].iov_base
>> +              + in_iov[in_num - 1].iov_len
>> +              - sizeof(struct virtio_blk_inhdr);
>> +    iov_discard_back(in_iov, &in_num, sizeof(struct virtio_blk_inhdr));
>> +
>> +
>> +    type = le32_to_cpu(req->out.type);
>> +    switch (type & ~VIRTIO_BLK_T_BARRIER) {
>> +    case VIRTIO_BLK_T_IN:
>> +    case VIRTIO_BLK_T_OUT: {
>> +        ssize_t ret = 0;
>> +        bool is_write = type & VIRTIO_BLK_T_OUT;
>> +        req->sector_num = le64_to_cpu(req->out.sector);
>> +
>> +        int64_t offset = req->sector_num * vdev_blk->blk_size;
>> +        QEMUIOVector qiov;
>> +        if (is_write) {
>> +            qemu_iovec_init_external(&qiov, out_iov, out_num);
>> +            ret = blk_co_pwritev(backend, offset, qiov.size,
>> +                                 &qiov, 0);
>> +        } else {
>> +            qemu_iovec_init_external(&qiov, in_iov, in_num);
>> +            ret = blk_co_preadv(backend, offset, qiov.size,
>> +                                &qiov, 0);
>> +        }
>> +        if (ret >= 0) {
>> +            req->in->status = VIRTIO_BLK_S_OK;
>> +        } else {
>> +            req->in->status = VIRTIO_BLK_S_IOERR;
>> +        }
>> +        break;
>> +    }
>> +    case VIRTIO_BLK_T_FLUSH:
>> +        vu_block_flush(req);
>> +        req->in->status = VIRTIO_BLK_S_OK;
>> +        break;
>> +    case VIRTIO_BLK_T_GET_ID: {
>> +        size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
>> +                          VIRTIO_BLK_ID_BYTES);
>> +        snprintf(elem->in_sg[0].iov_base, size, "%s", "vhost_user_blk_server");
>> +        req->in->status = VIRTIO_BLK_S_OK;
>> +        req->size = elem->in_sg[0].iov_len;
>> +        break;
>> +    }
>> +    case VIRTIO_BLK_T_DISCARD:
>> +    case VIRTIO_BLK_T_WRITE_ZEROES: {
>> +        int rc;
>> +        rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
>> +                                           out_num, type);
>> +        if (rc == 0) {
>> +            req->in->status = VIRTIO_BLK_S_OK;
>> +        } else {
>> +            req->in->status = VIRTIO_BLK_S_IOERR;
>> +        }
>> +        break;
>> +    }
>> +    default:
>> +        req->in->status = VIRTIO_BLK_S_UNSUPP;
>> +        break;
>> +    }
>> +
>> +    vu_block_req_complete(req);
>> +    return;
>> +
>> +err:
>> +    free(elem);
>> +    g_free(req);
>> +    return;
>> +}
>> +
>> +
>> +
>> +static void vu_block_process_vq(VuDev *vu_dev, int idx)
>> +{
>> +    VuServer *server;
>> +    VuVirtq *vq;
>> +
>> +    server = container_of(vu_dev, VuServer, vu_dev);
>> +    assert(server);
>> +
>> +    vq = vu_get_queue(vu_dev, idx);
>> +    assert(vq);
>> +    VuVirtqElement *elem;
>> +    while (1) {
>> +        elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
>> +                                    sizeof(VuBlockReq));
>> +        if (elem) {
>> +            struct req_data req_data = {
>> +                .server = server,
>> +                .vq = vq,
>> +                .elem = elem
>> +            };
>
>This is on the stack of the function.
>
>> +            Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
>> +                                                  &req_data);
>> +            aio_co_enter(server->ioc->ctx, co);
>
>Therefore, this code is only correct, if co accesses the data only while
>the function has not returned yet.
>
>This function is called in the context of vu_dispatch(), which in turn
>is called from vu_client_trip(). So we already run in a coroutine. In
>this case, aio_co_enter() only schedules co to run after the current
>coroutine yields or terminates. In other words, this looks wrong to me
>because req_data will be accessed when it's long out of scope.
>
>I think we need to malloc it.

vu_dispatch is only for reading vhost-user message. This function is called
by the kick handler which is no longer run as a coroutine since I
followed the advice to make better use of contrib/libvhost. Although
no error has appeared or been reported by address sanitizer with
--enable-sanitizers, using malloc is the correct way. I've fixed it in
v10.

>
>> +        } else {
>> +            break;
>> +        }
>> +    }
>> +}
>> +
>> +static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
>> +{
>> +    VuVirtq *vq;
>> +
>> +    assert(vu_dev);
>> +
>> +    vq = vu_get_queue(vu_dev, idx);
>> +    vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
>> +}
>> +
>> +static uint64_t vu_block_get_features(VuDev *dev)
>> +{
>> +    uint64_t features;
>> +    VuServer *server = container_of(dev, VuServer, vu_dev);
>> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
>> +    features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
>> +               1ull << VIRTIO_BLK_F_SEG_MAX |
>> +               1ull << VIRTIO_BLK_F_TOPOLOGY |
>> +               1ull << VIRTIO_BLK_F_BLK_SIZE |
>> +               1ull << VIRTIO_BLK_F_FLUSH |
>> +               1ull << VIRTIO_BLK_F_DISCARD |
>> +               1ull << VIRTIO_BLK_F_WRITE_ZEROES |
>> +               1ull << VIRTIO_BLK_F_CONFIG_WCE |
>> +               1ull << VIRTIO_F_VERSION_1 |
>> +               1ull << VIRTIO_RING_F_INDIRECT_DESC |
>> +               1ull << VIRTIO_RING_F_EVENT_IDX |
>> +               1ull << VHOST_USER_F_PROTOCOL_FEATURES;
>> +
>> +    if (!vdev_blk->writable) {
>> +        features |= 1ull << VIRTIO_BLK_F_RO;
>> +    }
>> +
>> +    return features;
>> +}
>> +
>> +static uint64_t vu_block_get_protocol_features(VuDev *dev)
>> +{
>> +    return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
>> +           1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
>> +}
>> +
>> +static int
>> +vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
>> +{
>> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
>> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
>> +    memcpy(config, &vdev_blk->blkcfg, len);
>> +
>> +    return 0;
>> +}
>> +
>> +static int
>> +vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
>> +                    uint32_t offset, uint32_t size, uint32_t flags)
>> +{
>> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
>> +    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
>> +    uint8_t wce;
>> +
>> +    /* don't support live migration */
>> +    if (flags != VHOST_SET_CONFIG_TYPE_MASTER) {
>> +        return -EINVAL;
>> +    }
>> +
>> +
>> +    if (offset != offsetof(struct virtio_blk_config, wce) ||
>> +        size != 1) {
>> +        return -EINVAL;
>> +    }
>> +
>> +    wce = *data;
>> +    if (wce == vdev_blk->blkcfg.wce) {
>> +        /* Do nothing as same with old configuration */
>> +        return 0;
>> +    }
>
>This check is unnecessary. Nothing bad happens if you set the same value
>again.

Yes, removing it also simplifies the code.

>
>> +    vdev_blk->blkcfg.wce = wce;
>> +    blk_set_enable_write_cache(vdev_blk->backend, wce);
>> +    return 0;
>> +}
>> +
>> +
>> +/*
>> + * When the client disconnects, it sends a VHOST_USER_NONE request
>> + * and vu_process_message will simple call exit which cause the VM
>> + * to exit abruptly.
>> + * To avoid this issue,  process VHOST_USER_NONE request ahead
>> + * of vu_process_message.
>> + *
>> + */
>> +static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
>> +{
>> +    if (vmsg->request == VHOST_USER_NONE) {
>> +        dev->panic(dev, "disconnect");
>> +        return true;
>> +    }
>> +    return false;
>> +}
>> +
>> +
>> +static const VuDevIface vu_block_iface = {
>> +    .get_features          = vu_block_get_features,
>> +    .queue_set_started     = vu_block_queue_set_started,
>> +    .get_protocol_features = vu_block_get_protocol_features,
>> +    .get_config            = vu_block_get_config,
>> +    .set_config            = vu_block_set_config,
>> +    .process_msg           = vu_block_process_msg,
>> +};
>> +
>> +static void blk_aio_attached(AioContext *ctx, void *opaque)
>> +{
>> +    VuBlockDev *vub_dev = opaque;
>> +    aio_context_acquire(ctx);
>> +    vhost_user_server_set_aio_context(ctx, &vub_dev->vu_server);
>> +    aio_context_release(ctx);
>> +}
>> +
>> +static void blk_aio_detach(void *opaque)
>> +{
>> +    VuBlockDev *vub_dev = opaque;
>> +    AioContext *ctx = vub_dev->vu_server.ctx;
>> +    aio_context_acquire(ctx);
>> +    vhost_user_server_set_aio_context(NULL, &vub_dev->vu_server);
>> +    aio_context_release(ctx);
>> +}
>> +
>> +
>> +static void
>> +vu_block_initialize_config(BlockDriverState *bs,
>> +                           struct virtio_blk_config *config, uint32_t blk_size)
>> +{
>> +    config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
>> +    config->blk_size = blk_size;
>> +    config->size_max = 0;
>> +    config->seg_max = 128 - 2;
>> +    config->min_io_size = 1;
>> +    config->opt_io_size = 1;
>> +    config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
>> +    config->max_discard_sectors = 32768;
>> +    config->max_discard_seg = 1;
>> +    config->discard_sector_alignment = config->blk_size >> 9;
>> +    config->max_write_zeroes_sectors = 32768;
>> +    config->max_write_zeroes_seg = 1;
>> +}
>> +
>> +
>> +static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
>> +{
>> +
>> +    BlockBackend *blk;
>> +    Error *local_error = NULL;
>> +    const char *node_name = vu_block_device->node_name;
>> +    bool writable = vu_block_device->writable;
>> +    /*
>> +     * Don't allow resize while the vhost user server is running,
>> +     * otherwise we don't care what happens with the node.
>> +     */
>
>I think this comment belong to the blk_new() below where the shared
>permissions are specified.

Yes, it makes more sense to put the comment right above blk_new)_.

>
>> +    uint64_t perm = BLK_PERM_CONSISTENT_READ;
>> +    int ret;
>> +
>> +    AioContext *ctx;
>> +
>> +    BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
>> +
>> +    if (!bs) {
>> +        error_propagate(errp, local_error);
>> +        return NULL;
>> +    }
>> +
>> +    if (bdrv_is_read_only(bs)) {
>> +        writable = false;
>> +    }
>> +
>> +    if (writable) {
>> +        perm |= BLK_PERM_WRITE;
>> +    }
>> +
>> +    ctx = bdrv_get_aio_context(bs);
>> +    aio_context_acquire(ctx);
>> +    bdrv_invalidate_cache(bs, NULL);
>> +    aio_context_release(ctx);
>> +
>> +    blk = blk_new(bdrv_get_aio_context(bs), perm,
>> +                  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
>> +                  BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
>> +    ret = blk_insert_bs(blk, bs, errp);
>> +
>> +    if (ret < 0) {
>> +        goto fail;
>> +    }
>> +
>> +    blk_set_enable_write_cache(blk, false);
>> +
>> +    blk_set_allow_aio_context_change(blk, true);
>> +
>> +    vu_block_device->blkcfg.wce = 0;
>> +    vu_block_device->backend = blk;
>> +    if (!vu_block_device->blk_size) {
>> +        vu_block_device->blk_size = BDRV_SECTOR_SIZE;
>> +    }
>> +    vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
>> +    blk_set_guest_block_size(blk, vu_block_device->blk_size);
>> +    vu_block_initialize_config(bs, &vu_block_device->blkcfg,
>> +                                   vu_block_device->blk_size);
>> +    return vu_block_device;
>> +
>> +fail:
>> +    blk_unref(blk);
>> +    return NULL;
>> +}
>> +
>> +static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
>> +{
>> +    if (!vu_block_device) {
>> +        return;
>> +    }
>> +
>> +    vhost_user_server_stop(&vu_block_device->vu_server);
>> +
>> +    if (vu_block_device->backend) {
>> +        blk_remove_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
>> +                                        blk_aio_detach, vu_block_device);
>> +    }
>> +
>> +    blk_unref(vu_block_device->backend);
>> +
>> +}
>> +
>> +
>> +static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
>> +                                        Error **errp)
>> +{
>> +    SocketAddress *addr = vu_block_device->addr;
>> +
>> +    if (!vu_block_init(vu_block_device, errp)) {
>> +        return;
>> +    }
>> +
>> +    AioContext *ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
>
>Please move declarations to the top of the function.

Been fixed in v10.

>
>> +    if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
>> +                                 VHOST_USER_BLK_MAX_QUEUES,
>> +                                 NULL, &vu_block_iface,
>> +                                 errp)) {
>> +        goto error;
>> +    }
>> +
>> +    blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
>> +                                 blk_aio_detach, vu_block_device);
>> +    vu_block_device->running = true;
>> +    return;
>> +
>> + error:
>> +    vhost_user_blk_server_stop(vu_block_device);
>
>vu_block_device hasn't been fully set up. You need to undo only
>vu_block_init(). You must not call vhost_user_server_stop().
>
>> +}
>> +
>> +static bool vu_prop_modificable(VuBlockDev *vus, Error **errp)
>
>The word is "modifiable".

Thank you for correcting my spelling!

>
>> +{
>> +    if (vus->running) {
>> +            error_setg(errp, "The property can't be modified "
>> +                    "while the server is running");
>> +            return false;
>
>The indentation is off here.
>
>> +    }
>> +    return true;
>> +}
>> +static void vu_set_node_name(Object *obj, const char *value, Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +
>> +    if (vus->node_name) {
>> +        if (!vu_prop_modificable(vus, errp)) {
>> +            return;
>> +        }
>
>Why don't we need to check vu_prop_modificable() when the property isn't
>set yet? I assume it's because the server can't even be started without
>a node name, but it would be more obviously correct if the check were
>done unconditionally.
>
>> +        g_free(vus->node_name);
>> +    }
>> +
>> +    vus->node_name = g_strdup(value);
>> +}
>> +
>> +static char *vu_get_node_name(Object *obj, Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +    return g_strdup(vus->node_name);
>> +}
>> +
>> +
>> +static void vu_set_unix_socket(Object *obj, const char *value,
>> +                               Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +
>> +    if (vus->addr) {
>> +        if (!vu_prop_modificable(vus, errp)) {
>> +            return;
>> +        }
>
>Same here.

It makes the code more readable. Thank you!

>
>> +        g_free(vus->addr->u.q_unix.path);
>> +        g_free(vus->addr);
>> +    }
>> +
>> +    SocketAddress *addr = g_new0(SocketAddress, 1);
>> +    addr->type = SOCKET_ADDRESS_TYPE_UNIX;
>> +    addr->u.q_unix.path = g_strdup(value);
>> +    vus->addr = addr;
>> +}
>> +
>> +static char *vu_get_unix_socket(Object *obj, Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +    return g_strdup(vus->addr->u.q_unix.path);
>> +}
>> +
>> +static bool vu_get_block_writable(Object *obj, Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +    return vus->writable;
>> +}
>> +
>> +static void vu_set_block_writable(Object *obj, bool value, Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +
>> +    if (!vu_prop_modificable(vus, errp)) {
>> +            return;
>> +    }
>> +
>> +    vus->writable = value;
>> +}
>> +
>> +static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
>> +                            void *opaque, Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +    uint32_t value = vus->blk_size;
>> +
>> +    visit_type_uint32(v, name, &value, errp);
>> +}
>> +
>> +static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
>> +                            void *opaque, Error **errp)
>> +{
>> +    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
>> +
>> +    Error *local_err = NULL;
>> +    uint32_t value;
>> +
>> +    if (!vu_prop_modificable(vus, errp)) {
>> +            return;
>> +    }
>> +
>> +    visit_type_uint32(v, name, &value, &local_err);
>> +    if (local_err) {
>> +        goto out;
>> +    }
>> +
>> +    check_logical_block_size(object_get_typename(obj), name, value, &local_err);
>> +    if (local_err) {
>> +        goto out;
>> +    }
>> +
>> +    vus->blk_size = value;
>> +
>> +out:
>> +    error_propagate(errp, local_err);
>> +    vus->blk_size = value;
>
>Surely you don't want to set the value here, when some check failed?

Yes, I must have left this line of code here by mistake.

>
>> +}
>> +
>> +
>> +static void vhost_user_blk_server_instance_finalize(Object *obj)
>> +{
>> +    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
>> +
>> +    vhost_user_blk_server_stop(vub);
>> +}
>> +
>> +static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
>> +{
>> +    Error *local_error = NULL;
>> +    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
>> +
>> +    vhost_user_blk_server_start(vub, &local_error);
>> +
>> +    if (local_error) {
>> +        error_propagate(errp, local_error);
>> +        return;
>> +    }
>
>If you don't do anything with local_error (which is named inconsistently
>with local_err used above), you can just directly pass errp to
>vhost_user_blk_server_start().
>
>> +}
>> +
>> +static void vhost_user_blk_server_class_init(ObjectClass *klass,
>> +                                             void *class_data)
>> +{
>> +    UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
>> +    ucc->complete = vhost_user_blk_server_complete;
>> +
>> +    object_class_property_add_bool(klass, "writable",
>> +                                   vu_get_block_writable,
>> +                                   vu_set_block_writable);
>> +
>> +    object_class_property_add_str(klass, "node-name",
>> +                                  vu_get_node_name,
>> +                                  vu_set_node_name);
>> +
>> +    object_class_property_add_str(klass, "unix-socket",
>> +                                  vu_get_unix_socket,
>> +                                  vu_set_unix_socket);
>> +
>> +    object_class_property_add(klass, "logical-block-size", "uint32",
>> +                              vu_get_blk_size, vu_set_blk_size,
>> +                              NULL, NULL);
>> +}
>> +
>> +static const TypeInfo vhost_user_blk_server_info = {
>> +    .name = TYPE_VHOST_USER_BLK_SERVER,
>> +    .parent = TYPE_OBJECT,
>> +    .instance_size = sizeof(VuBlockDev),
>> +    .instance_finalize = vhost_user_blk_server_instance_finalize,
>> +    .class_init = vhost_user_blk_server_class_init,
>> +    .interfaces = (InterfaceInfo[]) {
>> +        {TYPE_USER_CREATABLE},
>> +        {}
>> +    },
>> +};
>> +
>> +static void vhost_user_blk_server_register_types(void)
>> +{
>> +    type_register_static(&vhost_user_blk_server_info);
>> +}
>> +
>
>Please remove the trailing empty line.
>
>Compared to the last version that I reviewed, this seems to get the
>architecture for concurrent requests right, which is an important
>improvement. I feel we're getting quite close to mergable now.
>
>Kevin
>
Thank you for the guidance and helpful feedbacks along the way:)


--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h>
  2020-06-19 12:00     ` [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h> Stefan Hajnoczi
@ 2020-08-17 12:49       ` Coiby Xu
  2020-08-18 15:11         ` Stefan Hajnoczi
  0 siblings, 1 reply; 51+ messages in thread
From: Coiby Xu @ 2020-08-17 12:49 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kwolf, bharatlkmlkvm, qemu-devel

On Fri, Jun 19, 2020 at 01:00:42PM +0100, Stefan Hajnoczi wrote:
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>---
> util/vhost-user-server.c | 1 -
> 1 file changed, 1 deletion(-)
>
>diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
>index e94a8d8a83..49ada8bc78 100644
>--- a/util/vhost-user-server.c
>+++ b/util/vhost-user-server.c
>@@ -7,7 +7,6 @@
>  * later.  See the COPYING file in the top-level directory.
>  */
> #include "qemu/osdep.h"
>-#include <sys/eventfd.h>
> #include "qemu/main-loop.h"
> #include "vhost-user-server.h"
>
>--
>2.26.2
>

All the patches have been applied to v10. I'm curious how do you find
this issue. Is there a tool to detect this issue or simply you are so
familiar with the QEMU code that you can spot it very easily?

--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h>
  2020-08-17 12:49       ` Coiby Xu
@ 2020-08-18 15:11         ` Stefan Hajnoczi
  0 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-08-18 15:11 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1120 bytes --]

On Mon, Aug 17, 2020 at 08:49:27PM +0800, Coiby Xu wrote:
> On Fri, Jun 19, 2020 at 01:00:42PM +0100, Stefan Hajnoczi wrote:
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> > util/vhost-user-server.c | 1 -
> > 1 file changed, 1 deletion(-)
> > 
> > diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
> > index e94a8d8a83..49ada8bc78 100644
> > --- a/util/vhost-user-server.c
> > +++ b/util/vhost-user-server.c
> > @@ -7,7 +7,6 @@
> >  * later.  See the COPYING file in the top-level directory.
> >  */
> > #include "qemu/osdep.h"
> > -#include <sys/eventfd.h>
> > #include "qemu/main-loop.h"
> > #include "vhost-user-server.h"
> > 
> > --
> > 2.26.2
> > 
> 
> All the patches have been applied to v10. I'm curious how do you find
> this issue. Is there a tool to detect this issue or simply you are so
> familiar with the QEMU code that you can spot it very easily?

No, I didn't use a tool.

When looking at the code I wondered if the #include was really
necessary. So I deleted the #include and recompiled to check that the
build still works.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
                   ` (7 preceding siblings ...)
  2020-06-19 12:07 ` Stefan Hajnoczi
@ 2020-08-18 15:13 ` Stefan Hajnoczi
  2020-09-15 15:35 ` Stefan Hajnoczi
  9 siblings, 0 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-08-18 15:13 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 281 bytes --]

On Mon, Jun 15, 2020 at 02:39:02AM +0800, Coiby Xu wrote:
> v9
>  - move logical block size check function to a utility function
>  - fix issues regarding license, coding style, memory deallocation, etc.

Great to see you are back, Coiby! Looking forward to reviewing v10.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
                   ` (8 preceding siblings ...)
  2020-08-18 15:13 ` Stefan Hajnoczi
@ 2020-09-15 15:35 ` Stefan Hajnoczi
  2020-09-18  8:13   ` Coiby Xu
  9 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2020-09-15 15:35 UTC (permalink / raw)
  To: Coiby Xu; +Cc: kwolf, bharatlkmlkvm, qemu-devel, stefanha

[-- Attachment #1: Type: text/plain, Size: 294 bytes --]

On Mon, Jun 15, 2020 at 02:39:02AM +0800, Coiby Xu wrote:
> v9
>  - move logical block size check function to a utility function
>  - fix issues regarding license, coding style, memory deallocation, etc.

Hi,
Any update on v10?

Please let me know if there's anything I can do to help.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v9 0/5] vhost-user block device backend implementation
  2020-09-15 15:35 ` Stefan Hajnoczi
@ 2020-09-18  8:13   ` Coiby Xu
  0 siblings, 0 replies; 51+ messages in thread
From: Coiby Xu @ 2020-09-18  8:13 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kwolf, bharatlkmlkvm, qemu-devel, stefanha

On Tue, Sep 15, 2020 at 04:35:57PM +0100, Stefan Hajnoczi wrote:
>On Mon, Jun 15, 2020 at 02:39:02AM +0800, Coiby Xu wrote:
>> v9
>>  - move logical block size check function to a utility function
>>  - fix issues regarding license, coding style, memory deallocation, etc.
>
>Hi,
>Any update on v10?
>
>Please let me know if there's anything I can do to help.
>
>Stefan

Thank you for ping me! v10 has been submitted.


--
Best regards,
Coiby


^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2020-09-18  8:21 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-14 18:39 [PATCH v9 0/5] vhost-user block device backend implementation Coiby Xu
2020-06-14 18:39 ` [PATCH v9 1/5] Allow vu_message_read to be replaced Coiby Xu
2020-06-18 10:43   ` Kevin Wolf
2020-06-24  3:36     ` Coiby Xu
2020-06-24 12:24       ` Kevin Wolf
2020-06-14 18:39 ` [PATCH v9 2/5] generic vhost user server Coiby Xu
2020-06-18 13:29   ` Kevin Wolf
2020-08-17  8:59     ` Coiby Xu
2020-06-19 12:00   ` [PATCH 1/6] vhost-user-server: fix VHOST_MEMORY_MAX_REGIONS compiler error Stefan Hajnoczi
2020-06-19 12:00     ` [PATCH 2/6] vhost-user-server: drop unused #include <eventfd.h> Stefan Hajnoczi
2020-08-17 12:49       ` Coiby Xu
2020-08-18 15:11         ` Stefan Hajnoczi
2020-06-19 12:00     ` [PATCH 3/6] vhost-user-server: adjust vhost_user_server_set_aio_context() arguments Stefan Hajnoczi
2020-06-19 12:00     ` [PATCH 4/6] vhost-user-server: mark fd handlers "external" Stefan Hajnoczi
2020-06-19 12:00     ` [PATCH 5/6] vhost-user-server: fix s/initialized/initialize/ typo Stefan Hajnoczi
2020-06-19 12:00     ` [PATCH 6/6] vhost-user-server: use DevicePanicNotifierFn everywhere Stefan Hajnoczi
2020-06-19 12:13   ` [PATCH v9 2/5] generic vhost user server Stefan Hajnoczi
2020-08-17  8:24     ` Coiby Xu
2020-06-14 18:39 ` [PATCH v9 3/5] move logical block size check function to a common utility function Coiby Xu
2020-06-18 13:44   ` Kevin Wolf
2020-06-19 12:01   ` [PATCH 1/6] block-helpers: move MIN/MAX_BLOCK_SIZE constants into header file Stefan Hajnoczi
2020-06-19 12:01     ` [PATCH 2/6] block-helpers: switch to int64_t block size values Stefan Hajnoczi
2020-06-19 12:01     ` [PATCH 3/6] block-helpers: rename check_logical_block_size() to check_block_size() Stefan Hajnoczi
2020-06-19 12:01     ` [PATCH 4/6] block-helpers: use local_err in case errp is NULL Stefan Hajnoczi
2020-06-19 12:01     ` [PATCH 5/6] block-helpers: keep the copyright line from the original file Stefan Hajnoczi
2020-06-19 12:01     ` [PATCH 6/6] block-helpers: update doc comment in gtkdoc style Stefan Hajnoczi
2020-06-14 18:39 ` [PATCH v9 4/5] vhost-user block device backend server Coiby Xu
2020-06-18 15:57   ` Kevin Wolf
2020-08-17 12:30     ` Coiby Xu
2020-06-19 12:03   ` [PATCH 1/2] vhost-user-blk-server: adjust vhost_user_server_set_aio_context() arguments Stefan Hajnoczi
2020-06-19 12:03     ` [PATCH 2/2] vhost-user-blk-server: rename check_logical_block_size() to check_block_size() Stefan Hajnoczi
2020-06-14 18:39 ` [PATCH v9 5/5] new qTest case to test the vhost-user-blk-server Coiby Xu
2020-06-18 15:17   ` Stefan Hajnoczi
2020-06-24  4:35     ` Coiby Xu
2020-06-24 10:49       ` Stefan Hajnoczi
2020-06-24 15:14   ` Thomas Huth
2020-08-17  8:16     ` Coiby Xu
2020-06-14 19:12 ` [PATCH v9 0/5] vhost-user block device backend implementation no-reply
2020-06-14 19:16 ` no-reply
2020-06-16  6:52   ` Coiby Xu
2020-06-18  8:27     ` Stefan Hajnoczi
2020-06-24  4:00       ` Coiby Xu
2020-06-18  8:28     ` Stefan Hajnoczi
2020-08-17  8:23       ` Coiby Xu
2020-06-19 12:07 ` Stefan Hajnoczi
2020-06-24  4:48   ` Coiby Xu
2020-06-25 12:46   ` Coiby Xu
2020-06-26 15:46     ` Stefan Hajnoczi
2020-08-18 15:13 ` Stefan Hajnoczi
2020-09-15 15:35 ` Stefan Hajnoczi
2020-09-18  8:13   ` Coiby Xu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).