* [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests
@ 2020-10-27 17:35 Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 01/12] libvhost-user: follow QEMU comment style Stefan Hajnoczi
                   ` (12 more replies)
  0 siblings, 13 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

This patch series fixes several issues with the new vhost-user-blk server
and adds its qtest test case. The test case was not included in the pull
request that introduced the vhost-user-blk server because of reliability
issues, which this series addresses.

Coiby Xu (1):
  test: new qTest case to test the vhost-user-blk-server

Stefan Hajnoczi (11):
  libvhost-user: follow QEMU comment style
  configure: introduce --enable-vhost-user-blk-server
  block/export: make vhost-user-blk config space little-endian
  block/export: fix vhost-user-blk get_config() information leak
  contrib/vhost-user-blk: fix get_config() information leak
  tests/qtest: add multi-queue test case to vhost-user-blk-test
  libqtest: add qtest_socket_server()
  vhost-user-blk-test: rename destroy_drive() to destroy_file()
  vhost-user-blk-test: close fork child file descriptors
  vhost-user-blk-test: drop unused return value
  vhost-user-blk-test: fix races by using fd passing

 configure                               |  15 +
 contrib/libvhost-user/libvhost-user.h   |  15 +-
 tests/qtest/libqos/libqtest.h           |  25 +
 tests/qtest/libqos/vhost-user-blk.h     |  48 ++
 block/export/export.c                   |   4 +-
 block/export/vhost-user-blk-server.c    |  28 +-
 contrib/vhost-user-blk/vhost-user-blk.c |   2 +
 tests/qtest/libqos/vhost-user-blk.c     | 129 ++++
 tests/qtest/libqtest.c                  |  76 ++-
 tests/qtest/vhost-user-blk-test.c       | 843 ++++++++++++++++++++++++
 block/export/meson.build                |   2 +-
 tests/qtest/libqos/meson.build          |   1 +
 tests/qtest/meson.build                 |   2 +
 util/meson.build                        |   2 +-
 14 files changed, 1151 insertions(+), 41 deletions(-)
 create mode 100644 tests/qtest/libqos/vhost-user-blk.h
 create mode 100644 tests/qtest/libqos/vhost-user-blk.c
 create mode 100644 tests/qtest/vhost-user-blk-test.c

-- 
2.26.2


* [PATCH 01/12] libvhost-user: follow QEMU comment style
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 02/12] configure: introduce --enable-vhost-user-blk-server Stefan Hajnoczi
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 contrib/libvhost-user/libvhost-user.h | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index 3bbeae8587..a1539dbb69 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -392,7 +392,8 @@ struct VuDev {
     bool broken;
     uint16_t max_queues;
 
-    /* @read_msg: custom method to read vhost-user message
+    /*
+     * @read_msg: custom method to read vhost-user message
      *
      * Read data from vhost_user socket fd and fill up
      * the passed VhostUserMsg *vmsg struct.
@@ -409,15 +410,19 @@ struct VuDev {
      *
      */
     vu_read_msg_cb read_msg;
-    /* @set_watch: add or update the given fd to the watch set,
-     * call cb when condition is met */
+
+    /*
+     * @set_watch: add or update the given fd to the watch set,
+     * call cb when condition is met.
+     */
     vu_set_watch_cb set_watch;
 
     /* @remove_watch: remove the given fd from the watch set */
     vu_remove_watch_cb remove_watch;
 
-    /* @panic: encountered an unrecoverable error, you may try to
-     * re-initialize */
+    /*
+     * @panic: encountered an unrecoverable error, you may try to re-initialize
+     */
     vu_panic_cb panic;
     const VuDevIface *iface;
 
-- 
2.26.2


* [PATCH 02/12] configure: introduce --enable-vhost-user-blk-server
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 01/12] libvhost-user: follow QEMU comment style Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 03/12] block/export: make vhost-user-blk config space little-endian Stefan Hajnoczi
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

Make it possible to compile out the vhost-user-blk server. It is enabled
by default on Linux.

Note that vhost-user-server.c depends on libvhost-user, which requires
CONFIG_LINUX. The CONFIG_VHOST_USER dependency was erroneous since that
option controls vhost-user frontends (previously known as "master") and
not device backends (previously known as "slave").

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 configure                | 15 +++++++++++++++
 block/export/export.c    |  4 ++--
 block/export/meson.build |  2 +-
 util/meson.build         |  2 +-
 4 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/configure b/configure
index 55e07c82dd..b455ca8c7f 100755
--- a/configure
+++ b/configure
@@ -328,6 +328,7 @@ vhost_crypto=""
 vhost_scsi=""
 vhost_vsock=""
 vhost_user=""
+vhost_user_blk_server=""
 vhost_user_fs=""
 kvm="auto"
 hax="auto"
@@ -1240,6 +1241,10 @@ for opt do
   ;;
   --enable-vhost-vsock) vhost_vsock="yes"
   ;;
+  --disable-vhost-user-blk-server) vhost_user_blk_server="no"
+  ;;
+  --enable-vhost-user-blk-server) vhost_user_blk_server="yes"
+  ;;
   --disable-vhost-user-fs) vhost_user_fs="no"
   ;;
   --enable-vhost-user-fs) vhost_user_fs="yes"
@@ -1784,6 +1789,7 @@ disabled with --disable-FEATURE, default is enabled if available:
   vhost-crypto    vhost-user-crypto backend support
   vhost-kernel    vhost kernel backend support
   vhost-user      vhost-user backend support
+  vhost-user-blk-server    vhost-user-blk server support
   vhost-vdpa      vhost-vdpa kernel backend support
   spice           spice
   rbd             rados block device (rbd)
@@ -2375,6 +2381,12 @@ if test "$vhost_net" = ""; then
   test "$vhost_kernel" = "yes" && vhost_net=yes
 fi
 
+# libvhost-user is Linux-only
+test "$vhost_user_blk_server" = "" && vhost_user_blk_server=$linux
+if test "$vhost_user_blk_server" = "yes" && test "$linux" = "no"; then
+  error_exit "--enable-vhost-user-blk-server is only available on Linux"
+fi
+
 ##########################################
 # pkg-config probe
 
@@ -6260,6 +6272,9 @@ fi
 if test "$vhost_vdpa" = "yes" ; then
   echo "CONFIG_VHOST_VDPA=y" >> $config_host_mak
 fi
+if test "$vhost_user_blk_server" = "yes" ; then
+  echo "CONFIG_VHOST_USER_BLK_SERVER=y" >> $config_host_mak
+fi
 if test "$vhost_user_fs" = "yes" ; then
   echo "CONFIG_VHOST_USER_FS=y" >> $config_host_mak
 fi
diff --git a/block/export/export.c b/block/export/export.c
index c3478c6c97..bad6f21b1c 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -22,13 +22,13 @@
 #include "qapi/qapi-commands-block-export.h"
 #include "qapi/qapi-events-block-export.h"
 #include "qemu/id.h"
-#if defined(CONFIG_LINUX) && defined(CONFIG_VHOST_USER)
+#ifdef CONFIG_VHOST_USER_BLK_SERVER
 #include "vhost-user-blk-server.h"
 #endif
 
 static const BlockExportDriver *blk_exp_drivers[] = {
     &blk_exp_nbd,
-#if defined(CONFIG_LINUX) && defined(CONFIG_VHOST_USER)
+#ifdef CONFIG_VHOST_USER_BLK_SERVER
     &blk_exp_vhost_user_blk,
 #endif
 };
diff --git a/block/export/meson.build b/block/export/meson.build
index 9fb4fbf81d..19526435d8 100644
--- a/block/export/meson.build
+++ b/block/export/meson.build
@@ -1,2 +1,2 @@
 blockdev_ss.add(files('export.c'))
-blockdev_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: files('vhost-user-blk-server.c'))
+blockdev_ss.add(when: 'CONFIG_VHOST_USER_BLK_SERVER', if_true: files('vhost-user-blk-server.c'))
diff --git a/util/meson.build b/util/meson.build
index c5159ad79d..f359af0d46 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -66,7 +66,7 @@ if have_block
   util_ss.add(files('main-loop.c'))
   util_ss.add(files('nvdimm-utils.c'))
   util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
-  util_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: [
+  util_ss.add(when: 'CONFIG_LINUX', if_true: [
     files('vhost-user-server.c'), vhost_user
   ])
   util_ss.add(files('block-helpers.c'))
-- 
2.26.2


* [PATCH 03/12] block/export: make vhost-user-blk config space little-endian
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 01/12] libvhost-user: follow QEMU comment style Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 02/12] configure: introduce --enable-vhost-user-blk-server Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 04/12] block/export: fix vhost-user-blk get_config() information leak Stefan Hajnoczi
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

VIRTIO 1.0 devices have little-endian configuration space. The
vhost-user-blk-server.c code already uses little-endian for virtqueue
processing but not for the configuration space fields. Fix this so the
vhost-user-blk export works on big-endian hosts.
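
A minimal sketch (not part of this patch; function names are illustrative)
of the byte-order contract being enforced: fields are stored with
cpu_to_le*() so the bytes returned by get_config() are what a VIRTIO 1.0
guest expects, and any host-side use of a stored field converts back with
le*_to_cpu().

    /* Illustrative only; uses QEMU's bswap helpers and the standard header */
    #include "qemu/osdep.h"
    #include "qemu/bswap.h"
    #include "standard-headers/linux/virtio_blk.h"

    static void example_fill_config(struct virtio_blk_config *cfg,
                                    uint64_t nb_sectors, uint32_t blk_size)
    {
        /* Stored little-endian: this is what the guest reads via get_config() */
        cfg->capacity = cpu_to_le64(nb_sectors);
        cfg->blk_size = cpu_to_le32(blk_size);
    }

    static uint32_t example_native_blk_size(const struct virtio_blk_config *cfg)
    {
        /* Convert back before doing host-side arithmetic on a stored field */
        return le32_to_cpu(cfg->blk_size);
    }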

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 41f4933d6e..33cc0818b8 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -264,7 +264,6 @@ static uint64_t vu_blk_get_protocol_features(VuDev *dev)
 static int
 vu_blk_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
 {
-    /* TODO blkcfg must be little-endian for VIRTIO 1.0 */
     VuServer *server = container_of(vu_dev, VuServer, vu_dev);
     VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
     memcpy(config, &vexp->blkcfg, len);
@@ -343,18 +342,18 @@ vu_blk_initialize_config(BlockDriverState *bs,
                          uint32_t blk_size,
                          uint16_t num_queues)
 {
-    config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
-    config->blk_size = blk_size;
-    config->size_max = 0;
-    config->seg_max = 128 - 2;
-    config->min_io_size = 1;
-    config->opt_io_size = 1;
-    config->num_queues = num_queues;
-    config->max_discard_sectors = 32768;
-    config->max_discard_seg = 1;
-    config->discard_sector_alignment = config->blk_size >> 9;
-    config->max_write_zeroes_sectors = 32768;
-    config->max_write_zeroes_seg = 1;
+    config->capacity = cpu_to_le64(bdrv_getlength(bs) >> BDRV_SECTOR_BITS);
+    config->blk_size = cpu_to_le32(blk_size);
+    config->size_max = cpu_to_le32(0);
+    config->seg_max = cpu_to_le32(128 - 2);
+    config->min_io_size = cpu_to_le16(1);
+    config->opt_io_size = cpu_to_le32(1);
+    config->num_queues = cpu_to_le16(num_queues);
+    config->max_discard_sectors = cpu_to_le32(32768);
+    config->max_discard_seg = cpu_to_le32(1);
+    config->discard_sector_alignment = cpu_to_le32(config->blk_size >> 9);
+    config->max_write_zeroes_sectors = cpu_to_le32(32768);
+    config->max_write_zeroes_seg = cpu_to_le32(1);
 }
 
 static void vu_blk_exp_request_shutdown(BlockExport *exp)
-- 
2.26.2


* [PATCH 04/12] block/export: fix vhost-user-blk get_config() information leak
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 03/12] block/export: make vhost-user-blk config space little-endian Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 05/12] contrib/vhost-user-blk: fix " Stefan Hajnoczi
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

Refuse get_config() requests in excess of sizeof(struct virtio_blk_config).
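
A short sketch (not the server code itself; names are illustrative) of why
the bound matters: the length comes from the vhost-user frontend, and an
unchecked memcpy() of that length would copy bytes adjacent to blkcfg back
to the frontend.

    #include <string.h>
    #include <glib.h>
    #include "standard-headers/linux/virtio_blk.h"

    static int example_get_config(const struct virtio_blk_config *blkcfg,
                                  uint8_t *out, uint32_t len)
    {
        /* Reject requests larger than the real config structure... */
        g_return_val_if_fail(len <= sizeof(*blkcfg), -1);

        /* ...so at most sizeof(struct virtio_blk_config) bytes are copied */
        memcpy(out, blkcfg, len);
        return 0;
    }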

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 33cc0818b8..62672d1cb9 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -266,6 +266,9 @@ vu_blk_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
 {
     VuServer *server = container_of(vu_dev, VuServer, vu_dev);
     VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
+
+    g_return_val_if_fail(len <= sizeof(struct virtio_blk_config), -1);
+
     memcpy(config, &vexp->blkcfg, len);
     return 0;
 }
-- 
2.26.2


* [PATCH 05/12] contrib/vhost-user-blk: fix get_config() information leak
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (3 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 04/12] block/export: fix vhost-user-blk get_config() information leak Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 06/12] test: new qTest case to test the vhost-user-blk-server Stefan Hajnoczi
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

Refuse get_config() in excess of sizeof(struct virtio_blk_config).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 contrib/vhost-user-blk/vhost-user-blk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/contrib/vhost-user-blk/vhost-user-blk.c b/contrib/vhost-user-blk/vhost-user-blk.c
index 25eccd02b5..caad88637e 100644
--- a/contrib/vhost-user-blk/vhost-user-blk.c
+++ b/contrib/vhost-user-blk/vhost-user-blk.c
@@ -404,6 +404,8 @@ vub_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
     VugDev *gdev;
     VubDev *vdev_blk;
 
+    g_return_val_if_fail(len <= sizeof(struct virtio_blk_config), -1);
+
     gdev = container_of(vu_dev, VugDev, parent);
     vdev_blk = container_of(gdev, VubDev, parent);
     memcpy(config, &vdev_blk->blkcfg, len);
-- 
2.26.2


* [PATCH 06/12] test: new qTest case to test the vhost-user-blk-server
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (4 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 05/12] contrib/vhost-user-blk: fix " Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 07/12] tests/qtest: add multi-queue test case to vhost-user-blk-test Stefan Hajnoczi
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Marc-André Lureau, Paolo Bonzini, Max Reitz

From: Coiby Xu <coiby.xu@gmail.com>

This test case runs the same tests as tests/virtio-blk-test.c, except for
the tests that rely on block_resize. Since the vhost-user server can only
serve one client at a time, qemu-storage-daemon starts two instances of the
vhost-user-blk server for the hotplug test.

In order not to block scripts/tap-driver.pl, the test sends a "quit"
command to qemu-storage-daemon's QMP monitor when it is done. A function is
added to libqtest.c to establish a socket connection with a socket server.
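
A condensed usage sketch of the new helpers (error handling omitted;
qmp_sock_path is assumed to be the path of the storage daemon's QMP chardev
socket, and the QTestState is released with g_free() because it only wraps
the fd):

    #include "qemu/osdep.h"
    #include "libqtest.h"
    #include "qapi/qmp/qdict.h"

    static void example_shutdown_storage_daemon(char *qmp_sock_path)
    {
        int fd = qtest_socket_client(qmp_sock_path);
        QTestState *qmp = qtest_create_state_with_qmp_fd(fd);

        /* Negotiate QMP, then ask qemu-storage-daemon to exit */
        qobject_unref(qtest_qmp(qmp, "{'execute': 'qmp_capabilities'}"));
        qobject_unref(qtest_qmp(qmp, "{'execute': 'quit'}"));

        g_free(qmp);
    }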

Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20200918080912.321299-7-coiby.xu@gmail.com
[Update meson.build to run the test only when CONFIG_TOOLS has built
qemu-storage-daemon. This prevents CI failures with --disable-tools.
Also bump RAM to 256 MB because that is the minimum RAM granularity on
ppc64 spapr machines.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/qtest/libqos/libqtest.h       |  17 +
 tests/qtest/libqos/vhost-user-blk.h |  48 ++
 tests/qtest/libqos/vhost-user-blk.c | 129 +++++
 tests/qtest/libqtest.c              |  36 +-
 tests/qtest/vhost-user-blk-test.c   | 751 ++++++++++++++++++++++++++++
 tests/qtest/libqos/meson.build      |   1 +
 tests/qtest/meson.build             |   2 +
 7 files changed, 982 insertions(+), 2 deletions(-)
 create mode 100644 tests/qtest/libqos/vhost-user-blk.h
 create mode 100644 tests/qtest/libqos/vhost-user-blk.c
 create mode 100644 tests/qtest/vhost-user-blk-test.c

diff --git a/tests/qtest/libqos/libqtest.h b/tests/qtest/libqos/libqtest.h
index 5c959f1853..241b5f89fb 100644
--- a/tests/qtest/libqos/libqtest.h
+++ b/tests/qtest/libqos/libqtest.h
@@ -132,6 +132,23 @@ void qtest_qmp_send(QTestState *s, const char *fmt, ...)
 void qtest_qmp_send_raw(QTestState *s, const char *fmt, ...)
     GCC_FMT_ATTR(2, 3);
 
+/**
+ * qtest_socket_client:
+ * @server_socket_path: the socket server's path
+ *
+ * Connect to a UNIX domain socket server and return the connected fd.
+ */
+int qtest_socket_client(char *server_socket_path);
+
+/**
+ * qtest_create_state_with_qmp_fd:
+ * @fd: socket fd
+ *
+ * Wrap a QMP socket file descriptor in a QTestState so that the
+ * qtest_qmp*() functions can be used.
+ */
+QTestState *qtest_create_state_with_qmp_fd(int fd);
+
 /**
  * qtest_vqmp_fds:
  * @s: #QTestState instance to operate on.
diff --git a/tests/qtest/libqos/vhost-user-blk.h b/tests/qtest/libqos/vhost-user-blk.h
new file mode 100644
index 0000000000..2a03456a45
--- /dev/null
+++ b/tests/qtest/libqos/vhost-user-blk.h
@@ -0,0 +1,48 @@
+/*
+ * libqos driver framework
+ *
+ * Based on tests/qtest/libqos/virtio-blk.c
+ *
+ * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
+ *
+ * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License version 2 as published by the Free Software Foundation.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+
+#ifndef TESTS_LIBQOS_VHOST_USER_BLK_H
+#define TESTS_LIBQOS_VHOST_USER_BLK_H
+
+#include "qgraph.h"
+#include "virtio.h"
+#include "virtio-pci.h"
+
+typedef struct QVhostUserBlk QVhostUserBlk;
+typedef struct QVhostUserBlkPCI QVhostUserBlkPCI;
+typedef struct QVhostUserBlkDevice QVhostUserBlkDevice;
+
+struct QVhostUserBlk {
+    QVirtioDevice *vdev;
+};
+
+struct QVhostUserBlkPCI {
+    QVirtioPCIDevice pci_vdev;
+    QVhostUserBlk blk;
+};
+
+struct QVhostUserBlkDevice {
+    QOSGraphObject obj;
+    QVhostUserBlk blk;
+};
+
+#endif
diff --git a/tests/qtest/libqos/vhost-user-blk.c b/tests/qtest/libqos/vhost-user-blk.c
new file mode 100644
index 0000000000..58c7e1eb69
--- /dev/null
+++ b/tests/qtest/libqos/vhost-user-blk.c
@@ -0,0 +1,129 @@
+/*
+ * libqos driver framework
+ *
+ * Based on tests/qtest/libqos/virtio-blk.c
+ *
+ * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
+ *
+ * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License version 2.1 as published by the Free Software Foundation.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest.h"
+#include "qemu/module.h"
+#include "standard-headers/linux/virtio_blk.h"
+#include "vhost-user-blk.h"
+
+#define PCI_SLOT                0x04
+#define PCI_FN                  0x00
+
+/* virtio-blk-device */
+static void *qvhost_user_blk_get_driver(QVhostUserBlk *v_blk,
+                                    const char *interface)
+{
+    if (!g_strcmp0(interface, "vhost-user-blk")) {
+        return v_blk;
+    }
+    if (!g_strcmp0(interface, "virtio")) {
+        return v_blk->vdev;
+    }
+
+    fprintf(stderr, "%s not present in vhost-user-blk-device\n", interface);
+    g_assert_not_reached();
+}
+
+static void *qvhost_user_blk_device_get_driver(void *object,
+                                           const char *interface)
+{
+    QVhostUserBlkDevice *v_blk = object;
+    return qvhost_user_blk_get_driver(&v_blk->blk, interface);
+}
+
+static void *vhost_user_blk_device_create(void *virtio_dev,
+                                      QGuestAllocator *t_alloc,
+                                      void *addr)
+{
+    QVhostUserBlkDevice *vhost_user_blk = g_new0(QVhostUserBlkDevice, 1);
+    QVhostUserBlk *interface = &vhost_user_blk->blk;
+
+    interface->vdev = virtio_dev;
+
+    vhost_user_blk->obj.get_driver = qvhost_user_blk_device_get_driver;
+
+    return &vhost_user_blk->obj;
+}
+
+/* virtio-blk-pci */
+static void *qvhost_user_blk_pci_get_driver(void *object, const char *interface)
+{
+    QVhostUserBlkPCI *v_blk = object;
+    if (!g_strcmp0(interface, "pci-device")) {
+        return v_blk->pci_vdev.pdev;
+    }
+    return qvhost_user_blk_get_driver(&v_blk->blk, interface);
+}
+
+static void *vhost_user_blk_pci_create(void *pci_bus, QGuestAllocator *t_alloc,
+                                      void *addr)
+{
+    QVhostUserBlkPCI *vhost_user_blk = g_new0(QVhostUserBlkPCI, 1);
+    QVhostUserBlk *interface = &vhost_user_blk->blk;
+    QOSGraphObject *obj = &vhost_user_blk->pci_vdev.obj;
+
+    virtio_pci_init(&vhost_user_blk->pci_vdev, pci_bus, addr);
+    interface->vdev = &vhost_user_blk->pci_vdev.vdev;
+
+    g_assert_cmphex(interface->vdev->device_type, ==, VIRTIO_ID_BLOCK);
+
+    obj->get_driver = qvhost_user_blk_pci_get_driver;
+
+    return obj;
+}
+
+static void vhost_user_blk_register_nodes(void)
+{
+    /*
+     * FIXME: every test using these two nodes needs to set up a
+     * -drive,id=drive0, otherwise QEMU will not start.
+     * Therefore, we do not include the "produces" edge for virtio
+     * and pci-device yet.
+     */
+
+    char *arg = g_strdup_printf("id=drv0,chardev=char1,addr=%x.%x",
+                                PCI_SLOT, PCI_FN);
+
+    QPCIAddress addr = {
+        .devfn = QPCI_DEVFN(PCI_SLOT, PCI_FN),
+    };
+
+    QOSGraphEdgeOptions opts = { };
+
+    /* virtio-blk-device */
+    /** opts.extra_device_opts = "drive=drive0"; */
+    qos_node_create_driver("vhost-user-blk-device", vhost_user_blk_device_create);
+    qos_node_consumes("vhost-user-blk-device", "virtio-bus", &opts);
+    qos_node_produces("vhost-user-blk-device", "vhost-user-blk");
+
+    /* virtio-blk-pci */
+    opts.extra_device_opts = arg;
+    add_qpci_address(&opts, &addr);
+    qos_node_create_driver("vhost-user-blk-pci", vhost_user_blk_pci_create);
+    qos_node_consumes("vhost-user-blk-pci", "pci-bus", &opts);
+    qos_node_produces("vhost-user-blk-pci", "vhost-user-blk");
+
+    g_free(arg);
+}
+
+libqos_init(vhost_user_blk_register_nodes);
diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
index 99deff47ef..ab34075f2b 100644
--- a/tests/qtest/libqtest.c
+++ b/tests/qtest/libqtest.c
@@ -4,11 +4,13 @@
  * Copyright IBM, Corp. 2012
  * Copyright Red Hat, Inc. 2012
  * Copyright SUSE LINUX Products GmbH 2013
+ * Copyright (c) Coiby Xu
  *
  * Authors:
  *  Anthony Liguori   <aliguori@us.ibm.com>
  *  Paolo Bonzini     <pbonzini@redhat.com>
  *  Andreas Färber    <afaerber@suse.de>
+ *  Coiby Xu          <coiby.xu@gmail.com>
  *
  * This work is licensed under the terms of the GNU GPL, version 2 or later.
  * See the COPYING file in the top-level directory.
@@ -52,8 +54,7 @@ typedef struct QTestClientTransportOps {
     QTestRecvFn     recv_line; /* for receiving qtest command responses */
 } QTestTransportOps;
 
-struct QTestState
-{
+struct QTestState {
     int fd;
     int qmp_fd;
     pid_t qemu_pid;  /* our child QEMU process */
@@ -630,6 +631,37 @@ QDict *qtest_qmp_receive_dict(QTestState *s)
     return qmp_fd_receive(s->qmp_fd);
 }
 
+QTestState *qtest_create_state_with_qmp_fd(int fd)
+{
+    QTestState *qmp_test_state = g_new0(QTestState, 1);
+    qmp_test_state->qmp_fd = fd;
+    return qmp_test_state;
+}
+
+int qtest_socket_client(char *server_socket_path)
+{
+    struct sockaddr_un serv_addr;
+    int sock;
+    int ret;
+    int retries = 0;
+    sock = socket(PF_UNIX, SOCK_STREAM, 0);
+    g_assert_cmpint(sock, !=, -1);
+    serv_addr.sun_family = AF_UNIX;
+    snprintf(serv_addr.sun_path, sizeof(serv_addr.sun_path), "%s",
+             server_socket_path);
+
+    for (retries = 0; retries < 3; retries++) {
+        ret = connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
+        if (ret == 0) {
+            break;
+        }
+        g_usleep(G_USEC_PER_SEC);
+    }
+
+    g_assert_cmpint(ret, ==, 0);
+    return sock;
+}
+
 /**
  * Allow users to send a message without waiting for the reply,
  * in the case that they choose to discard all replies up until
diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
new file mode 100644
index 0000000000..e7e44f9bf0
--- /dev/null
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -0,0 +1,751 @@
+/*
+ * QTest testcase for Vhost-user Block Device
+ *
+ * Based on tests/qtest/virtio-blk-test.c
+ *
+ * Copyright (c) 2014 SUSE LINUX Products GmbH
+ * Copyright (c) 2014 Marc Marí
+ * Copyright (c) 2020 Coiby Xu
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest-single.h"
+#include "qemu/bswap.h"
+#include "qemu/module.h"
+#include "standard-headers/linux/virtio_blk.h"
+#include "standard-headers/linux/virtio_pci.h"
+#include "libqos/qgraph.h"
+#include "libqos/vhost-user-blk.h"
+#include "libqos/libqos-pc.h"
+
+#define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
+#define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
+#define PCI_SLOT_HP             0x06
+
+typedef struct QVirtioBlkReq {
+    uint32_t type;
+    uint32_t ioprio;
+    uint64_t sector;
+    char *data;
+    uint8_t status;
+} QVirtioBlkReq;
+
+#ifdef HOST_WORDS_BIGENDIAN
+static const bool host_is_big_endian = true;
+#else
+static const bool host_is_big_endian; /* false */
+#endif
+
+static inline void virtio_blk_fix_request(QVirtioDevice *d, QVirtioBlkReq *req)
+{
+    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
+        req->type = bswap32(req->type);
+        req->ioprio = bswap32(req->ioprio);
+        req->sector = bswap64(req->sector);
+    }
+}
+
+static inline void virtio_blk_fix_dwz_hdr(QVirtioDevice *d,
+    struct virtio_blk_discard_write_zeroes *dwz_hdr)
+{
+    if (qvirtio_is_big_endian(d) != host_is_big_endian) {
+        dwz_hdr->sector = bswap64(dwz_hdr->sector);
+        dwz_hdr->num_sectors = bswap32(dwz_hdr->num_sectors);
+        dwz_hdr->flags = bswap32(dwz_hdr->flags);
+    }
+}
+
+static uint64_t virtio_blk_request(QGuestAllocator *alloc, QVirtioDevice *d,
+                                   QVirtioBlkReq *req, uint64_t data_size)
+{
+    uint64_t addr;
+    uint8_t status = 0xFF;
+    QTestState *qts = global_qtest;
+
+    switch (req->type) {
+    case VIRTIO_BLK_T_IN:
+    case VIRTIO_BLK_T_OUT:
+        g_assert_cmpuint(data_size % 512, ==, 0);
+        break;
+    case VIRTIO_BLK_T_DISCARD:
+    case VIRTIO_BLK_T_WRITE_ZEROES:
+        g_assert_cmpuint(data_size %
+                         sizeof(struct virtio_blk_discard_write_zeroes), ==, 0);
+        break;
+    default:
+        g_assert_cmpuint(data_size, ==, 0);
+    }
+
+    addr = guest_alloc(alloc, sizeof(*req) + data_size);
+
+    virtio_blk_fix_request(d, req);
+
+    qtest_memwrite(qts, addr, req, 16);
+    qtest_memwrite(qts, addr + 16, req->data, data_size);
+    qtest_memwrite(qts, addr + 16 + data_size, &status, sizeof(status));
+
+    return addr;
+}
+
+/* Returns the request virtqueue so the caller can perform further tests */
+static QVirtQueue *test_basic(QVirtioDevice *dev, QGuestAllocator *alloc)
+{
+    QVirtioBlkReq req;
+    uint64_t req_addr;
+    uint64_t capacity;
+    uint64_t features;
+    uint32_t free_head;
+    uint8_t status;
+    char *data;
+    QTestState *qts = global_qtest;
+    QVirtQueue *vq;
+
+    features = qvirtio_get_features(dev);
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                    (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+                    (1u << VIRTIO_RING_F_EVENT_IDX) |
+                    (1u << VIRTIO_BLK_F_SCSI));
+    qvirtio_set_features(dev, features);
+
+    capacity = qvirtio_config_readq(dev, 0);
+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
+
+    vq = qvirtqueue_setup(dev, alloc, 0);
+
+    qvirtio_set_driver_ok(dev);
+
+    /* Write and read with 3 descriptor layout */
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    guest_free(alloc, req_addr);
+
+    /* Read request */
+    req.type = VIRTIO_BLK_T_IN;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+
+    req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    data = g_malloc0(512);
+    qtest_memread(qts, req_addr + 16, data, 512);
+    g_assert_cmpstr(data, ==, "TEST");
+    g_free(data);
+
+    guest_free(alloc, req_addr);
+
+    if (features & (1u << VIRTIO_BLK_F_WRITE_ZEROES)) {
+        struct virtio_blk_discard_write_zeroes dwz_hdr;
+        void *expected;
+
+        /*
+         * WRITE_ZEROES request on the same sector of previous test where
+         * we wrote "TEST".
+         */
+        req.type = VIRTIO_BLK_T_WRITE_ZEROES;
+        req.data = (char *) &dwz_hdr;
+        dwz_hdr.sector = 0;
+        dwz_hdr.num_sectors = 1;
+        dwz_hdr.flags = 0;
+
+        virtio_blk_fix_dwz_hdr(dev, &dwz_hdr);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr));
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr), false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr), 1, true,
+                       false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 16 + sizeof(dwz_hdr));
+        g_assert_cmpint(status, ==, 0);
+
+        guest_free(alloc, req_addr);
+
+        /* Read request to check if the sector contains all zeroes */
+        req.type = VIRTIO_BLK_T_IN;
+        req.ioprio = 1;
+        req.sector = 0;
+        req.data = g_malloc0(512);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+        g_free(req.data);
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
+        qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 528);
+        g_assert_cmpint(status, ==, 0);
+
+        data = g_malloc(512);
+        expected = g_malloc0(512);
+        qtest_memread(qts, req_addr + 16, data, 512);
+        g_assert_cmpmem(data, 512, expected, 512);
+        g_free(expected);
+        g_free(data);
+
+        guest_free(alloc, req_addr);
+    }
+
+    if (features & (1u << VIRTIO_BLK_F_DISCARD)) {
+        struct virtio_blk_discard_write_zeroes dwz_hdr;
+
+        req.type = VIRTIO_BLK_T_DISCARD;
+        req.data = (char *) &dwz_hdr;
+        dwz_hdr.sector = 0;
+        dwz_hdr.num_sectors = 1;
+        dwz_hdr.flags = 0;
+
+        virtio_blk_fix_dwz_hdr(dev, &dwz_hdr);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr));
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr), false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr),
+                       1, true, false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 16 + sizeof(dwz_hdr));
+        g_assert_cmpint(status, ==, 0);
+
+        guest_free(alloc, req_addr);
+    }
+
+    if (features & (1u << VIRTIO_F_ANY_LAYOUT)) {
+        /* Write and read with 2 descriptor layout */
+        /* Write request */
+        req.type = VIRTIO_BLK_T_OUT;
+        req.ioprio = 1;
+        req.sector = 1;
+        req.data = g_malloc0(512);
+        strcpy(req.data, "TEST");
+
+        req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+        g_free(req.data);
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 528, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 528);
+        g_assert_cmpint(status, ==, 0);
+
+        guest_free(alloc, req_addr);
+
+        /* Read request */
+        req.type = VIRTIO_BLK_T_IN;
+        req.ioprio = 1;
+        req.sector = 1;
+        req.data = g_malloc0(512);
+
+        req_addr = virtio_blk_request(alloc, dev, &req, 512);
+
+        g_free(req.data);
+
+        free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+        qvirtqueue_add(qts, vq, req_addr + 16, 513, true, false);
+
+        qvirtqueue_kick(qts, dev, vq, free_head);
+
+        qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                               QVIRTIO_BLK_TIMEOUT_US);
+        status = readb(req_addr + 528);
+        g_assert_cmpint(status, ==, 0);
+
+        data = g_malloc0(512);
+        qtest_memread(qts, req_addr + 16, data, 512);
+        g_assert_cmpstr(data, ==, "TEST");
+        g_free(data);
+
+        guest_free(alloc, req_addr);
+    }
+
+    return vq;
+}
+
+static void basic(void *obj, void *data, QGuestAllocator *t_alloc)
+{
+    QVhostUserBlk *blk_if = obj;
+    QVirtQueue *vq;
+
+    vq = test_basic(blk_if->vdev, t_alloc);
+    qvirtqueue_cleanup(blk_if->vdev->bus, vq, t_alloc);
+
+}
+
+static void indirect(void *obj, void *u_data, QGuestAllocator *t_alloc)
+{
+    QVirtQueue *vq;
+    QVhostUserBlk *blk_if = obj;
+    QVirtioDevice *dev = blk_if->vdev;
+    QVirtioBlkReq req;
+    QVRingIndirectDesc *indirect;
+    uint64_t req_addr;
+    uint64_t capacity;
+    uint64_t features;
+    uint32_t free_head;
+    uint8_t status;
+    char *data;
+    QTestState *qts = global_qtest;
+
+    features = qvirtio_get_features(dev);
+    g_assert_cmphex(features & (1u << VIRTIO_RING_F_INDIRECT_DESC), !=, 0);
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                            (1u << VIRTIO_RING_F_EVENT_IDX) |
+                            (1u << VIRTIO_BLK_F_SCSI));
+    qvirtio_set_features(dev, features);
+
+    capacity = qvirtio_config_readq(dev, 0);
+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
+
+    vq = qvirtqueue_setup(dev, t_alloc, 0);
+    qvirtio_set_driver_ok(dev);
+
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 528, false);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 528, 1, true);
+    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    g_free(indirect);
+    guest_free(t_alloc, req_addr);
+
+    /* Read request */
+    req.type = VIRTIO_BLK_T_IN;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    indirect = qvring_indirect_desc_setup(qts, dev, t_alloc, 2);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr, 16, false);
+    qvring_indirect_desc_add(dev, qts, indirect, req_addr + 16, 513, true);
+    free_head = qvirtqueue_add_indirect(qts, vq, indirect);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    data = g_malloc0(512);
+    qtest_memread(qts, req_addr + 16, data, 512);
+    g_assert_cmpstr(data, ==, "TEST");
+    g_free(data);
+
+    g_free(indirect);
+    guest_free(t_alloc, req_addr);
+    qvirtqueue_cleanup(dev->bus, vq, t_alloc);
+}
+
+static void idx(void *obj, void *u_data, QGuestAllocator *t_alloc)
+{
+    QVirtQueue *vq;
+    QVhostUserBlkPCI *blk = obj;
+    QVirtioPCIDevice *pdev = &blk->pci_vdev;
+    QVirtioDevice *dev = &pdev->vdev;
+    QVirtioBlkReq req;
+    uint64_t req_addr;
+    uint64_t capacity;
+    uint64_t features;
+    uint32_t free_head;
+    uint32_t write_head;
+    uint32_t desc_idx;
+    uint8_t status;
+    char *data;
+    QOSGraphObject *blk_object = obj;
+    QPCIDevice *pci_dev = blk_object->get_driver(blk_object, "pci-device");
+    QTestState *qts = global_qtest;
+
+    if (qpci_check_buggy_msi(pci_dev)) {
+        return;
+    }
+
+    qpci_msix_enable(pdev->pdev);
+    qvirtio_pci_set_msix_configuration_vector(pdev, t_alloc, 0);
+
+    features = qvirtio_get_features(dev);
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                            (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+                            (1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+                            (1u << VIRTIO_BLK_F_SCSI));
+    qvirtio_set_features(dev, features);
+
+    capacity = qvirtio_config_readq(dev, 0);
+    g_assert_cmpint(capacity, ==, TEST_IMAGE_SIZE / 512);
+
+    vq = qvirtqueue_setup(dev, t_alloc, 0);
+    qvirtqueue_pci_msix_setup(pdev, (QVirtQueuePCI *)vq, t_alloc, 1);
+
+    qvirtio_set_driver_ok(dev);
+
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 0;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+
+    /* Write request */
+    req.type = VIRTIO_BLK_T_OUT;
+    req.ioprio = 1;
+    req.sector = 1;
+    req.data = g_malloc0(512);
+    strcpy(req.data, "TEST");
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    /* Notify after processing the third request */
+    qvirtqueue_set_used_event(qts, vq, 2);
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+    qvirtqueue_kick(qts, dev, vq, free_head);
+    write_head = free_head;
+
+    /* No notification expected */
+    status = qvirtio_wait_status_byte_no_isr(qts, dev,
+                                             vq, req_addr + 528,
+                                             QVIRTIO_BLK_TIMEOUT_US);
+    g_assert_cmpint(status, ==, 0);
+
+    guest_free(t_alloc, req_addr);
+
+    /* Read request */
+    req.type = VIRTIO_BLK_T_IN;
+    req.ioprio = 1;
+    req.sector = 1;
+    req.data = g_malloc0(512);
+
+    req_addr = virtio_blk_request(t_alloc, dev, &req, 512);
+
+    g_free(req.data);
+
+    free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+    qvirtqueue_add(qts, vq, req_addr + 16, 512, true, true);
+    qvirtqueue_add(qts, vq, req_addr + 528, 1, true, false);
+
+    qvirtqueue_kick(qts, dev, vq, free_head);
+
+    /* We get just one notification for both requests */
+    qvirtio_wait_used_elem(qts, dev, vq, write_head, NULL,
+                           QVIRTIO_BLK_TIMEOUT_US);
+    g_assert(qvirtqueue_get_buf(qts, vq, &desc_idx, NULL));
+    g_assert_cmpint(desc_idx, ==, free_head);
+
+    status = readb(req_addr + 528);
+    g_assert_cmpint(status, ==, 0);
+
+    data = g_malloc0(512);
+    qtest_memread(qts, req_addr + 16, data, 512);
+    g_assert_cmpstr(data, ==, "TEST");
+    g_free(data);
+
+    guest_free(t_alloc, req_addr);
+
+    /* End test */
+    qpci_msix_disable(pdev->pdev);
+
+    qvirtqueue_cleanup(dev->bus, vq, t_alloc);
+}
+
+static void pci_hotplug(void *obj, void *data, QGuestAllocator *t_alloc)
+{
+    QVirtioPCIDevice *dev1 = obj;
+    QVirtioPCIDevice *dev;
+    QTestState *qts = dev1->pdev->bus->qts;
+
+    /* plug secondary disk */
+    qtest_qmp_device_add(qts, "vhost-user-blk-pci", "drv1",
+                         "{'addr': %s, 'chardev': 'char2'}",
+                         stringify(PCI_SLOT_HP) ".0");
+
+    dev = virtio_pci_new(dev1->pdev->bus,
+                         &(QPCIAddress) { .devfn = QPCI_DEVFN(PCI_SLOT_HP, 0)
+                                        });
+    g_assert_nonnull(dev);
+    g_assert_cmpint(dev->vdev.device_type, ==, VIRTIO_ID_BLOCK);
+    qvirtio_pci_device_disable(dev);
+    qos_object_destroy((QOSGraphObject *)dev);
+
+    /* unplug secondary disk */
+    qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
+}
+
+/*
+ * Check that setting the vring addr on a non-existent virtqueue does
+ * not crash.
+ */
+static void test_nonexistent_virtqueue(void *obj, void *data,
+                                       QGuestAllocator *t_alloc)
+{
+    QVhostUserBlkPCI *blk = obj;
+    QVirtioPCIDevice *pdev = &blk->pci_vdev;
+    QPCIBar bar0;
+    QPCIDevice *dev;
+
+    dev = qpci_device_find(pdev->pdev->bus, QPCI_DEVFN(4, 0));
+    g_assert(dev != NULL);
+    qpci_device_enable(dev);
+
+    bar0 = qpci_iomap(dev, 0, NULL);
+
+    qpci_io_writeb(dev, bar0, VIRTIO_PCI_QUEUE_SEL, 2);
+    qpci_io_writel(dev, bar0, VIRTIO_PCI_QUEUE_PFN, 1);
+
+    g_free(dev);
+}
+
+static const char *qtest_qemu_storage_daemon_binary(void)
+{
+    const char *qemu_storage_daemon_bin;
+
+    qemu_storage_daemon_bin = getenv("QTEST_QEMU_STORAGE_DAEMON_BINARY");
+    if (!qemu_storage_daemon_bin) {
+        fprintf(stderr, "Environment variable "
+                        "QTEST_QEMU_STORAGE_DAEMON_BINARY required\n");
+        exit(0);
+    }
+
+    return qemu_storage_daemon_bin;
+}
+
+static void drive_destroy(void *path)
+{
+    unlink(path);
+    g_free(path);
+    qos_invalidate_command_line();
+}
+
+static char *drive_create(void)
+{
+    int fd, ret;
+    /** vhost-user-blk won't recognize drive located in /tmp */
+    char *t_path = g_strdup("qtest.XXXXXX");
+
+    /** Create a temporary raw image */
+    fd = mkstemp(t_path);
+    g_assert_cmpint(fd, >=, 0);
+    ret = ftruncate(fd, TEST_IMAGE_SIZE);
+    g_assert_cmpint(ret, ==, 0);
+    close(fd);
+
+    g_test_queue_destroy(drive_destroy, t_path);
+    return t_path;
+}
+
+static char sock_path_template[] = "/tmp/qtest.vhost_user_blk.XXXXXX";
+static char qmp_sock_path_template[] = "/tmp/qtest.vhost_user_blk.qmp.XXXXXX";
+
+static void quit_storage_daemon(void *qmp_test_state)
+{
+    const char quit_str[] = "{ 'execute': 'quit' }";
+
+    /* Quit QEMU before the storage daemon to avoid spurious messages */
+    qobject_unref(qtest_qmp(global_qtest, quit_str));
+
+    /*
+     * Give the storage daemon enough time to wake up and terminate the
+     * vu_client_trip coroutine so that the Coroutine object can be
+     * cleaned up. Otherwise LeakSanitizer would complain about memory
+     * leaks.
+     */
+    g_usleep(1000);
+
+    qobject_unref(qtest_qmp((QTestState *)qmp_test_state, quit_str));
+    g_free(qmp_test_state);
+}
+
+static char *start_vhost_user_blk(GString *cmd_line, int vus_instances)
+{
+    const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
+    int fd, qmp_fd, i;
+    QTestState *qmp_test_state;
+    gchar *img_path;
+    char *sock_path = NULL;
+    char *qmp_sock_path = g_strdup(qmp_sock_path_template);
+    GString *storage_daemon_command = g_string_new(NULL);
+
+    qmp_fd = mkstemp(qmp_sock_path);
+    g_assert_cmpint(qmp_fd, >=, 0);
+    g_test_queue_destroy(drive_destroy, qmp_sock_path);
+
+    g_string_append_printf(storage_daemon_command,
+            "exec %s "
+            "--chardev socket,id=qmp,path=%s,server,nowait --monitor chardev=qmp ",
+            vhost_user_blk_bin, qmp_sock_path);
+
+    g_string_append_printf(cmd_line,
+            " -object memory-backend-memfd,id=mem,size=256M,share=on -M memory-backend=mem ");
+
+    for (i = 0; i < vus_instances; i++) {
+        sock_path = g_strdup(sock_path_template);
+        fd = mkstemp(sock_path);
+        g_assert_cmpint(fd, >=, 0);
+        g_test_queue_destroy(drive_destroy, sock_path);
+        /* create image file */
+        img_path = drive_create();
+        g_string_append_printf(storage_daemon_command,
+            "--blockdev driver=file,node-name=disk%d,filename=%s "
+            "--export type=vhost-user-blk,id=disk%d,addr.type=unix,addr.path=%s,"
+            "node-name=disk%i,writable=on ",
+            i, img_path, i, sock_path, i);
+
+        g_string_append_printf(cmd_line, "-chardev socket,id=char%d,path=%s ",
+                               i + 1, sock_path);
+    }
+
+    g_test_message("starting vhost-user backend: %s",
+                   storage_daemon_command->str);
+    pid_t pid = fork();
+    if (pid == 0) {
+        execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
+        exit(1);
+    }
+    g_string_free(storage_daemon_command, true);
+
+    qmp_test_state = qtest_create_state_with_qmp_fd(
+                             qtest_socket_client(qmp_sock_path));
+    /*
+     * Ask qemu-storage-daemon to quit so it
+     * will not block scripts/tap-driver.pl.
+     */
+    g_test_queue_destroy(quit_storage_daemon, qmp_test_state);
+
+    qobject_unref(qtest_qmp(qmp_test_state, "{'execute': 'qmp_capabilities'}"));
+    return sock_path;
+}
+
+static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
+{
+    start_vhost_user_blk(cmd_line, 1);
+    return arg;
+}
+
+/*
+ * Setup for hotplug.
+ *
+ * Since the vhost-user server only serves one vhost-user client at a time,
+ * a second export (and its corresponding chardev) is started for the
+ * hotplugged device.
+ */
+static void *vhost_user_blk_hotplug_test_setup(GString *cmd_line, void *arg)
+{
+    /* "-chardev socket,id=char2" is used for pci_hotplug*/
+    start_vhost_user_blk(cmd_line, 2);
+    return arg;
+}
+
+static void register_vhost_user_blk_test(void)
+{
+    QOSGraphTestOptions opts = {
+        .before = vhost_user_blk_test_setup,
+    };
+
+    /*
+     * Tests for vhost-user-blk and vhost-user-blk-pci.
+     *
+     * The tests are borrowed from tests/virtio-blk-test.c, but the tests
+     * that rely on block_resize are dropped because the vhost-user-blk
+     * device has no -drive to resize:
+     *  - config
+     *  - resize
+     */
+    qos_add_test("basic", "vhost-user-blk", basic, &opts);
+    qos_add_test("indirect", "vhost-user-blk", indirect, &opts);
+    qos_add_test("idx", "vhost-user-blk-pci", idx, &opts);
+    qos_add_test("nxvirtq", "vhost-user-blk-pci",
+                 test_nonexistent_virtqueue, &opts);
+
+    opts.before = vhost_user_blk_hotplug_test_setup;
+    qos_add_test("hotplug", "vhost-user-blk-pci", pci_hotplug, &opts);
+}
+
+libqos_init(register_vhost_user_blk_test);
diff --git a/tests/qtest/libqos/meson.build b/tests/qtest/libqos/meson.build
index 1cddf5bdaa..1f5c8f1053 100644
--- a/tests/qtest/libqos/meson.build
+++ b/tests/qtest/libqos/meson.build
@@ -32,6 +32,7 @@ libqos_srcs = files('../libqtest.c',
         'virtio-9p.c',
         'virtio-balloon.c',
         'virtio-blk.c',
+        'vhost-user-blk.c',
         'virtio-mmio.c',
         'virtio-net.c',
         'virtio-pci.c',
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index ba8ebeead6..e8d0424c0f 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -195,6 +195,7 @@ qos_test_ss.add(
 )
 qos_test_ss.add(when: 'CONFIG_VIRTFS', if_true: files('virtio-9p-test.c'))
 qos_test_ss.add(when: 'CONFIG_VHOST_USER', if_true: files('vhost-user-test.c'))
+qos_test_ss.add(when: ['CONFIG_LINUX', 'CONFIG_TOOLS'], if_true: files('vhost-user-blk-test.c'))
 
 tpmemu_files = ['tpm-emu.c', 'tpm-util.c', 'tpm-tests.c']
 
@@ -233,6 +234,7 @@ foreach dir : target_dirs
   endif
   qtest_env.set('G_TEST_DBUS_DAEMON', meson.source_root() / 'tests/dbus-vmstate-daemon.sh')
   qtest_env.set('QTEST_QEMU_BINARY', './qemu-system-' + target_base)
+  qtest_env.set('QTEST_QEMU_STORAGE_DAEMON_BINARY', './storage-daemon/qemu-storage-daemon')
   
   foreach test : target_qtests
     # Executables are shared across targets, declare them only the first time we
-- 
2.26.2


* [PATCH 07/12] tests/qtest: add multi-queue test case to vhost-user-blk-test
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (5 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 06/12] test: new qTest case to test the vhost-user-blk-server Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 08/12] libqtest: add qtest_socket_server() Stefan Hajnoczi
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20201001144604.559733-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/qtest/vhost-user-blk-test.c | 81 +++++++++++++++++++++++++++++--
 1 file changed, 76 insertions(+), 5 deletions(-)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index e7e44f9bf0..31f2335f97 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -559,6 +559,67 @@ static void pci_hotplug(void *obj, void *data, QGuestAllocator *t_alloc)
     qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
 }
 
+static void multiqueue(void *obj, void *data, QGuestAllocator *t_alloc)
+{
+    QVirtioPCIDevice *pdev1 = obj;
+    QVirtioDevice *dev1 = &pdev1->vdev;
+    QVirtioPCIDevice *pdev8;
+    QVirtioDevice *dev8;
+    QTestState *qts = pdev1->pdev->bus->qts;
+    uint64_t features;
+    uint16_t num_queues;
+
+    /*
+     * The primary device has 1 queue and VIRTIO_BLK_F_MQ is not enabled. The
+     * VIRTIO specification allows VIRTIO_BLK_F_MQ to be enabled when there is
+     * only 1 virtqueue, but --device vhost-user-blk-pci doesn't do this (which
+     * is also spec-compliant).
+     */
+    features = qvirtio_get_features(dev1);
+    g_assert_cmpint(features & (1u << VIRTIO_BLK_F_MQ), ==, 0);
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                            (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+                            (1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+                            (1u << VIRTIO_BLK_F_SCSI));
+    qvirtio_set_features(dev1, features);
+
+    /* Hotplug a secondary device with 8 queues */
+    qtest_qmp_device_add(qts, "vhost-user-blk-pci", "drv1",
+                         "{'addr': %s, 'chardev': 'char2', 'num-queues': 8}",
+                         stringify(PCI_SLOT_HP) ".0");
+
+    pdev8 = virtio_pci_new(pdev1->pdev->bus,
+                           &(QPCIAddress) {
+                               .devfn = QPCI_DEVFN(PCI_SLOT_HP, 0)
+                           });
+    g_assert_nonnull(pdev8);
+    g_assert_cmpint(pdev8->vdev.device_type, ==, VIRTIO_ID_BLOCK);
+
+    qos_object_start_hw(&pdev8->obj);
+
+    dev8 = &pdev8->vdev;
+    features = qvirtio_get_features(dev8);
+    g_assert_cmpint(features & (1u << VIRTIO_BLK_F_MQ),
+                    ==,
+                    (1u << VIRTIO_BLK_F_MQ));
+    features = features & ~(QVIRTIO_F_BAD_FEATURE |
+                            (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+                            (1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+                            (1u << VIRTIO_BLK_F_SCSI) |
+                            (1u << VIRTIO_BLK_F_MQ));
+    qvirtio_set_features(dev8, features);
+
+    num_queues = qvirtio_config_readw(dev8,
+            offsetof(struct virtio_blk_config, num_queues));
+    g_assert_cmpint(num_queues, ==, 8);
+
+    qvirtio_pci_device_disable(pdev8);
+    qos_object_destroy(&pdev8->obj);
+
+    /* unplug secondary disk */
+    qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
+}
+
 /*
  * Check that setting the vring addr on a non-existent virtqueue does
  * not crash.
@@ -643,7 +704,8 @@ static void quit_storage_daemon(void *qmp_test_state)
     g_free(qmp_test_state);
 }
 
-static char *start_vhost_user_blk(GString *cmd_line, int vus_instances)
+static char *start_vhost_user_blk(GString *cmd_line, int vus_instances,
+                                  int num_queues)
 {
     const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
     int fd, qmp_fd, i;
@@ -675,8 +737,8 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances)
         g_string_append_printf(storage_daemon_command,
             "--blockdev driver=file,node-name=disk%d,filename=%s "
             "--export type=vhost-user-blk,id=disk%d,addr.type=unix,addr.path=%s,"
-            "node-name=disk%i,writable=on ",
-            i, img_path, i, sock_path, i);
+            "node-name=disk%i,writable=on,num-queues=%d ",
+            i, img_path, i, sock_path, i, num_queues);
 
         g_string_append_printf(cmd_line, "-chardev socket,id=char%d,path=%s ",
                                i + 1, sock_path);
@@ -705,7 +767,7 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances)
 
 static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
 {
-    start_vhost_user_blk(cmd_line, 1);
+    start_vhost_user_blk(cmd_line, 1, 1);
     return arg;
 }
 
@@ -719,7 +781,13 @@ static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
 static void *vhost_user_blk_hotplug_test_setup(GString *cmd_line, void *arg)
 {
     /* "-chardev socket,id=char2" is used for pci_hotplug*/
-    start_vhost_user_blk(cmd_line, 2);
+    start_vhost_user_blk(cmd_line, 2, 1);
+    return arg;
+}
+
+static void *vhost_user_blk_multiqueue_test_setup(GString *cmd_line, void *arg)
+{
+    start_vhost_user_blk(cmd_line, 2, 8);
     return arg;
 }
 
@@ -746,6 +814,9 @@ static void register_vhost_user_blk_test(void)
 
     opts.before = vhost_user_blk_hotplug_test_setup;
     qos_add_test("hotplug", "vhost-user-blk-pci", pci_hotplug, &opts);
+
+    opts.before = vhost_user_blk_multiqueue_test_setup;
+    qos_add_test("multiqueue", "vhost-user-blk-pci", multiqueue, &opts);
 }
 
 libqos_init(register_vhost_user_blk_test);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 08/12] libqtest: add qtest_socket_server()
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (6 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 07/12] tests/qtest: add multi-queue test case to vhost-user-blk-test Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 09/12] vhost-user-blk-test: rename destroy_drive() to destroy_file() Stefan Hajnoczi
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

There is a qtest_socket_client() API. Add an equivalent
qtest_socket_server() API that returns a new UNIX domain socket in the
listen state. The code for this was already there but only used
internally in init_socket().
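
As a usage sketch only (the path below is hypothetical and cleanup of the
socket file is omitted), the new helper pairs naturally with the existing
qtest_socket_client():

    char path[] = "/tmp/qtest-example.sock";    /* hypothetical path */
    int listen_fd = qtest_socket_server(path);  /* bound + listening, aborts on error */
    int client_fd = qtest_socket_client(path);  /* connects to the listening socket */

    /* ... hand listen_fd to the server under test, use client_fd as the peer ... */

    close(client_fd);
    close(listen_fd);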

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/qtest/libqos/libqtest.h |  8 +++++++
 tests/qtest/libqtest.c        | 40 ++++++++++++++++++++---------------
 2 files changed, 31 insertions(+), 17 deletions(-)

diff --git a/tests/qtest/libqos/libqtest.h b/tests/qtest/libqos/libqtest.h
index 241b5f89fb..699be8c2a2 100644
--- a/tests/qtest/libqos/libqtest.h
+++ b/tests/qtest/libqos/libqtest.h
@@ -132,6 +132,14 @@ void qtest_qmp_send(QTestState *s, const char *fmt, ...)
 void qtest_qmp_send_raw(QTestState *s, const char *fmt, ...)
     GCC_FMT_ATTR(2, 3);
 
+/**
+ * qtest_socket_server:
+ * @socket_path: the UNIX domain socket path
+ *
+ * Create and return a listen socket file descriptor, or abort on failure.
+ */
+int qtest_socket_server(const char *socket_path);
+
 /**
  * qtest_socket_client:
  * @server_socket_path: the socket server's path
diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
index ab34075f2b..d652ffc90d 100644
--- a/tests/qtest/libqtest.c
+++ b/tests/qtest/libqtest.c
@@ -82,24 +82,8 @@ static void qtest_client_set_rx_handler(QTestState *s, QTestRecvFn recv);
 
 static int init_socket(const char *socket_path)
 {
-    struct sockaddr_un addr;
-    int sock;
-    int ret;
-
-    sock = socket(PF_UNIX, SOCK_STREAM, 0);
-    g_assert_cmpint(sock, !=, -1);
-
-    addr.sun_family = AF_UNIX;
-    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", socket_path);
+    int sock = qtest_socket_server(socket_path);
     qemu_set_cloexec(sock);
-
-    do {
-        ret = bind(sock, (struct sockaddr *)&addr, sizeof(addr));
-    } while (ret == -1 && errno == EINTR);
-    g_assert_cmpint(ret, !=, -1);
-    ret = listen(sock, 1);
-    g_assert_cmpint(ret, !=, -1);
-
     return sock;
 }
 
@@ -638,6 +622,28 @@ QTestState *qtest_create_state_with_qmp_fd(int fd)
     return qmp_test_state;
 }
 
+int qtest_socket_server(const char *socket_path)
+{
+    struct sockaddr_un addr;
+    int sock;
+    int ret;
+
+    sock = socket(PF_UNIX, SOCK_STREAM, 0);
+    g_assert_cmpint(sock, !=, -1);
+
+    addr.sun_family = AF_UNIX;
+    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", socket_path);
+
+    do {
+        ret = bind(sock, (struct sockaddr *)&addr, sizeof(addr));
+    } while (ret == -1 && errno == EINTR);
+    g_assert_cmpint(ret, !=, -1);
+    ret = listen(sock, 1);
+    g_assert_cmpint(ret, !=, -1);
+
+    return sock;
+}
+
 int qtest_socket_client(char *server_socket_path)
 {
     struct sockaddr_un serv_addr;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 09/12] vhost-user-blk-test: rename destroy_drive() to destroy_file()
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (7 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 08/12] libqtest: add qtest_socket_server() Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 10/12] vhost-user-blk-test: close fork child file descriptors Stefan Hajnoczi
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

The function is used not just for image files but also for UNIX domain
sockets (QMP monitor and vhost-user-blk). Reflect that in the name.
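
For illustration only (the paths below are made up), the renamed callback is
registered the same way for any temporary path the test creates:

    char *img_path  = g_strdup("/tmp/qtest-disk.img");   /* image file */
    char *sock_path = g_strdup("/tmp/qtest-vhost.sock"); /* UNIX domain socket */

    /* destroy_file() unlink()s and g_free()s the path at test teardown */
    g_test_queue_destroy(destroy_file, img_path);
    g_test_queue_destroy(destroy_file, sock_path);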

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/qtest/vhost-user-blk-test.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index 31f2335f97..f05f14c192 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -658,7 +658,8 @@ static const char *qtest_qemu_storage_daemon_binary(void)
     return qemu_storage_daemon_bin;
 }
 
-static void drive_destroy(void *path)
+/* g_test_queue_destroy() cleanup function for files */
+static void destroy_file(void *path)
 {
     unlink(path);
     g_free(path);
@@ -678,7 +679,7 @@ static char *drive_create(void)
     g_assert_cmpint(ret, ==, 0);
     close(fd);
 
-    g_test_queue_destroy(drive_destroy, t_path);
+    g_test_queue_destroy(destroy_file, t_path);
     return t_path;
 }
 
@@ -717,7 +718,7 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances,
 
     qmp_fd = mkstemp(qmp_sock_path);
     g_assert_cmpint(qmp_fd, >=, 0);
-    g_test_queue_destroy(drive_destroy, qmp_sock_path);
+    g_test_queue_destroy(destroy_file, qmp_sock_path);
 
     g_string_append_printf(storage_daemon_command,
             "exec %s "
@@ -731,7 +732,7 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances,
         sock_path = g_strdup(sock_path_tempate);
         fd = mkstemp(sock_path);
         g_assert_cmpint(fd, >=, 0);
-        g_test_queue_destroy(drive_destroy, sock_path);
+        g_test_queue_destroy(destroy_file, sock_path);
         /* create image file */
         img_path = drive_create();
         g_string_append_printf(storage_daemon_command,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 10/12] vhost-user-blk-test: close fork child file descriptors
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (8 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 09/12] vhost-user-blk-test: rename destroy_drive() to destroy_file() Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 11/12] vhost-user-blk-test: drop unused return value Stefan Hajnoczi
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

Do not leave stdin, stdout and stderr open in the forked child. In this test
stdout is the pipe to tap-driver.pl; if the child keeps that pipe open,
tap-driver.pl will not detect that qos-test has terminated and will hang.
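
The pattern in this patch relies on open(2) returning the lowest available
file descriptor, so the three open("/dev/null", ...) calls land on fds 0, 1
and 2. A slightly more explicit sketch of the same idea, shown here only for
clarity (it is not what the patch uses, and it needs <fcntl.h> and
<unistd.h>):

    /* Redirect the child's stdin/stdout/stderr to /dev/null */
    int devnull = open("/dev/null", O_RDWR);
    if (devnull >= 0) {
        dup2(devnull, STDIN_FILENO);
        dup2(devnull, STDOUT_FILENO);
        dup2(devnull, STDERR_FILENO);
        if (devnull > STDERR_FILENO) {
            close(devnull);
        }
    }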

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/qtest/vhost-user-blk-test.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index f05f14c192..15daf8ccbc 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -749,6 +749,17 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances,
                    storage_daemon_command->str);
     pid_t pid = fork();
     if (pid == 0) {
+        /*
+         * Close the standard file descriptors so that tap-driver.pl detects
+         * when our parent terminates.
+         */
+        close(0);
+        close(1);
+        close(2);
+        open("/dev/null", O_RDONLY);
+        open("/dev/null", O_WRONLY);
+        open("/dev/null", O_WRONLY);
+
         execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
         exit(1);
     }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 11/12] vhost-user-blk-test: drop unused return value
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (9 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 10/12] vhost-user-blk-test: close fork child file descriptors Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-27 17:35 ` [PATCH 12/12] vhost-user-blk-test: fix races by using fd passing Stefan Hajnoczi
  2020-10-30 12:42 ` [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Michael S. Tsirkin
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

The sock_path return value was unused and bogus (it doesn't make sense
when there are multiple drives because only the last path is arbitrarily
returned).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/qtest/vhost-user-blk-test.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index 15daf8ccbc..0d056cc189 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -705,8 +705,8 @@ static void quit_storage_daemon(void *qmp_test_state)
     g_free(qmp_test_state);
 }
 
-static char *start_vhost_user_blk(GString *cmd_line, int vus_instances,
-                                  int num_queues)
+static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
+                                 int num_queues)
 {
     const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
     int fd, qmp_fd, i;
@@ -774,7 +774,6 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances,
     g_test_queue_destroy(quit_storage_daemon, qmp_test_state);
 
     qobject_unref(qtest_qmp(qmp_test_state, "{'execute': 'qmp_capabilities'}"));
-    return sock_path;
 }
 
 static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 12/12] vhost-user-blk-test: fix races by using fd passing
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (10 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 11/12] vhost-user-blk-test: drop unused return value Stefan Hajnoczi
@ 2020-10-27 17:35 ` Stefan Hajnoczi
  2020-10-30 12:42 ` [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Michael S. Tsirkin
  12 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-10-27 17:35 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block,
	Michael S. Tsirkin, Coiby Xu, Raphael Norwitz, Stefan Hajnoczi,
	Paolo Bonzini, Max Reitz

Pass the QMP and vhost-user-blk server sockets as file descriptors. That
way the sockets are already open and in a listen state when the QEMU
process is launched.

This solves the race with qemu-storage-daemon startup where the UNIX
domain sockets may not be ready yet when QEMU attempts to connect. It
also saves us sleeping for 1 second if the qemu-storage-daemon QMP
socket is not ready yet.
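
A simplified sketch of the mechanism (variable names follow the code added
below; the full command-line handling lives in start_vhost_user_blk()):

    /* The test itself creates the socket and puts it into the listen state */
    int qmp_fd = qtest_socket_server(qmp_sock_path);

    /*
     * The child inherits qmp_fd across fork()/exec(). Because the socket is
     * already listening, connections from the test or from QEMU are queued
     * in the backlog even before qemu-storage-daemon calls accept().
     */
    g_string_append_printf(storage_daemon_command,
            "--chardev socket,id=qmp,fd=%d,server,nowait --monitor chardev=qmp ",
            qmp_fd);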

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/qtest/vhost-user-blk-test.c | 42 +++++++++++++++++++------------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index 0d056cc189..9589f90b14 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -683,8 +683,22 @@ static char *drive_create(void)
     return t_path;
 }
 
-static char sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.XXXXXX";
-static char qmp_sock_path_tempate[] = "/tmp/qtest.vhost_user_blk.qmp.XXXXXX";
+static char *create_listen_socket(int *fd)
+{
+    int tmp_fd;
+    char *path;
+
+    /* No race because our pid makes the path unique */
+    path = g_strdup_printf("/tmp/qtest-%d-sock.XXXXXX", getpid());
+    tmp_fd = mkstemp(path);
+    g_assert_cmpint(tmp_fd, >=, 0);
+    close(tmp_fd);
+    unlink(path);
+
+    *fd = qtest_socket_server(path);
+    g_test_queue_destroy(destroy_file, path);
+    return path;
+}
 
 static void quit_storage_daemon(void *qmp_test_state)
 {
@@ -709,37 +723,33 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
                                  int num_queues)
 {
     const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
-    int fd, qmp_fd, i;
+    int qmp_fd, i;
     QTestState *qmp_test_state;
     gchar *img_path;
-    char *sock_path = NULL;
-    char *qmp_sock_path = g_strdup(qmp_sock_path_tempate);
+    char *qmp_sock_path;
     GString *storage_daemon_command = g_string_new(NULL);
 
-    qmp_fd = mkstemp(qmp_sock_path);
-    g_assert_cmpint(qmp_fd, >=, 0);
-    g_test_queue_destroy(destroy_file, qmp_sock_path);
+    qmp_sock_path = create_listen_socket(&qmp_fd);
 
     g_string_append_printf(storage_daemon_command,
             "exec %s "
-            "--chardev socket,id=qmp,path=%s,server,nowait --monitor chardev=qmp ",
-            vhost_user_blk_bin, qmp_sock_path);
+            "--chardev socket,id=qmp,fd=%d,server,nowait --monitor chardev=qmp ",
+            vhost_user_blk_bin, qmp_fd);
 
     g_string_append_printf(cmd_line,
             " -object memory-backend-memfd,id=mem,size=256M,share=on -M memory-backend=mem ");
 
     for (i = 0; i < vus_instances; i++) {
-        sock_path = g_strdup(sock_path_tempate);
-        fd = mkstemp(sock_path);
-        g_assert_cmpint(fd, >=, 0);
-        g_test_queue_destroy(destroy_file, sock_path);
+        int fd;
+        char *sock_path = create_listen_socket(&fd);
+
         /* create image file */
         img_path = drive_create();
         g_string_append_printf(storage_daemon_command,
             "--blockdev driver=file,node-name=disk%d,filename=%s "
-            "--export type=vhost-user-blk,id=disk%d,addr.type=unix,addr.path=%s,"
+            "--export type=vhost-user-blk,id=disk%d,addr.type=fd,addr.str=%d,"
             "node-name=disk%i,writable=on,num-queues=%d ",
-            i, img_path, i, sock_path, i, num_queues);
+            i, img_path, i, fd, i, num_queues);
 
         g_string_append_printf(cmd_line, "-chardev socket,id=char%d,path=%s ",
                                i + 1, sock_path);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests
  2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
                   ` (11 preceding siblings ...)
  2020-10-27 17:35 ` [PATCH 12/12] vhost-user-blk-test: fix races by using fd passing Stefan Hajnoczi
@ 2020-10-30 12:42 ` Michael S. Tsirkin
  2020-11-02 10:43   ` Michael S. Tsirkin
  12 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2020-10-30 12:42 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block, qemu-devel,
	Coiby Xu, Raphael Norwitz, Paolo Bonzini, Max Reitz

On Tue, Oct 27, 2020 at 05:35:16PM +0000, Stefan Hajnoczi wrote:
> This patch series solves some issues with the new vhost-user-blk-server and
> adds the qtest test case. The test case was not included in the pull request
> that introduced the vhost-user-blk server because of reliability issues that
> are fixed in this patch series.


Fails make check for me:

Running test qtest-i386/qos-test
Broken pipe
../qemu/tests/qtest/libqtest.c:161: kill_qemu() detected QEMU death from signal 11 (Segmentation fault) (core dumped)
ERROR qtest-i386/qos-test - too few tests run (expected 92, got 65)
make: *** [Makefile.mtest:1857: run-test-230] Error 1


> Coiby Xu (1):
>   test: new qTest case to test the vhost-user-blk-server
> 
> Stefan Hajnoczi (11):
>   libvhost-user: follow QEMU comment style
>   configure: introduce --enable-vhost-user-blk-server
>   block/export: make vhost-user-blk config space little-endian
>   block/export: fix vhost-user-blk get_config() information leak
>   contrib/vhost-user-blk: fix get_config() information leak
>   tests/qtest: add multi-queue test case to vhost-user-blk-test
>   libqtest: add qtest_socket_server()
>   vhost-user-blk-test: rename destroy_drive() to destroy_file()
>   vhost-user-blk-test: close fork child file descriptors
>   vhost-user-blk-test: drop unused return value
>   vhost-user-blk-test: fix races by using fd passing
> 
>  configure                               |  15 +
>  contrib/libvhost-user/libvhost-user.h   |  15 +-
>  tests/qtest/libqos/libqtest.h           |  25 +
>  tests/qtest/libqos/vhost-user-blk.h     |  48 ++
>  block/export/export.c                   |   4 +-
>  block/export/vhost-user-blk-server.c    |  28 +-
>  contrib/vhost-user-blk/vhost-user-blk.c |   2 +
>  tests/qtest/libqos/vhost-user-blk.c     | 129 ++++
>  tests/qtest/libqtest.c                  |  76 ++-
>  tests/qtest/vhost-user-blk-test.c       | 843 ++++++++++++++++++++++++
>  block/export/meson.build                |   2 +-
>  tests/qtest/libqos/meson.build          |   1 +
>  tests/qtest/meson.build                 |   2 +
>  util/meson.build                        |   2 +-
>  14 files changed, 1151 insertions(+), 41 deletions(-)
>  create mode 100644 tests/qtest/libqos/vhost-user-blk.h
>  create mode 100644 tests/qtest/libqos/vhost-user-blk.c
>  create mode 100644 tests/qtest/vhost-user-blk-test.c
> 
> -- 
> 2.26.2
> 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests
  2020-10-30 12:42 ` [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Michael S. Tsirkin
@ 2020-11-02 10:43   ` Michael S. Tsirkin
  2020-11-02 17:00     ` Stefan Hajnoczi
  0 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2020-11-02 10:43 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block, qemu-devel,
	Coiby Xu, Raphael Norwitz, Paolo Bonzini, Max Reitz

On Fri, Oct 30, 2020 at 08:42:22AM -0400, Michael S. Tsirkin wrote:
> On Tue, Oct 27, 2020 at 05:35:16PM +0000, Stefan Hajnoczi wrote:
> > This patch series solves some issues with the new vhost-user-blk-server and
> > adds the qtest test case. The test case was not included in the pull request
> > that introduced the vhost-user-blk server because of reliability issues that
> > are fixed in this patch series.
> 
> 
> Fails make check for me:
> 
> Running test qtest-i386/qos-test
> Broken pipe
> ../qemu/tests/qtest/libqtest.c:161: kill_qemu() detected QEMU death from signal 11 (Segmentation fault) (core dumped)
> ERROR qtest-i386/qos-test - too few tests run (expected 92, got 65)
> make: *** [Makefile.mtest:1857: run-test-230] Error 1

And here's the coredump:



[mst@tuck qemu-oot]$ coredumpctl  debug 853792
           PID: 853792 (qemu-system-i38)
           UID: 1000 (mst)
           GID: 1000 (mst)
        Signal: 11 (SEGV)
     Timestamp: Fri 2020-10-30 08:41:31 EDT (2 days ago)
  Command Line: ./qemu-system-i386 -qtest unix:/tmp/qtest-853536.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-853536.qmp,id=char0 -mon chardev=char0,mode=control -display none -M pc -device vhost-user-blk-pci,id=drv0,chardev=char1,addr=4.0 -object memory-backend-memfd,id=mem,size=256M,share=on -M memory-backend=mem -chardev socket,id=char1,path=/tmp/qtest-853536-sock.krlJyA -accel qtest
    Executable: /scm/qemu-oot/qemu-system-i386
 Control Group: /user.slice/user-1000.slice/session-4.scope
          Unit: session-4.scope
         Slice: user-1000.slice
       Session: 4
     Owner UID: 1000 (mst)
       Boot ID: 978b4cacb2df46319a9c6310b653f95d
    Machine ID: 6234c9d2c9c34980ad6b1b2de307f043
      Hostname: tuck.redhat.com
       Storage: /var/lib/systemd/coredump/core.qemu-system-i38.1000.978b4cacb2df46319a9c6310b653f95d.853792.1604061691000000.lz4
       Message: Process 853792 (qemu-system-i38) of user 1000 dumped core.
                
                Stack trace of thread 853792:
                #0  0x000055b2ace9ca0b vhost_dev_has_iommu (qemu-system-i386 + 0x657a0b)
                #1  0x000055b2ace9fbbf vhost_dev_prepare_inflight (qemu-system-i386 + 0x65abbf)
                #2  0x000055b2ace51d01 vhost_user_blk_start (qemu-system-i386 + 0x60cd01)
                #3  0x000055b2ace51f1a vhost_user_blk_set_status (qemu-system-i386 + 0x60cf1a)
                #4  0x000055b2acde489b virtio_set_status (qemu-system-i386 + 0x59f89b)
                #5  0x000055b2acb38638 virtio_pci_common_write (qemu-system-i386 + 0x2f3638)
                #6  0x000055b2ace4818c memory_region_write_accessor (qemu-system-i386 + 0x60318c)
                #7  0x000055b2ace46cae access_with_adjusted_size (qemu-system-i386 + 0x601cae)
                #8  0x000055b2ace4a4c3 memory_region_dispatch_write (qemu-system-i386 + 0x6054c3)
                #9  0x000055b2ace72010 flatview_write_continue (qemu-system-i386 + 0x62d010)
                #10 0x000055b2ace74fc5 flatview_write (qemu-system-i386 + 0x62ffc5)
                #11 0x000055b2acdc0981 qtest_process_command (qemu-system-i386 + 0x57b981)
                #12 0x000055b2acdc111d qtest_process_inbuf (qemu-system-i386 + 0x57c11d)
                #13 0x000055b2acf7092e tcp_chr_read (qemu-system-i386 + 0x72b92e)
                #14 0x00007f0554a5f78f g_main_context_dispatch (libglib-2.0.so.0 + 0x5278f)
                #15 0x000055b2acfbbed8 glib_pollfds_poll (qemu-system-i386 + 0x776ed8)
                #16 0x000055b2acdccbf2 qemu_main_loop (qemu-system-i386 + 0x587bf2)
                #17 0x000055b2acb0a5be main (qemu-system-i386 + 0x2c55be)
                #18 0x00007f0553000042 __libc_start_main (libc.so.6 + 0x27042)
                #19 0x000055b2acb0e97e _start (qemu-system-i386 + 0x2c997e)
                
                Stack trace of thread 853794:
                #0  0x00007f05530a1801 clock_nanosleep@@GLIBC_2.17 (libc.so.6 + 0xc8801)
                #1  0x00007f05530a7157 __nanosleep (libc.so.6 + 0xce157)
                #2  0x00007f0554a8b2d7 g_usleep (libglib-2.0.so.0 + 0x7e2d7)
                #3  0x000055b2acfbfa9a call_rcu_thread (qemu-system-i386 + 0x77aa9a)
                #4  0x000055b2acfd1cb9 qemu_thread_start (qemu-system-i386 + 0x78ccb9)
                #5  0x00007f05531ac432 start_thread (libpthread.so.0 + 0x9432)
                #6  0x00007f05530da913 __clone (libc.so.6 + 0x101913)
                
                Stack trace of thread 853796:
                #0  0x00007f0553016962 __sigtimedwait (libc.so.6 + 0x3d962)
                #1  0x00007f05531b75bc sigwait (libpthread.so.0 + 0x145bc)
                #2  0x000055b2ace8c223 dummy_cpu_thread_fn (qemu-system-i386 + 0x647223)
                #3  0x000055b2acfd1cb9 qemu_thread_start (qemu-system-i386 + 0x78ccb9)
                #4  0x00007f05531ac432 start_thread (libpthread.so.0 + 0x9432)
                #5  0x00007f05530da913 __clone (libc.so.6 + 0x101913)
                
                Stack trace of thread 853795:
                #0  0x00007f05530cfaaf __poll (libc.so.6 + 0xf6aaf)
                #1  0x00007f0554a5faae g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0x52aae)
                #2  0x00007f0554a5fe33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                #3  0x000055b2acedcd01 iothread_run (qemu-system-i386 + 0x697d01)
                #4  0x000055b2acfd1cb9 qemu_thread_start (qemu-system-i386 + 0x78ccb9)
                #5  0x00007f05531ac432 start_thread (libpthread.so.0 + 0x9432)
                #6  0x00007f05530da913 __clone (libc.so.6 + 0x101913)

GNU gdb (GDB) Fedora 9.1-6.fc32
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /scm/qemu-oot/qemu-system-i386...

warning: core file may not match specified executable file.
[New LWP 853792]
[New LWP 853794]
[New LWP 853796]
[New LWP 853795]
Core was generated by `./qemu-system-i386 -qtest unix:/tmp/qtest-853536.sock -qtest-log /dev/null -cha'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000055b2ace9ca0b in ?? ()
[Current thread is 1 (LWP 853792)]
warning: File "/scm/qemu/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load:/usr/lib/golang/src/runtime/runtime-gdb.py".
To enable execution of this file add
        add-auto-load-safe-path /scm/qemu/.gdbinit
line to your configuration file "/home/mst/.gdbinit".
To completely disable this security protection add
        set auto-load safe-path /
line to your configuration file "/home/mst/.gdbinit".
For more information about this security protection see the
"Auto-loading safe path" section in the GDB manual.  E.g., run from the shell:
        info "(gdb)Auto-loading safe path"


> 
> > Coiby Xu (1):
> >   test: new qTest case to test the vhost-user-blk-server
> > 
> > Stefan Hajnoczi (11):
> >   libvhost-user: follow QEMU comment style
> >   configure: introduce --enable-vhost-user-blk-server
> >   block/export: make vhost-user-blk config space little-endian
> >   block/export: fix vhost-user-blk get_config() information leak
> >   contrib/vhost-user-blk: fix get_config() information leak
> >   tests/qtest: add multi-queue test case to vhost-user-blk-test
> >   libqtest: add qtest_socket_server()
> >   vhost-user-blk-test: rename destroy_drive() to destroy_file()
> >   vhost-user-blk-test: close fork child file descriptors
> >   vhost-user-blk-test: drop unused return value
> >   vhost-user-blk-test: fix races by using fd passing
> > 
> >  configure                               |  15 +
> >  contrib/libvhost-user/libvhost-user.h   |  15 +-
> >  tests/qtest/libqos/libqtest.h           |  25 +
> >  tests/qtest/libqos/vhost-user-blk.h     |  48 ++
> >  block/export/export.c                   |   4 +-
> >  block/export/vhost-user-blk-server.c    |  28 +-
> >  contrib/vhost-user-blk/vhost-user-blk.c |   2 +
> >  tests/qtest/libqos/vhost-user-blk.c     | 129 ++++
> >  tests/qtest/libqtest.c                  |  76 ++-
> >  tests/qtest/vhost-user-blk-test.c       | 843 ++++++++++++++++++++++++
> >  block/export/meson.build                |   2 +-
> >  tests/qtest/libqos/meson.build          |   1 +
> >  tests/qtest/meson.build                 |   2 +
> >  util/meson.build                        |   2 +-
> >  14 files changed, 1151 insertions(+), 41 deletions(-)
> >  create mode 100644 tests/qtest/libqos/vhost-user-blk.h
> >  create mode 100644 tests/qtest/libqos/vhost-user-blk.c
> >  create mode 100644 tests/qtest/vhost-user-blk-test.c
> > 
> > -- 
> > 2.26.2
> > 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests
  2020-11-02 10:43   ` Michael S. Tsirkin
@ 2020-11-02 17:00     ` Stefan Hajnoczi
  0 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2020-11-02 17:00 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Laurent Vivier, Kevin Wolf, Thomas Huth, qemu-block, Jin Yu,
	qemu-devel, Coiby Xu, Raphael Norwitz, Paolo Bonzini, Max Reitz

On Mon, Nov 02, 2020 at 05:43:19AM -0500, Michael S. Tsirkin wrote:
> On Fri, Oct 30, 2020 at 08:42:22AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Oct 27, 2020 at 05:35:16PM +0000, Stefan Hajnoczi wrote:
> > > This patch series solves some issues with the new vhost-user-blk-server and
> > > adds the qtest test case. The test case was not included in the pull request
> > > that introduced the vhost-user-blk server because of reliability issues that
> > > are fixed in this patch series.
> > 
> > 
> > Fails make check for me:
> > 
> > Running test qtest-i386/qos-test
> > Broken pipe
> > ../qemu/tests/qtest/libqtest.c:161: kill_qemu() detected QEMU death from signal 11 (Segmentation fault) (core dumped)
> > ERROR qtest-i386/qos-test - too few tests run (expected 92, got 65)
> > make: *** [Makefile.mtest:1857: run-test-230] Error 1
> 
> And here's the coredump:

Thanks! qemu.git/master is broken. The segfault was introduced in
adb29c027341ba095a3ef4beef6aaef86d3a520e ("vhost-blk: set features
before setting inflight feature"). The code in question has no test
coverage so we didn't know that vhost-user-blk is broken in QEMU.

I have sent a patch to revert the commit. Let's do that for QEMU 5.2
unless a straightforward fix can be provided in place of the revert.

Stefan


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2020-11-02 17:04 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-27 17:35 [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 01/12] libvhost-user: follow QEMU comment style Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 02/12] configure: introduce --enable-vhost-user-blk-server Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 03/12] block/export: make vhost-user-blk config space little-endian Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 04/12] block/export: fix vhost-user-blk get_config() information leak Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 05/12] contrib/vhost-user-blk: fix " Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 06/12] test: new qTest case to test the vhost-user-blk-server Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 07/12] tests/qtest: add multi-queue test case to vhost-user-blk-test Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 08/12] libqtest: add qtest_socket_server() Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 09/12] vhost-user-blk-test: rename destroy_drive() to destroy_file() Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 10/12] vhost-user-blk-test: close fork child file descriptors Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 11/12] vhost-user-blk-test: drop unused return value Stefan Hajnoczi
2020-10-27 17:35 ` [PATCH 12/12] vhost-user-blk-test: fix races by using fd passing Stefan Hajnoczi
2020-10-30 12:42 ` [PATCH 00/12] block/export: vhost-user-blk server cleanups and tests Michael S. Tsirkin
2020-11-02 10:43   ` Michael S. Tsirkin
2020-11-02 17:00     ` Stefan Hajnoczi
