* [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support
@ 2019-06-21 9:40 Stefan Hajnoczi
2019-06-21 9:40 ` [Qemu-devel] [PATCH 1/4] libvhost-user: add vmsg_set_reply_u64() helper Stefan Hajnoczi
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: Stefan Hajnoczi @ 2019-06-21 9:40 UTC (permalink / raw)
To: qemu-devel
Cc: Michael S. Tsirkin, Dr. David Alan Gilbert, Sebastien Boeuf,
Gerd Hoffmann, Stefan Hajnoczi, Marc-André Lureau
Sebastien Boeuf <sebastien.boeuf@intel.com> pointed out that libvhost-user
doesn't advertise VHOST_USER_PROTOCOL_F_MQ. Today this prevents vhost-user-net
multiqueue from working.
In virtio-fs we also want to support multiqueue, so I'm sending patches to add
this. Advertising VHOST_USER_PROTOCOL_F_MQ costs nothing for any device type,
so we can do it unconditionally in libvhost-user.
Several related improvements are included:
Patch 1 - clean up duplicated and risky VhostUserMsg reply building code
Patch 2 - remove hardcoded 8 virtqueue limit in libvhost-user
Patch 4 - clarify vhost-user multiqueue specification
Stefan Hajnoczi (4):
libvhost-user: add vmsg_set_reply_u64() helper
libvhost-user: support many virtqueues
libvhost-user: implement VHOST_USER_PROTOCOL_F_MQ
docs: avoid vhost-user-net specifics in multiqueue section
contrib/libvhost-user/libvhost-user-glib.h | 2 +-
contrib/libvhost-user/libvhost-user.h | 10 +++-
contrib/libvhost-user/libvhost-user-glib.c | 12 +++-
contrib/libvhost-user/libvhost-user.c | 65 +++++++++++++---------
contrib/vhost-user-blk/vhost-user-blk.c | 16 +++---
contrib/vhost-user-gpu/main.c | 9 ++-
contrib/vhost-user-input/main.c | 11 +++-
contrib/vhost-user-scsi/vhost-user-scsi.c | 21 +++----
tests/vhost-user-bridge.c | 42 +++++++++-----
docs/interop/vhost-user.rst | 21 +++----
10 files changed, 132 insertions(+), 77 deletions(-)
--
2.21.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [Qemu-devel] [PATCH 1/4] libvhost-user: add vmsg_set_reply_u64() helper
2019-06-21 9:40 [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support Stefan Hajnoczi
@ 2019-06-21 9:40 ` Stefan Hajnoczi
2019-06-21 13:48 ` Marc-André Lureau
2019-06-21 9:40 ` [Qemu-devel] [PATCH 2/4] libvhost-user: support many virtqueues Stefan Hajnoczi
` (3 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: Stefan Hajnoczi @ 2019-06-21 9:40 UTC (permalink / raw)
To: qemu-devel
Cc: Michael S. Tsirkin, Dr. David Alan Gilbert, Sebastien Boeuf,
Gerd Hoffmann, Stefan Hajnoczi, Marc-André Lureau
The VhostUserMsg request is reused as the reply by message processing
functions. This is risky since request fields may corrupt the reply if
the vhost-user message handler function forgets to re-initialize them.
Changing this practice would be very invasive, but we can introduce a
helper function to make u64 payload replies safe. This also eliminates
code duplication in message processing functions.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
contrib/libvhost-user/libvhost-user.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 443b7e08c3..a8657c7af2 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -216,6 +216,15 @@ vmsg_close_fds(VhostUserMsg *vmsg)
}
}
+/* Set reply payload.u64 and clear request flags and fd_num */
+static void vmsg_set_reply_u64(VhostUserMsg *vmsg, uint64_t val)
+{
+ vmsg->flags = 0; /* defaults will be set by vu_send_reply() */
+ vmsg->size = sizeof(vmsg->payload.u64);
+ vmsg->payload.u64 = val;
+ vmsg->fd_num = 0;
+}
+
/* A test to see if we have userfault available */
static bool
have_userfault(void)
@@ -1168,10 +1177,7 @@ vu_get_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
features |= dev->iface->get_protocol_features(dev);
}
- vmsg->payload.u64 = features;
- vmsg->size = sizeof(vmsg->payload.u64);
- vmsg->fd_num = 0;
-
+ vmsg_set_reply_u64(vmsg, features);
return true;
}
@@ -1307,17 +1313,14 @@ out:
static bool
vu_set_postcopy_listen(VuDev *dev, VhostUserMsg *vmsg)
{
- vmsg->payload.u64 = -1;
- vmsg->size = sizeof(vmsg->payload.u64);
-
if (dev->nregions) {
vu_panic(dev, "Regions already registered at postcopy-listen");
+ vmsg_set_reply_u64(vmsg, -1);
return true;
}
dev->postcopy_listening = true;
- vmsg->flags = VHOST_USER_VERSION | VHOST_USER_REPLY_MASK;
- vmsg->payload.u64 = 0; /* Success */
+ vmsg_set_reply_u64(vmsg, 0);
return true;
}
@@ -1332,10 +1335,7 @@ vu_set_postcopy_end(VuDev *dev, VhostUserMsg *vmsg)
DPRINT("%s: Done close\n", __func__);
}
- vmsg->fd_num = 0;
- vmsg->payload.u64 = 0;
- vmsg->size = sizeof(vmsg->payload.u64);
- vmsg->flags = VHOST_USER_VERSION | VHOST_USER_REPLY_MASK;
+ vmsg_set_reply_u64(vmsg, 0);
DPRINT("%s: exit\n", __func__);
return true;
}
--
2.21.0
* [Qemu-devel] [PATCH 2/4] libvhost-user: support many virtqueues
2019-06-21 9:40 [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support Stefan Hajnoczi
2019-06-21 9:40 ` [Qemu-devel] [PATCH 1/4] libvhost-user: add vmsg_set_reply_u64() helper Stefan Hajnoczi
@ 2019-06-21 9:40 ` Stefan Hajnoczi
2019-06-21 13:48 ` Marc-André Lureau
2019-06-21 9:40 ` [Qemu-devel] [PATCH 3/4] libvhost-user: implement VHOST_USER_PROTOCOL_F_MQ Stefan Hajnoczi
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: Stefan Hajnoczi @ 2019-06-21 9:40 UTC (permalink / raw)
To: qemu-devel
Cc: Michael S. Tsirkin, Dr. David Alan Gilbert, Sebastien Boeuf,
Gerd Hoffmann, Stefan Hajnoczi, Marc-André Lureau
Currently libvhost-user is hardcoded to at most 8 virtqueues. The
device backend should decide the number of virtqueues, not
libvhost-user. This is important for multiqueue device backends where
the guest driver needs an accurate number of virtqueues.
This change breaks libvhost-user and libvhost-user-glib API stability.
There is no stability guarantee yet, so make this change now and update
all in-tree library users.
This patch touches up vhost-user-blk, vhost-user-gpu, vhost-user-input,
vhost-user-scsi, and vhost-user-bridge. If the device has a fixed
number of queues that exact number is used. Otherwise the previous
default of 8 virtqueues is used.
vu_init() and vug_init() can now fail if malloc() returns NULL. I
considered aborting with an error in libvhost-user, but it should be safe
to instantiate new vhost-user instances at runtime without the risk of
terminating the process. Therefore callers now need to handle vu_init()
failure.
vhost-user-blk and vhost-user-scsi duplicate virtqueue index checks that
libvhost-user already performs. Rather than modifying this code to use
max_queues, remove it completely since it's redundant.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
contrib/libvhost-user/libvhost-user-glib.h | 2 +-
contrib/libvhost-user/libvhost-user.h | 10 ++++--
contrib/libvhost-user/libvhost-user-glib.c | 12 +++++--
contrib/libvhost-user/libvhost-user.c | 32 ++++++++++++-----
contrib/vhost-user-blk/vhost-user-blk.c | 16 +++++----
contrib/vhost-user-gpu/main.c | 9 ++++-
contrib/vhost-user-input/main.c | 11 +++++-
contrib/vhost-user-scsi/vhost-user-scsi.c | 21 +++++------
tests/vhost-user-bridge.c | 42 ++++++++++++++--------
9 files changed, 104 insertions(+), 51 deletions(-)
diff --git a/contrib/libvhost-user/libvhost-user-glib.h b/contrib/libvhost-user/libvhost-user-glib.h
index d3200f3afc..64d539d93a 100644
--- a/contrib/libvhost-user/libvhost-user-glib.h
+++ b/contrib/libvhost-user/libvhost-user-glib.h
@@ -25,7 +25,7 @@ typedef struct VugDev {
GSource *src;
} VugDev;
-void vug_init(VugDev *dev, int socket,
+bool vug_init(VugDev *dev, uint16_t max_queues, int socket,
vu_panic_cb panic, const VuDevIface *iface);
void vug_deinit(VugDev *dev);
diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index 3b888ff0a5..46b600799b 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -25,7 +25,6 @@
#define VHOST_USER_F_PROTOCOL_FEATURES 30
#define VHOST_LOG_PAGE 4096
-#define VHOST_MAX_NR_VIRTQUEUE 8
#define VIRTQUEUE_MAX_SIZE 1024
#define VHOST_MEMORY_MAX_NREGIONS 8
@@ -353,7 +352,7 @@ struct VuDev {
int sock;
uint32_t nregions;
VuDevRegion regions[VHOST_MEMORY_MAX_NREGIONS];
- VuVirtq vq[VHOST_MAX_NR_VIRTQUEUE];
+ VuVirtq *vq;
VuDevInflightInfo inflight_info;
int log_call_fd;
int slave_fd;
@@ -362,6 +361,7 @@ struct VuDev {
uint64_t features;
uint64_t protocol_features;
bool broken;
+ uint16_t max_queues;
/* @set_watch: add or update the given fd to the watch set,
* call cb when condition is met */
@@ -391,6 +391,7 @@ typedef struct VuVirtqElement {
/**
* vu_init:
* @dev: a VuDev context
+ * @max_queues: maximum number of virtqueues
* @socket: the socket connected to vhost-user master
* @panic: a panic callback
* @set_watch: a set_watch callback
@@ -398,8 +399,11 @@ typedef struct VuVirtqElement {
* @iface: a VuDevIface structure with vhost-user device callbacks
*
* Intializes a VuDev vhost-user context.
+ *
+ * Returns: true on success, false on failure.
**/
-void vu_init(VuDev *dev,
+bool vu_init(VuDev *dev,
+ uint16_t max_queues,
int socket,
vu_panic_cb panic,
vu_set_watch_cb set_watch,
diff --git a/contrib/libvhost-user/libvhost-user-glib.c b/contrib/libvhost-user/libvhost-user-glib.c
index 42660a1b36..99edd2f3de 100644
--- a/contrib/libvhost-user/libvhost-user-glib.c
+++ b/contrib/libvhost-user/libvhost-user-glib.c
@@ -131,18 +131,24 @@ static void vug_watch(VuDev *dev, int condition, void *data)
}
}
-void
-vug_init(VugDev *dev, int socket,
+bool
+vug_init(VugDev *dev, uint16_t max_queues, int socket,
vu_panic_cb panic, const VuDevIface *iface)
{
g_assert(dev);
g_assert(iface);
- vu_init(&dev->parent, socket, panic, set_watch, remove_watch, iface);
+ if (!vu_init(&dev->parent, max_queues, socket, panic, set_watch,
+ remove_watch, iface)) {
+ return false;
+ }
+
dev->fdmap = g_hash_table_new_full(NULL, NULL, NULL,
(GDestroyNotify) g_source_destroy);
dev->src = vug_source_new(dev, socket, G_IO_IN, vug_watch, NULL);
+
+ return true;
}
void
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index a8657c7af2..0c88431e8f 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -493,9 +493,9 @@ vu_get_features_exec(VuDev *dev, VhostUserMsg *vmsg)
static void
vu_set_enable_all_rings(VuDev *dev, bool enabled)
{
- int i;
+ uint16_t i;
- for (i = 0; i < VHOST_MAX_NR_VIRTQUEUE; i++) {
+ for (i = 0; i < dev->max_queues; i++) {
dev->vq[i].enable = enabled;
}
}
@@ -916,7 +916,7 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
{
int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
- if (index >= VHOST_MAX_NR_VIRTQUEUE) {
+ if (index >= dev->max_queues) {
vmsg_close_fds(vmsg);
vu_panic(dev, "Invalid queue index: %u", index);
return false;
@@ -1213,7 +1213,7 @@ vu_set_vring_enable_exec(VuDev *dev, VhostUserMsg *vmsg)
DPRINT("State.index: %d\n", index);
DPRINT("State.enable: %d\n", enable);
- if (index >= VHOST_MAX_NR_VIRTQUEUE) {
+ if (index >= dev->max_queues) {
vu_panic(dev, "Invalid vring_enable index: %u", index);
return false;
}
@@ -1582,7 +1582,7 @@ vu_deinit(VuDev *dev)
}
dev->nregions = 0;
- for (i = 0; i < VHOST_MAX_NR_VIRTQUEUE; i++) {
+ for (i = 0; i < dev->max_queues; i++) {
VuVirtq *vq = &dev->vq[i];
if (vq->call_fd != -1) {
@@ -1627,18 +1627,23 @@ vu_deinit(VuDev *dev)
if (dev->sock != -1) {
close(dev->sock);
}
+
+ free(dev->vq);
+ dev->vq = NULL;
}
-void
+bool
vu_init(VuDev *dev,
+ uint16_t max_queues,
int socket,
vu_panic_cb panic,
vu_set_watch_cb set_watch,
vu_remove_watch_cb remove_watch,
const VuDevIface *iface)
{
- int i;
+ uint16_t i;
+ assert(max_queues > 0);
assert(socket >= 0);
assert(set_watch);
assert(remove_watch);
@@ -1654,18 +1659,27 @@ vu_init(VuDev *dev,
dev->iface = iface;
dev->log_call_fd = -1;
dev->slave_fd = -1;
- for (i = 0; i < VHOST_MAX_NR_VIRTQUEUE; i++) {
+
+ dev->vq = malloc(max_queues * sizeof(dev->vq[0]));
+ if (!dev->vq) {
+ DPRINT("%s: failed to malloc virtqueues\n", __func__);
+ return false;
+ }
+
+ for (i = 0; i < max_queues; i++) {
dev->vq[i] = (VuVirtq) {
.call_fd = -1, .kick_fd = -1, .err_fd = -1,
.notification = true,
};
}
+
+ return true;
}
VuVirtq *
vu_get_queue(VuDev *dev, int qidx)
{
- assert(qidx < VHOST_MAX_NR_VIRTQUEUE);
+ assert(qidx < dev->max_queues);
return &dev->vq[qidx];
}
diff --git a/contrib/vhost-user-blk/vhost-user-blk.c b/contrib/vhost-user-blk/vhost-user-blk.c
index 86a3987744..ae61034656 100644
--- a/contrib/vhost-user-blk/vhost-user-blk.c
+++ b/contrib/vhost-user-blk/vhost-user-blk.c
@@ -25,6 +25,10 @@
#include <sys/ioctl.h>
#endif
+enum {
+ VHOST_USER_BLK_MAX_QUEUES = 8,
+};
+
struct virtio_blk_inhdr {
unsigned char status;
};
@@ -334,12 +338,6 @@ static void vub_process_vq(VuDev *vu_dev, int idx)
VuVirtq *vq;
int ret;
- if ((idx < 0) || (idx >= VHOST_MAX_NR_VIRTQUEUE)) {
- fprintf(stderr, "VQ Index out of range: %d\n", idx);
- vub_panic_cb(vu_dev, NULL);
- return;
- }
-
gdev = container_of(vu_dev, VugDev, parent);
vdev_blk = container_of(gdev, VubDev, parent);
assert(vdev_blk);
@@ -631,7 +629,11 @@ int main(int argc, char **argv)
vdev_blk->enable_ro = true;
}
- vug_init(&vdev_blk->parent, csock, vub_panic_cb, &vub_iface);
+ if (!vug_init(&vdev_blk->parent, VHOST_USER_BLK_MAX_QUEUES, csock,
+ vub_panic_cb, &vub_iface)) {
+ fprintf(stderr, "Failed to initialize libvhost-user-glib\n");
+ goto err;
+ }
g_main_loop_run(vdev_blk->loop);
diff --git a/contrib/vhost-user-gpu/main.c b/contrib/vhost-user-gpu/main.c
index 04b753046f..b45d2019b4 100644
--- a/contrib/vhost-user-gpu/main.c
+++ b/contrib/vhost-user-gpu/main.c
@@ -25,6 +25,10 @@
#include "virgl.h"
#include "vugbm.h"
+enum {
+ VHOST_USER_GPU_MAX_QUEUES = 2,
+};
+
struct virtio_gpu_simple_resource {
uint32_t resource_id;
uint32_t width;
@@ -1169,7 +1173,10 @@ main(int argc, char *argv[])
exit(EXIT_FAILURE);
}
- vug_init(&g.dev, fd, vg_panic, &vuiface);
+ if (!vug_init(&g.dev, VHOST_USER_GPU_MAX_QUEUES, fd, vg_panic, &vuiface)) {
+ g_printerr("Failed to initialize libvhost-user-glib.\n");
+ exit(EXIT_FAILURE);
+ }
loop = g_main_loop_new(NULL, FALSE);
g_main_loop_run(loop);
diff --git a/contrib/vhost-user-input/main.c b/contrib/vhost-user-input/main.c
index 8b4e7d2536..449fd2171a 100644
--- a/contrib/vhost-user-input/main.c
+++ b/contrib/vhost-user-input/main.c
@@ -17,6 +17,10 @@
#include "standard-headers/linux/virtio_input.h"
#include "qapi/error.h"
+enum {
+ VHOST_USER_INPUT_MAX_QUEUES = 2,
+};
+
typedef struct virtio_input_event virtio_input_event;
typedef struct virtio_input_config virtio_input_config;
@@ -384,7 +388,12 @@ main(int argc, char *argv[])
g_printerr("Invalid vhost-user socket.\n");
exit(EXIT_FAILURE);
}
- vug_init(&vi.dev, fd, vi_panic, &vuiface);
+
+ if (!vug_init(&vi.dev, VHOST_USER_INPUT_MAX_QUEUES, fd, vi_panic,
+ &vuiface)) {
+ g_printerr("Failed to initialize libvhost-user-glib.\n");
+ exit(EXIT_FAILURE);
+ }
loop = g_main_loop_new(NULL, FALSE);
g_main_loop_run(loop);
diff --git a/contrib/vhost-user-scsi/vhost-user-scsi.c b/contrib/vhost-user-scsi/vhost-user-scsi.c
index 496dd6e693..0fc14d7899 100644
--- a/contrib/vhost-user-scsi/vhost-user-scsi.c
+++ b/contrib/vhost-user-scsi/vhost-user-scsi.c
@@ -19,6 +19,10 @@
#define VUS_ISCSI_INITIATOR "iqn.2016-11.com.nutanix:vhost-user-scsi"
+enum {
+ VHOST_USER_SCSI_MAX_QUEUES = 8,
+};
+
typedef struct VusIscsiLun {
struct iscsi_context *iscsi_ctx;
int iscsi_lun;
@@ -231,11 +235,6 @@ static void vus_proc_req(VuDev *vu_dev, int idx)
gdev = container_of(vu_dev, VugDev, parent);
vdev_scsi = container_of(gdev, VusDev, parent);
- if (idx < 0 || idx >= VHOST_MAX_NR_VIRTQUEUE) {
- g_warning("VQ Index out of range: %d", idx);
- vus_panic_cb(vu_dev, NULL);
- return;
- }
vq = vu_get_queue(vu_dev, idx);
if (!vq) {
@@ -295,12 +294,6 @@ static void vus_queue_set_started(VuDev *vu_dev, int idx, bool started)
assert(vu_dev);
- if (idx < 0 || idx >= VHOST_MAX_NR_VIRTQUEUE) {
- g_warning("VQ Index out of range: %d", idx);
- vus_panic_cb(vu_dev, NULL);
- return;
- }
-
vq = vu_get_queue(vu_dev, idx);
if (idx == 0 || idx == 1) {
@@ -398,7 +391,11 @@ int main(int argc, char **argv)
goto err;
}
- vug_init(&vdev_scsi->parent, csock, vus_panic_cb, &vus_iface);
+ if (!vug_init(&vdev_scsi->parent, VHOST_USER_SCSI_MAX_QUEUES, csock,
+ vus_panic_cb, &vus_iface)) {
+ g_printerr("Failed to initialize libvhost-user-glib\n");
+ goto err;
+ }
g_main_loop_run(vdev_scsi->loop);
diff --git a/tests/vhost-user-bridge.c b/tests/vhost-user-bridge.c
index 0bb03af0e5..c4e350e1f5 100644
--- a/tests/vhost-user-bridge.c
+++ b/tests/vhost-user-bridge.c
@@ -45,6 +45,10 @@
} \
} while (0)
+enum {
+ VHOST_USER_BRIDGE_MAX_QUEUES = 8,
+};
+
typedef void (*CallbackFunc)(int sock, void *ctx);
typedef struct Event {
@@ -512,12 +516,16 @@ vubr_accept_cb(int sock, void *ctx)
}
DPRINT("Got connection from remote peer on sock %d\n", conn_fd);
- vu_init(&dev->vudev,
- conn_fd,
- vubr_panic,
- vubr_set_watch,
- vubr_remove_watch,
- &vuiface);
+ if (!vu_init(&dev->vudev,
+ VHOST_USER_BRIDGE_MAX_QUEUES,
+ conn_fd,
+ vubr_panic,
+ vubr_set_watch,
+ vubr_remove_watch,
+ &vuiface)) {
+ fprintf(stderr, "Failed to initialize libvhost-user\n");
+ exit(1);
+ }
dispatcher_add(&dev->dispatcher, conn_fd, ctx, vubr_receive_cb);
dispatcher_remove(&dev->dispatcher, sock);
@@ -560,12 +568,18 @@ vubr_new(const char *path, bool client)
if (connect(dev->sock, (struct sockaddr *)&un, len) == -1) {
vubr_die("connect");
}
- vu_init(&dev->vudev,
- dev->sock,
- vubr_panic,
- vubr_set_watch,
- vubr_remove_watch,
- &vuiface);
+
+ if (!vu_init(&dev->vudev,
+ VHOST_USER_BRIDGE_MAX_QUEUES,
+ dev->sock,
+ vubr_panic,
+ vubr_set_watch,
+ vubr_remove_watch,
+ &vuiface)) {
+ fprintf(stderr, "Failed to initialize libvhost-user\n");
+ exit(1);
+ }
+
cb = vubr_receive_cb;
}
@@ -584,7 +598,7 @@ static void *notifier_thread(void *arg)
int qidx;
while (true) {
- for (qidx = 0; qidx < VHOST_MAX_NR_VIRTQUEUE; qidx++) {
+ for (qidx = 0; qidx < VHOST_USER_BRIDGE_MAX_QUEUES; qidx++) {
uint16_t *n = vubr->notifier.addr + pagesize * qidx;
if (*n == qidx) {
@@ -616,7 +630,7 @@ vubr_host_notifier_setup(VubrDev *dev)
void *addr;
int fd;
- length = getpagesize() * VHOST_MAX_NR_VIRTQUEUE;
+ length = getpagesize() * VHOST_USER_BRIDGE_MAX_QUEUES;
fd = mkstemp(template);
if (fd < 0) {
--
2.21.0
* [Qemu-devel] [PATCH 3/4] libvhost-user: implement VHOST_USER_PROTOCOL_F_MQ
2019-06-21 9:40 [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support Stefan Hajnoczi
2019-06-21 9:40 ` [Qemu-devel] [PATCH 1/4] libvhost-user: add vmsg_set_reply_u64() helper Stefan Hajnoczi
2019-06-21 9:40 ` [Qemu-devel] [PATCH 2/4] libvhost-user: support many virtqueues Stefan Hajnoczi
@ 2019-06-21 9:40 ` Stefan Hajnoczi
2019-06-21 13:48 ` Marc-André Lureau
2019-06-21 9:40 ` [Qemu-devel] [PATCH 4/4] docs: avoid vhost-user-net specifics in multiqueue section Stefan Hajnoczi
2019-07-03 9:20 ` [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support Stefan Hajnoczi
4 siblings, 1 reply; 11+ messages in thread
From: Stefan Hajnoczi @ 2019-06-21 9:40 UTC (permalink / raw)
To: qemu-devel
Cc: Michael S. Tsirkin, Dr. David Alan Gilbert, Sebastien Boeuf,
Gerd Hoffmann, Stefan Hajnoczi, Marc-André Lureau
Existing vhost-user device backends, including vhost-user-scsi and
vhost-user-blk, support multiqueue but libvhost-user currently does not
advertise this.
VHOST_USER_PROTOCOL_F_MQ enables the VHOST_USER_GET_QUEUE_NUM request
needed for a vhost-user master to query the number of queues. For
example, QEMU's vhost-user-net master depends on
VHOST_USER_PROTOCOL_F_MQ for multiqueue.
If you're wondering how any device backend with more than one virtqueue
functions today, it's because device types with a fixed number of
virtqueues do not require querying the number of queues. Therefore the
vhost-user master for vhost-user-input with 2 virtqueues, for example,
doesn't actually depend on VHOST_USER_PROTOCOL_F_MQ. It just enables
virtqueues 0 and 1 without asking.
Let there be multiqueue!
Suggested-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
contrib/libvhost-user/libvhost-user.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 0c88431e8f..312c54f260 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -1160,7 +1160,8 @@ vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
static bool
vu_get_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
{
- uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD |
+ uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_MQ |
+ 1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD |
1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ |
1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER |
1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD;
@@ -1200,8 +1201,8 @@ vu_set_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
static bool
vu_get_queue_num_exec(VuDev *dev, VhostUserMsg *vmsg)
{
- DPRINT("Function %s() not implemented yet.\n", __func__);
- return false;
+ vmsg_set_reply_u64(vmsg, dev->max_queues);
+ return true;
}
static bool
--
2.21.0
* [Qemu-devel] [PATCH 4/4] docs: avoid vhost-user-net specifics in multiqueue section
2019-06-21 9:40 [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support Stefan Hajnoczi
` (2 preceding siblings ...)
2019-06-21 9:40 ` [Qemu-devel] [PATCH 3/4] libvhost-user: implement VHOST_USER_PROTOCOL_F_MQ Stefan Hajnoczi
@ 2019-06-21 9:40 ` Stefan Hajnoczi
2019-06-21 13:52 ` Marc-André Lureau
2019-07-03 9:20 ` [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support Stefan Hajnoczi
4 siblings, 1 reply; 11+ messages in thread
From: Stefan Hajnoczi @ 2019-06-21 9:40 UTC (permalink / raw)
To: qemu-devel
Cc: Michael S. Tsirkin, Dr. David Alan Gilbert, Sebastien Boeuf,
Gerd Hoffmann, Stefan Hajnoczi, Marc-André Lureau
The "Multiple queue support" section makes references to vhost-user-net
"queue pairs". This is confusing for two reasons:
1. This actually applies to all device types, not just vhost-user-net.
2. VHOST_USER_GET_QUEUE_NUM returns the number of virtqueues, not the
number of queue pairs.
Reword the section so that the vhost-user-net specific part is relegated
to the very end: we acknowledge that vhost-user-net historically
automatically enabled the first queue pair.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
docs/interop/vhost-user.rst | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index dc0ff9211f..5750668aba 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -324,19 +324,20 @@ must support changing some configuration aspects on the fly.
Multiple queue support
----------------------
-Multiple queue is treated as a protocol extension, hence the slave has
-to implement protocol features first. The multiple queues feature is
-supported only when the protocol feature ``VHOST_USER_PROTOCOL_F_MQ``
-(bit 0) is set.
+Multiple queue support allows the slave to advertise the maximum number of
+queues. This is treated as a protocol extension, hence the slave has to
+implement protocol features first. The multiple queues feature is supported
+only when the protocol feature ``VHOST_USER_PROTOCOL_F_MQ`` (bit 0) is set.
-The max number of queue pairs the slave supports can be queried with
-message ``VHOST_USER_GET_QUEUE_NUM``. Master should stop when the
-number of requested queues is bigger than that.
+The max number of queues the slave supports can be queried with message
+``VHOST_USER_GET_QUEUE_NUM``. Master should stop when the number of requested
+queues is bigger than that.
As all queues share one connection, the master uses a unique index for each
-queue in the sent message to identify a specified queue. One queue pair
-is enabled initially. More queues are enabled dynamically, by sending
-message ``VHOST_USER_SET_VRING_ENABLE``.
+queue in the sent message to identify a specified queue.
+
+The master enables queues by sending message ``VHOST_USER_SET_VRING_ENABLE``.
+vhost-user-net has historically automatically enabled the first queue pair.
Migration
---------
--
2.21.0
* Re: [Qemu-devel] [PATCH 3/4] libvhost-user: implement VHOST_USER_PROTOCOL_F_MQ
2019-06-21 9:40 ` [Qemu-devel] [PATCH 3/4] libvhost-user: implement VHOST_USER_PROTOCOL_F_MQ Stefan Hajnoczi
@ 2019-06-21 13:48 ` Marc-André Lureau
0 siblings, 0 replies; 11+ messages in thread
From: Marc-André Lureau @ 2019-06-21 13:48 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Sebastien Boeuf, Michael S. Tsirkin, qemu-devel,
Dr. David Alan Gilbert, Gerd Hoffmann
On Fri, Jun 21, 2019 at 11:40 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> Existing vhost-user device backends, including vhost-user-scsi and
> vhost-user-blk, support multiqueue but libvhost-user currently does not
> advertise this.
>
> VHOST_USER_PROTOCOL_F_MQ enables the VHOST_USER_GET_QUEUE_NUM request
> needed for a vhost-user master to query the number of queues. For
> example, QEMU's vhost-user-net master depends on
> VHOST_USER_PROTOCOL_F_MQ for multiqueue.
>
> If you're wondering how any device backend with more than one virtqueue
> functions today, it's because device types with a fixed number of
> virtqueues do not require querying the number of queues. Therefore the
> vhost-user master for vhost-user-input with 2 virtqueues, for example,
> doesn't actually depend on VHOST_USER_PROTOCOL_F_MQ. It just enables
> virtqueues 0 and 1 without asking.
>
> Let there be multiqueue!
>
> Suggested-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
> contrib/libvhost-user/libvhost-user.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
> index 0c88431e8f..312c54f260 100644
> --- a/contrib/libvhost-user/libvhost-user.c
> +++ b/contrib/libvhost-user/libvhost-user.c
> @@ -1160,7 +1160,8 @@ vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
> static bool
> vu_get_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
> {
> - uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD |
> + uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_MQ |
> + 1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD |
> 1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ |
> 1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER |
> 1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD;
> @@ -1200,8 +1201,8 @@ vu_set_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
> static bool
> vu_get_queue_num_exec(VuDev *dev, VhostUserMsg *vmsg)
> {
> - DPRINT("Function %s() not implemented yet.\n", __func__);
> - return false;
> + vmsg_set_reply_u64(vmsg, dev->max_queues);
> + return true;
> }
>
> static bool
> --
> 2.21.0
>
* Re: [Qemu-devel] [PATCH 2/4] libvhost-user: support many virtqueues
2019-06-21 9:40 ` [Qemu-devel] [PATCH 2/4] libvhost-user: support many virtqueues Stefan Hajnoczi
@ 2019-06-21 13:48 ` Marc-André Lureau
2019-06-21 16:27 ` Stefan Hajnoczi
0 siblings, 1 reply; 11+ messages in thread
From: Marc-André Lureau @ 2019-06-21 13:48 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Sebastien Boeuf, Michael S. Tsirkin, qemu-devel,
Dr. David Alan Gilbert, Gerd Hoffmann
Hi
On Fri, Jun 21, 2019 at 11:40 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> Currently libvhost-user is hardcoded to at most 8 virtqueues. The
> device backend should decide the number of virtqueues, not
> libvhost-user. This is important for multiqueue device backends where
> the guest driver needs an accurate number of virtqueues.
>
> This change breaks libvhost-user and libvhost-user-glib API stability.
> There is no stability guarantee yet, so make this change now and update
> all in-tree library users.
>
> This patch touches up vhost-user-blk, vhost-user-gpu, vhost-user-input,
> vhost-user-scsi, and vhost-user-bridge. If the device has a fixed
> number of queues that exact number is used. Otherwise the previous
> default of 8 virtqueues is used.
>
> vu_init() and vug_init() can now fail if malloc() returns NULL. I
> considered aborting with an error in libvhost-user but it should be safe
> to instantiate new vhost-user instances at runtime without risk of
> terminating the process. Therefore callers need to handle the vu_init()
> failure now.
>
> vhost-user-blk and vhost-user-scsi duplicate virtqueue index checks that
> are already performed by libvhost-user. This code would need to be
> modified to use max_queues but remove it completely instead since it's
> redundant.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> contrib/libvhost-user/libvhost-user-glib.h | 2 +-
> contrib/libvhost-user/libvhost-user.h | 10 ++++--
> contrib/libvhost-user/libvhost-user-glib.c | 12 +++++--
> contrib/libvhost-user/libvhost-user.c | 32 ++++++++++++-----
> contrib/vhost-user-blk/vhost-user-blk.c | 16 +++++----
> contrib/vhost-user-gpu/main.c | 9 ++++-
> contrib/vhost-user-input/main.c | 11 +++++-
> contrib/vhost-user-scsi/vhost-user-scsi.c | 21 +++++------
> tests/vhost-user-bridge.c | 42 ++++++++++++++--------
> 9 files changed, 104 insertions(+), 51 deletions(-)
>
> diff --git a/contrib/libvhost-user/libvhost-user-glib.h b/contrib/libvhost-user/libvhost-user-glib.h
> index d3200f3afc..64d539d93a 100644
> --- a/contrib/libvhost-user/libvhost-user-glib.h
> +++ b/contrib/libvhost-user/libvhost-user-glib.h
> @@ -25,7 +25,7 @@ typedef struct VugDev {
> GSource *src;
> } VugDev;
>
> -void vug_init(VugDev *dev, int socket,
> +bool vug_init(VugDev *dev, uint16_t max_queues, int socket,
> vu_panic_cb panic, const VuDevIface *iface);
> void vug_deinit(VugDev *dev);
>
> diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
> index 3b888ff0a5..46b600799b 100644
> --- a/contrib/libvhost-user/libvhost-user.h
> +++ b/contrib/libvhost-user/libvhost-user.h
> @@ -25,7 +25,6 @@
> #define VHOST_USER_F_PROTOCOL_FEATURES 30
> #define VHOST_LOG_PAGE 4096
>
> -#define VHOST_MAX_NR_VIRTQUEUE 8
> #define VIRTQUEUE_MAX_SIZE 1024
>
> #define VHOST_MEMORY_MAX_NREGIONS 8
> @@ -353,7 +352,7 @@ struct VuDev {
> int sock;
> uint32_t nregions;
> VuDevRegion regions[VHOST_MEMORY_MAX_NREGIONS];
> - VuVirtq vq[VHOST_MAX_NR_VIRTQUEUE];
> + VuVirtq *vq;
> VuDevInflightInfo inflight_info;
> int log_call_fd;
> int slave_fd;
> @@ -362,6 +361,7 @@ struct VuDev {
> uint64_t features;
> uint64_t protocol_features;
> bool broken;
> + uint16_t max_queues;
>
> /* @set_watch: add or update the given fd to the watch set,
> * call cb when condition is met */
> @@ -391,6 +391,7 @@ typedef struct VuVirtqElement {
> /**
> * vu_init:
> * @dev: a VuDev context
> + * @max_queues: maximum number of virtqueues
> * @socket: the socket connected to vhost-user master
> * @panic: a panic callback
> * @set_watch: a set_watch callback
> @@ -398,8 +399,11 @@ typedef struct VuVirtqElement {
> * @iface: a VuDevIface structure with vhost-user device callbacks
> *
> * Intializes a VuDev vhost-user context.
> + *
> + * Returns: true on success, false on failure.
> **/
> -void vu_init(VuDev *dev,
> +bool vu_init(VuDev *dev,
> + uint16_t max_queues,
> int socket,
> vu_panic_cb panic,
> vu_set_watch_cb set_watch,
> diff --git a/contrib/libvhost-user/libvhost-user-glib.c b/contrib/libvhost-user/libvhost-user-glib.c
> index 42660a1b36..99edd2f3de 100644
> --- a/contrib/libvhost-user/libvhost-user-glib.c
> +++ b/contrib/libvhost-user/libvhost-user-glib.c
> @@ -131,18 +131,24 @@ static void vug_watch(VuDev *dev, int condition, void *data)
> }
> }
>
> -void
> -vug_init(VugDev *dev, int socket,
> +bool
> +vug_init(VugDev *dev, uint16_t max_queues, int socket,
> vu_panic_cb panic, const VuDevIface *iface)
> {
> g_assert(dev);
> g_assert(iface);
>
> - vu_init(&dev->parent, socket, panic, set_watch, remove_watch, iface);
> + if (!vu_init(&dev->parent, max_queues, socket, panic, set_watch,
> + remove_watch, iface)) {
> + return false;
> + }
> +
> dev->fdmap = g_hash_table_new_full(NULL, NULL, NULL,
> (GDestroyNotify) g_source_destroy);
>
> dev->src = vug_source_new(dev, socket, G_IO_IN, vug_watch, NULL);
> +
> + return true;
> }
>
> void
> diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
> index a8657c7af2..0c88431e8f 100644
> --- a/contrib/libvhost-user/libvhost-user.c
> +++ b/contrib/libvhost-user/libvhost-user.c
> @@ -493,9 +493,9 @@ vu_get_features_exec(VuDev *dev, VhostUserMsg *vmsg)
> static void
> vu_set_enable_all_rings(VuDev *dev, bool enabled)
> {
> - int i;
> + uint16_t i;
>
> - for (i = 0; i < VHOST_MAX_NR_VIRTQUEUE; i++) {
> + for (i = 0; i < dev->max_queues; i++) {
> dev->vq[i].enable = enabled;
> }
> }
> @@ -916,7 +916,7 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> {
> int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
>
> - if (index >= VHOST_MAX_NR_VIRTQUEUE) {
> + if (index >= dev->max_queues) {
> vmsg_close_fds(vmsg);
> vu_panic(dev, "Invalid queue index: %u", index);
> return false;
> @@ -1213,7 +1213,7 @@ vu_set_vring_enable_exec(VuDev *dev, VhostUserMsg *vmsg)
> DPRINT("State.index: %d\n", index);
> DPRINT("State.enable: %d\n", enable);
>
> - if (index >= VHOST_MAX_NR_VIRTQUEUE) {
> + if (index >= dev->max_queues) {
> vu_panic(dev, "Invalid vring_enable index: %u", index);
> return false;
> }
> @@ -1582,7 +1582,7 @@ vu_deinit(VuDev *dev)
> }
> dev->nregions = 0;
>
> - for (i = 0; i < VHOST_MAX_NR_VIRTQUEUE; i++) {
> + for (i = 0; i < dev->max_queues; i++) {
> VuVirtq *vq = &dev->vq[i];
>
> if (vq->call_fd != -1) {
> @@ -1627,18 +1627,23 @@ vu_deinit(VuDev *dev)
> if (dev->sock != -1) {
> close(dev->sock);
> }
> +
> + free(dev->vq);
> + dev->vq = NULL;
> }
>
> -void
> +bool
> vu_init(VuDev *dev,
> + uint16_t max_queues,
> int socket,
> vu_panic_cb panic,
> vu_set_watch_cb set_watch,
> vu_remove_watch_cb remove_watch,
> const VuDevIface *iface)
> {
> - int i;
> + uint16_t i;
>
> + assert(max_queues > 0);
> assert(socket >= 0);
> assert(set_watch);
> assert(remove_watch);
> @@ -1654,18 +1659,27 @@ vu_init(VuDev *dev,
> dev->iface = iface;
> dev->log_call_fd = -1;
> dev->slave_fd = -1;
> - for (i = 0; i < VHOST_MAX_NR_VIRTQUEUE; i++) {
> +
> + dev->vq = malloc(max_queues * sizeof(dev->vq[0]));
> + if (!dev->vq) {
> + DPRINT("%s: failed to malloc virtqueues\n", __func__);
> + return false;
> + }
> +
> + for (i = 0; i < max_queues; i++) {
> dev->vq[i] = (VuVirtq) {
> .call_fd = -1, .kick_fd = -1, .err_fd = -1,
> .notification = true,
> };
> }
> +
> + return true;
> }
>
> VuVirtq *
> vu_get_queue(VuDev *dev, int qidx)
> {
> - assert(qidx < VHOST_MAX_NR_VIRTQUEUE);
> + assert(qidx < dev->max_queues);
> return &dev->vq[qidx];
> }
>
> diff --git a/contrib/vhost-user-blk/vhost-user-blk.c b/contrib/vhost-user-blk/vhost-user-blk.c
> index 86a3987744..ae61034656 100644
> --- a/contrib/vhost-user-blk/vhost-user-blk.c
> +++ b/contrib/vhost-user-blk/vhost-user-blk.c
> @@ -25,6 +25,10 @@
> #include <sys/ioctl.h>
> #endif
>
> +enum {
> + VHOST_USER_BLK_MAX_QUEUES = 8,
> +};
why do you use an enum (and not const int)? (similarly for other devices)
other than that,
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> +
> struct virtio_blk_inhdr {
> unsigned char status;
> };
> @@ -334,12 +338,6 @@ static void vub_process_vq(VuDev *vu_dev, int idx)
> VuVirtq *vq;
> int ret;
>
> - if ((idx < 0) || (idx >= VHOST_MAX_NR_VIRTQUEUE)) {
> - fprintf(stderr, "VQ Index out of range: %d\n", idx);
> - vub_panic_cb(vu_dev, NULL);
> - return;
> - }
> -
> gdev = container_of(vu_dev, VugDev, parent);
> vdev_blk = container_of(gdev, VubDev, parent);
> assert(vdev_blk);
> @@ -631,7 +629,11 @@ int main(int argc, char **argv)
> vdev_blk->enable_ro = true;
> }
>
> - vug_init(&vdev_blk->parent, csock, vub_panic_cb, &vub_iface);
> + if (!vug_init(&vdev_blk->parent, VHOST_USER_BLK_MAX_QUEUES, csock,
> + vub_panic_cb, &vub_iface)) {
> + fprintf(stderr, "Failed to initialize libvhost-user-glib\n");
> + goto err;
> + }
>
> g_main_loop_run(vdev_blk->loop);
>
> diff --git a/contrib/vhost-user-gpu/main.c b/contrib/vhost-user-gpu/main.c
> index 04b753046f..b45d2019b4 100644
> --- a/contrib/vhost-user-gpu/main.c
> +++ b/contrib/vhost-user-gpu/main.c
> @@ -25,6 +25,10 @@
> #include "virgl.h"
> #include "vugbm.h"
>
> +enum {
> + VHOST_USER_GPU_MAX_QUEUES = 2,
> +};
> +
> struct virtio_gpu_simple_resource {
> uint32_t resource_id;
> uint32_t width;
> @@ -1169,7 +1173,10 @@ main(int argc, char *argv[])
> exit(EXIT_FAILURE);
> }
>
> - vug_init(&g.dev, fd, vg_panic, &vuiface);
> + if (!vug_init(&g.dev, VHOST_USER_GPU_MAX_QUEUES, fd, vg_panic, &vuiface)) {
> + g_printerr("Failed to initialize libvhost-user-glib.\n");
> + exit(EXIT_FAILURE);
> + }
>
> loop = g_main_loop_new(NULL, FALSE);
> g_main_loop_run(loop);
> diff --git a/contrib/vhost-user-input/main.c b/contrib/vhost-user-input/main.c
> index 8b4e7d2536..449fd2171a 100644
> --- a/contrib/vhost-user-input/main.c
> +++ b/contrib/vhost-user-input/main.c
> @@ -17,6 +17,10 @@
> #include "standard-headers/linux/virtio_input.h"
> #include "qapi/error.h"
>
> +enum {
> + VHOST_USER_INPUT_MAX_QUEUES = 2,
> +};
> +
> typedef struct virtio_input_event virtio_input_event;
> typedef struct virtio_input_config virtio_input_config;
>
> @@ -384,7 +388,12 @@ main(int argc, char *argv[])
> g_printerr("Invalid vhost-user socket.\n");
> exit(EXIT_FAILURE);
> }
> - vug_init(&vi.dev, fd, vi_panic, &vuiface);
> +
> + if (!vug_init(&vi.dev, VHOST_USER_INPUT_MAX_QUEUES, fd, vi_panic,
> + &vuiface)) {
> + g_printerr("Failed to initialize libvhost-user-glib.\n");
> + exit(EXIT_FAILURE);
> + }
>
> loop = g_main_loop_new(NULL, FALSE);
> g_main_loop_run(loop);
> diff --git a/contrib/vhost-user-scsi/vhost-user-scsi.c b/contrib/vhost-user-scsi/vhost-user-scsi.c
> index 496dd6e693..0fc14d7899 100644
> --- a/contrib/vhost-user-scsi/vhost-user-scsi.c
> +++ b/contrib/vhost-user-scsi/vhost-user-scsi.c
> @@ -19,6 +19,10 @@
>
> #define VUS_ISCSI_INITIATOR "iqn.2016-11.com.nutanix:vhost-user-scsi"
>
> +enum {
> + VHOST_USER_SCSI_MAX_QUEUES = 8,
> +};
> +
> typedef struct VusIscsiLun {
> struct iscsi_context *iscsi_ctx;
> int iscsi_lun;
> @@ -231,11 +235,6 @@ static void vus_proc_req(VuDev *vu_dev, int idx)
>
> gdev = container_of(vu_dev, VugDev, parent);
> vdev_scsi = container_of(gdev, VusDev, parent);
> - if (idx < 0 || idx >= VHOST_MAX_NR_VIRTQUEUE) {
> - g_warning("VQ Index out of range: %d", idx);
> - vus_panic_cb(vu_dev, NULL);
> - return;
> - }
>
> vq = vu_get_queue(vu_dev, idx);
> if (!vq) {
> @@ -295,12 +294,6 @@ static void vus_queue_set_started(VuDev *vu_dev, int idx, bool started)
>
> assert(vu_dev);
>
> - if (idx < 0 || idx >= VHOST_MAX_NR_VIRTQUEUE) {
> - g_warning("VQ Index out of range: %d", idx);
> - vus_panic_cb(vu_dev, NULL);
> - return;
> - }
> -
> vq = vu_get_queue(vu_dev, idx);
>
> if (idx == 0 || idx == 1) {
> @@ -398,7 +391,11 @@ int main(int argc, char **argv)
> goto err;
> }
>
> - vug_init(&vdev_scsi->parent, csock, vus_panic_cb, &vus_iface);
> + if (!vug_init(&vdev_scsi->parent, VHOST_USER_SCSI_MAX_QUEUES, csock,
> + vus_panic_cb, &vus_iface)) {
> + g_printerr("Failed to initialize libvhost-user-glib\n");
> + goto err;
> + }
>
> g_main_loop_run(vdev_scsi->loop);
>
> diff --git a/tests/vhost-user-bridge.c b/tests/vhost-user-bridge.c
> index 0bb03af0e5..c4e350e1f5 100644
> --- a/tests/vhost-user-bridge.c
> +++ b/tests/vhost-user-bridge.c
> @@ -45,6 +45,10 @@
> } \
> } while (0)
>
> +enum {
> + VHOST_USER_BRIDGE_MAX_QUEUES = 8,
> +};
> +
> typedef void (*CallbackFunc)(int sock, void *ctx);
>
> typedef struct Event {
> @@ -512,12 +516,16 @@ vubr_accept_cb(int sock, void *ctx)
> }
> DPRINT("Got connection from remote peer on sock %d\n", conn_fd);
>
> - vu_init(&dev->vudev,
> - conn_fd,
> - vubr_panic,
> - vubr_set_watch,
> - vubr_remove_watch,
> - &vuiface);
> + if (!vu_init(&dev->vudev,
> + VHOST_USER_BRIDGE_MAX_QUEUES,
> + conn_fd,
> + vubr_panic,
> + vubr_set_watch,
> + vubr_remove_watch,
> + &vuiface)) {
> + fprintf(stderr, "Failed to initialize libvhost-user\n");
> + exit(1);
> + }
>
> dispatcher_add(&dev->dispatcher, conn_fd, ctx, vubr_receive_cb);
> dispatcher_remove(&dev->dispatcher, sock);
> @@ -560,12 +568,18 @@ vubr_new(const char *path, bool client)
> if (connect(dev->sock, (struct sockaddr *)&un, len) == -1) {
> vubr_die("connect");
> }
> - vu_init(&dev->vudev,
> - dev->sock,
> - vubr_panic,
> - vubr_set_watch,
> - vubr_remove_watch,
> - &vuiface);
> +
> + if (!vu_init(&dev->vudev,
> + VHOST_USER_BRIDGE_MAX_QUEUES,
> + dev->sock,
> + vubr_panic,
> + vubr_set_watch,
> + vubr_remove_watch,
> + &vuiface)) {
> + fprintf(stderr, "Failed to initialize libvhost-user\n");
> + exit(1);
> + }
> +
> cb = vubr_receive_cb;
> }
>
> @@ -584,7 +598,7 @@ static void *notifier_thread(void *arg)
> int qidx;
>
> while (true) {
> - for (qidx = 0; qidx < VHOST_MAX_NR_VIRTQUEUE; qidx++) {
> + for (qidx = 0; qidx < VHOST_USER_BRIDGE_MAX_QUEUES; qidx++) {
> uint16_t *n = vubr->notifier.addr + pagesize * qidx;
>
> if (*n == qidx) {
> @@ -616,7 +630,7 @@ vubr_host_notifier_setup(VubrDev *dev)
> void *addr;
> int fd;
>
> - length = getpagesize() * VHOST_MAX_NR_VIRTQUEUE;
> + length = getpagesize() * VHOST_USER_BRIDGE_MAX_QUEUES;
>
> fd = mkstemp(template);
> if (fd < 0) {
> --
> 2.21.0
>
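The dynamic virtqueue allocation introduced by vu_init()/vu_deinit() above can be sketched in isolation. The type and function names below are simplified stand-ins, not the real libvhost-user API; only the fields the patch initializes are reproduced:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-in for VuVirtq: just the fields vu_init() touches */
typedef struct {
    int call_fd, kick_fd, err_fd;
    bool notification;
} Virtq;

typedef struct {
    uint16_t max_queues;
    Virtq *vq;
} Dev;

/* Allocate and initialize the virtqueue array, as vu_init() does above */
static bool dev_init(Dev *dev, uint16_t max_queues)
{
    assert(max_queues > 0);

    dev->vq = malloc(max_queues * sizeof(dev->vq[0]));
    if (!dev->vq) {
        return false;
    }

    for (uint16_t i = 0; i < max_queues; i++) {
        /* compound-literal assignment, mirroring the patch */
        dev->vq[i] = (Virtq) {
            .call_fd = -1, .kick_fd = -1, .err_fd = -1,
            .notification = true,
        };
    }
    dev->max_queues = max_queues;
    return true;
}

/* Release the array, as vu_deinit() does above */
static void dev_deinit(Dev *dev)
{
    free(dev->vq);
    dev->vq = NULL;
}
```

Because the array length now lives in dev->max_queues, every loop and bounds check that previously compared against the VHOST_MAX_NR_VIRTQUEUE macro compares against the per-device value instead.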
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Qemu-devel] [PATCH 1/4] libvhost-user: add vmsg_set_reply_u64() helper
2019-06-21 9:40 ` [Qemu-devel] [PATCH 1/4] libvhost-user: add vmsg_set_reply_u64() helper Stefan Hajnoczi
@ 2019-06-21 13:48 ` Marc-André Lureau
0 siblings, 0 replies; 11+ messages in thread
From: Marc-André Lureau @ 2019-06-21 13:48 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Sebastien Boeuf, Michael S. Tsirkin, qemu-devel,
Dr. David Alan Gilbert, Gerd Hoffmann
On Fri, Jun 21, 2019 at 11:40 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> The VhostUserMsg request is reused as the reply by message processing
> functions. This is risky since request fields may corrupt the reply if
> the vhost-user message handler function forgets to re-initialize them.
>
> Changing this practice would be very invasive but we can introduce a
> helper function to make u64 payload replies safe. This also eliminates
> code duplication in message processing functions.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
> contrib/libvhost-user/libvhost-user.c | 26 +++++++++++++-------------
> 1 file changed, 13 insertions(+), 13 deletions(-)
>
> diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
> index 443b7e08c3..a8657c7af2 100644
> --- a/contrib/libvhost-user/libvhost-user.c
> +++ b/contrib/libvhost-user/libvhost-user.c
> @@ -216,6 +216,15 @@ vmsg_close_fds(VhostUserMsg *vmsg)
> }
> }
>
> +/* Set reply payload.u64 and clear request flags and fd_num */
> +static void vmsg_set_reply_u64(VhostUserMsg *vmsg, uint64_t val)
> +{
> + vmsg->flags = 0; /* defaults will be set by vu_send_reply() */
> + vmsg->size = sizeof(vmsg->payload.u64);
> + vmsg->payload.u64 = val;
> + vmsg->fd_num = 0;
> +}
> +
> /* A test to see if we have userfault available */
> static bool
> have_userfault(void)
> @@ -1168,10 +1177,7 @@ vu_get_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
> features |= dev->iface->get_protocol_features(dev);
> }
>
> - vmsg->payload.u64 = features;
> - vmsg->size = sizeof(vmsg->payload.u64);
> - vmsg->fd_num = 0;
> -
> + vmsg_set_reply_u64(vmsg, features);
> return true;
> }
>
> @@ -1307,17 +1313,14 @@ out:
> static bool
> vu_set_postcopy_listen(VuDev *dev, VhostUserMsg *vmsg)
> {
> - vmsg->payload.u64 = -1;
> - vmsg->size = sizeof(vmsg->payload.u64);
> -
> if (dev->nregions) {
> vu_panic(dev, "Regions already registered at postcopy-listen");
> + vmsg_set_reply_u64(vmsg, -1);
> return true;
> }
> dev->postcopy_listening = true;
>
> - vmsg->flags = VHOST_USER_VERSION | VHOST_USER_REPLY_MASK;
> - vmsg->payload.u64 = 0; /* Success */
> + vmsg_set_reply_u64(vmsg, 0);
> return true;
> }
>
> @@ -1332,10 +1335,7 @@ vu_set_postcopy_end(VuDev *dev, VhostUserMsg *vmsg)
> DPRINT("%s: Done close\n", __func__);
> }
>
> - vmsg->fd_num = 0;
> - vmsg->payload.u64 = 0;
> - vmsg->size = sizeof(vmsg->payload.u64);
> - vmsg->flags = VHOST_USER_VERSION | VHOST_USER_REPLY_MASK;
> + vmsg_set_reply_u64(vmsg, 0);
> DPRINT("%s: exit\n", __func__);
> return true;
> }
> --
> 2.21.0
>
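The reply-building logic this patch factors out can be sketched with a reduced message struct. The struct below is a simplified stand-in for VhostUserMsg, keeping only the fields the helper touches; the point is that flags and fd_num are reset, so stale request state cannot leak into the reply:

```c
#include <assert.h>
#include <stdint.h>

/* Reduced VhostUserMsg: only the fields vmsg_set_reply_u64() writes */
typedef struct {
    uint32_t flags;
    uint32_t size;
    union { uint64_t u64; } payload;
    int fd_num;
} Msg;

/* Same logic as the helper in the patch: set the u64 payload and clear
 * request flags and fd_num so the reply starts from a clean state. */
static void msg_set_reply_u64(Msg *vmsg, uint64_t val)
{
    vmsg->flags = 0;          /* defaults filled in by the send path */
    vmsg->size = sizeof(vmsg->payload.u64);
    vmsg->payload.u64 = val;
    vmsg->fd_num = 0;
}
```

Callers such as vu_get_protocol_features_exec() then collapse to a single msg_set_reply_u64()-style call instead of four hand-written assignments.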
* Re: [Qemu-devel] [PATCH 4/4] docs: avoid vhost-user-net specifics in multiqueue section
2019-06-21 9:40 ` [Qemu-devel] [PATCH 4/4] docs: avoid vhost-user-net specifics in multiqueue section Stefan Hajnoczi
@ 2019-06-21 13:52 ` Marc-André Lureau
0 siblings, 0 replies; 11+ messages in thread
From: Marc-André Lureau @ 2019-06-21 13:52 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Sebastien Boeuf, Michael S. Tsirkin, qemu-devel,
Dr. David Alan Gilbert, Gerd Hoffmann
On Fri, Jun 21, 2019 at 11:41 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> The "Multiple queue support" section makes references to vhost-user-net
> "queue pairs". This is confusing for two reasons:
> 1. This actually applies to all device types, not just vhost-user-net.
> 2. VHOST_USER_GET_QUEUE_NUM returns the number of virtqueues, not the
> number of queue pairs.
>
> Reword the section so that the vhost-user-net specific part is relegated
> to the very end: we acknowledge that vhost-user-net historically
> automatically enabled the first queue pair.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
> docs/interop/vhost-user.rst | 21 +++++++++++----------
> 1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
> index dc0ff9211f..5750668aba 100644
> --- a/docs/interop/vhost-user.rst
> +++ b/docs/interop/vhost-user.rst
> @@ -324,19 +324,20 @@ must support changing some configuration aspects on the fly.
> Multiple queue support
> ----------------------
>
> -Multiple queue is treated as a protocol extension, hence the slave has
> -to implement protocol features first. The multiple queues feature is
> -supported only when the protocol feature ``VHOST_USER_PROTOCOL_F_MQ``
> -(bit 0) is set.
> +Multiple queue support allows the slave to advertise the maximum number of
> +queues. This is treated as a protocol extension, hence the slave has to
> +implement protocol features first. The multiple queues feature is supported
> +only when the protocol feature ``VHOST_USER_PROTOCOL_F_MQ`` (bit 0) is set.
>
> -The max number of queue pairs the slave supports can be queried with
> -message ``VHOST_USER_GET_QUEUE_NUM``. Master should stop when the
> -number of requested queues is bigger than that.
> +The max number of queues the slave supports can be queried with message
> +``VHOST_USER_GET_QUEUE_NUM``. Master should stop when the number of requested
> +queues is bigger than that.
>
> As all queues share one connection, the master uses a unique index for each
> -queue in the sent message to identify a specified queue. One queue pair
> -is enabled initially. More queues are enabled dynamically, by sending
> -message ``VHOST_USER_SET_VRING_ENABLE``.
> +queue in the sent message to identify a specified queue.
> +
> +The master enables queues by sending message ``VHOST_USER_SET_VRING_ENABLE``.
> +vhost-user-net has historically automatically enabled the first queue pair.
>
> Migration
> ---------
> --
> 2.21.0
>
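The master-side rule in the reworded section ("Master should stop when the number of requested queues is bigger than that") amounts to a bound check against the VHOST_USER_GET_QUEUE_NUM result. A hypothetical sketch, with made-up names, of that validation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical master-side check: slave_max is the value returned by
 * VHOST_USER_GET_QUEUE_NUM, which per this patch is a count of
 * virtqueues, not of queue pairs. */
static bool mq_request_ok(uint16_t requested_queues, uint16_t slave_max)
{
    return requested_queues <= slave_max;
}
```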
* Re: [Qemu-devel] [PATCH 2/4] libvhost-user: support many virtqueues
2019-06-21 13:48 ` Marc-André Lureau
@ 2019-06-21 16:27 ` Stefan Hajnoczi
0 siblings, 0 replies; 11+ messages in thread
From: Stefan Hajnoczi @ 2019-06-21 16:27 UTC (permalink / raw)
To: Marc-André Lureau
Cc: Sebastien Boeuf, Michael S. Tsirkin, qemu-devel,
Dr. David Alan Gilbert, Gerd Hoffmann
On Fri, Jun 21, 2019 at 03:48:36PM +0200, Marc-André Lureau wrote:
> On Fri, Jun 21, 2019 at 11:40 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > diff --git a/contrib/vhost-user-blk/vhost-user-blk.c b/contrib/vhost-user-blk/vhost-user-blk.c
> > index 86a3987744..ae61034656 100644
> > --- a/contrib/vhost-user-blk/vhost-user-blk.c
> > +++ b/contrib/vhost-user-blk/vhost-user-blk.c
> > @@ -25,6 +25,10 @@
> > #include <sys/ioctl.h>
> > #endif
> >
> > +enum {
> > + VHOST_USER_BLK_MAX_QUEUES = 8,
> > +};
>
> why do you use an enum (and not const int)? (similarly for other devices)
>
> other than that,
> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
This is how I was taught when I was a little boy.
With an actual variable there's a risk that the compiler reserves space
for a variable when you actually just need a constant. Whether modern
compilers do that or not, I don't know.
The type is clearer when a variable is used instead of an enum.
Pros and cons...
Stefan
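One concrete difference behind the enum-vs-const-int question: in C (unlike C++), a const-qualified int is a read-only object, not an integer constant expression, so it cannot appear where a constant expression is required, while an enumerator can. A small illustration with made-up names:

```c
#include <assert.h>

enum { MAX_QUEUES_ENUM = 8 };          /* integer constant expression */
static const int max_queues_const = 8; /* read-only object, not a constant */

/* A file-scope array size must be a constant expression in C, so only
 * the enumerator works here; sizing this array with max_queues_const
 * would not compile at file scope. */
static int queue_state[MAX_QUEUES_ENUM];
```

The enumerator also reserves no storage, which matches the "risk that the compiler reserves space" concern above; the trade-off is that an enumerator's type is int rather than an explicitly chosen width.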
* Re: [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support
2019-06-21 9:40 [Qemu-devel] [PATCH 0/4] libvhost-user: VHOST_USER_PROTOCOL_F_MQ support Stefan Hajnoczi
` (3 preceding siblings ...)
2019-06-21 9:40 ` [Qemu-devel] [PATCH 4/4] docs: avoid vhost-user-net specifics in multiqueue section Stefan Hajnoczi
@ 2019-07-03 9:20 ` Stefan Hajnoczi
4 siblings, 0 replies; 11+ messages in thread
From: Stefan Hajnoczi @ 2019-07-03 9:20 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Marc-André Lureau, Sebastien Boeuf, Gerd Hoffmann,
qemu-devel, Dr. David Alan Gilbert
On Fri, Jun 21, 2019 at 10:40:01AM +0100, Stefan Hajnoczi wrote:
> Sebastien Boeuf <sebastien.boeuf@intel.com> pointed out that libvhost-user
> doesn't advertise VHOST_USER_PROTOCOL_F_MQ. Today this prevents vhost-user-net
> multiqueue from working.
>
> In virtio-fs we also want to support multiqueue so I'm sending patches to add
> this. It's free to advertise VHOST_USER_PROTOCOL_F_MQ for all devices so we
> can do it unconditionally in libvhost-user.
>
> Several related improvements are included:
> Patch 1 - clean up duplicated and risky VhostUserMsg reply building code
> Patch 2 - remove hardcoded 8 virtqueue limit in libvhost-user
> Patch 4 - clarify vhost-user multiqueue specification
>
> Stefan Hajnoczi (4):
> libvhost-user: add vmsg_set_reply_u64() helper
> libvhost-user: support many virtqueues
> libvhost-user: implement VHOST_USER_PROTOCOL_F_MQ
> docs: avoid vhost-user-net specifics in multiqueue section
>
> contrib/libvhost-user/libvhost-user-glib.h | 2 +-
> contrib/libvhost-user/libvhost-user.h | 10 +++-
> contrib/libvhost-user/libvhost-user-glib.c | 12 +++-
> contrib/libvhost-user/libvhost-user.c | 65 +++++++++++++---------
> contrib/vhost-user-blk/vhost-user-blk.c | 16 +++---
> contrib/vhost-user-gpu/main.c | 9 ++-
> contrib/vhost-user-input/main.c | 11 +++-
> contrib/vhost-user-scsi/vhost-user-scsi.c | 21 +++----
> tests/vhost-user-bridge.c | 42 +++++++++-----
> docs/interop/vhost-user.rst | 21 +++----
> 10 files changed, 132 insertions(+), 77 deletions(-)
>
> --
> 2.21.0
Ping?