* [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental)
@ 2019-08-16 14:33 Dr. David Alan Gilbert (git)
2019-08-16 14:33 ` [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device Dr. David Alan Gilbert (git)
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-08-16 14:33 UTC (permalink / raw)
To: qemu-devel, mst; +Cc: vgoyal, stefanha
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Hi,
This pair of patches adds the core of the virtio-fs support to qemu;
it's marked experimental since the kernel patch and spec changes aren't
in yet, but they're bubbling along.
While the spec change is still in progress, the ID number is already
reserved.
A future set of patches will add the optional DAX mapping support.
The actual qemu change is pretty minimal, since it's really only
a virtio device with some queues.
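For anyone trying the dev tree, the new configure switch from patch 1 can
be toggled explicitly; it defaults to following the vhost-user setting. A
build sketch (paths and job count are placeholders, not part of the series):

```shell
# Hypothetical build sketch: vhost-user-fs inherits from --enable-vhost-user
# by default, so the second flag is only needed to force it on or off.
./configure --enable-vhost-user --enable-vhost-user-fs
make -j4
```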
Some links:
Mailing list: https://www.redhat.com/mailman/listinfo/virtio-fs
Dev tree (including filesystem daemon): https://gitlab.com/virtio-fs/qemu
kernel: https://gitlab.com/virtio-fs/linux
virtio spec changes: https://lists.oasis-open.org/archives/virtio-dev/201908/msg00056.html
Dr. David Alan Gilbert (2):
virtio: add vhost-user-fs base device
virtio: add vhost-user-fs-pci device
configure | 13 +
hw/virtio/Makefile.objs | 2 +
hw/virtio/vhost-user-fs-pci.c | 79 ++++++
hw/virtio/vhost-user-fs.c | 297 ++++++++++++++++++++
include/hw/virtio/vhost-user-fs.h | 45 +++
include/standard-headers/linux/virtio_fs.h | 41 +++
include/standard-headers/linux/virtio_ids.h | 1 +
7 files changed, 478 insertions(+)
create mode 100644 hw/virtio/vhost-user-fs-pci.c
create mode 100644 hw/virtio/vhost-user-fs.c
create mode 100644 include/hw/virtio/vhost-user-fs.h
create mode 100644 include/standard-headers/linux/virtio_fs.h
--
2.21.0
* [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-16 14:33 [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental) Dr. David Alan Gilbert (git)
@ 2019-08-16 14:33 ` Dr. David Alan Gilbert (git)
2019-08-18 11:08 ` Michael S. Tsirkin
2019-08-16 14:33 ` [Qemu-devel] [PATCH 2/2] virtio: add vhost-user-fs-pci device Dr. David Alan Gilbert (git)
2019-08-16 18:38 ` [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental) no-reply
2 siblings, 1 reply; 13+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-08-16 14:33 UTC (permalink / raw)
To: qemu-devel, mst; +Cc: vgoyal, stefanha
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
The virtio-fs virtio device provides shared file system access using
the FUSE protocol carried over virtio.
The actual file server is implemented in an external vhost-user-fs device
backend process.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
configure | 13 +
hw/virtio/Makefile.objs | 1 +
hw/virtio/vhost-user-fs.c | 297 ++++++++++++++++++++
include/hw/virtio/vhost-user-fs.h | 45 +++
include/standard-headers/linux/virtio_fs.h | 41 +++
include/standard-headers/linux/virtio_ids.h | 1 +
6 files changed, 398 insertions(+)
create mode 100644 hw/virtio/vhost-user-fs.c
create mode 100644 include/hw/virtio/vhost-user-fs.h
create mode 100644 include/standard-headers/linux/virtio_fs.h
diff --git a/configure b/configure
index 714e7fb6a1..e7e33ee783 100755
--- a/configure
+++ b/configure
@@ -382,6 +382,7 @@ vhost_crypto=""
vhost_scsi=""
vhost_vsock=""
vhost_user=""
+vhost_user_fs=""
kvm="no"
hax="no"
hvf="no"
@@ -1316,6 +1317,10 @@ for opt do
;;
--enable-vhost-vsock) vhost_vsock="yes"
;;
+ --disable-vhost-user-fs) vhost_user_fs="no"
+ ;;
+ --enable-vhost-user-fs) vhost_user_fs="yes"
+ ;;
--disable-opengl) opengl="no"
;;
--enable-opengl) opengl="yes"
@@ -2269,6 +2274,10 @@ test "$vhost_crypto" = "" && vhost_crypto=$vhost_user
if test "$vhost_crypto" = "yes" && test "$vhost_user" = "no"; then
error_exit "--enable-vhost-crypto requires --enable-vhost-user"
fi
+test "$vhost_user_fs" = "" && vhost_user_fs=$vhost_user
+if test "$vhost_user_fs" = "yes" && test "$vhost_user" = "no"; then
+ error_exit "--enable-vhost-user-fs requires --enable-vhost-user"
+fi
# OR the vhost-kernel and vhost-user values for simplicity
if test "$vhost_net" = ""; then
@@ -6425,6 +6434,7 @@ echo "vhost-crypto support $vhost_crypto"
echo "vhost-scsi support $vhost_scsi"
echo "vhost-vsock support $vhost_vsock"
echo "vhost-user support $vhost_user"
+echo "vhost-user-fs support $vhost_user_fs"
echo "Trace backends $trace_backends"
if have_backend "simple"; then
echo "Trace output file $trace_file-<pid>"
@@ -6921,6 +6931,9 @@ fi
if test "$vhost_user" = "yes" ; then
echo "CONFIG_VHOST_USER=y" >> $config_host_mak
fi
+if test "$vhost_user_fs" = "yes" ; then
+ echo "CONFIG_VHOST_USER_FS=y" >> $config_host_mak
+fi
if test "$blobs" = "yes" ; then
echo "INSTALL_BLOBS=yes" >> $config_host_mak
fi
diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index 964ce78607..47ffbf22c4 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -11,6 +11,7 @@ common-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
common-obj-$(CONFIG_VIRTIO_MMIO) += virtio-mmio.o
obj-$(CONFIG_VIRTIO_BALLOON) += virtio-balloon.o
obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
+obj-$(CONFIG_VHOST_USER_FS) += vhost-user-fs.o
obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
obj-$(CONFIG_VIRTIO_PMEM) += virtio-pmem.o
common-obj-$(call land,$(CONFIG_VIRTIO_PMEM),$(CONFIG_VIRTIO_PCI)) += virtio-pmem-pci.o
diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
new file mode 100644
index 0000000000..2753c2c07a
--- /dev/null
+++ b/hw/virtio/vhost-user-fs.c
@@ -0,0 +1,297 @@
+/*
+ * Vhost-user filesystem virtio device
+ *
+ * Copyright 2018 Red Hat, Inc.
+ *
+ * Authors:
+ * Stefan Hajnoczi <stefanha@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * (at your option) any later version. See the COPYING file in the
+ * top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include <sys/ioctl.h>
+#include "standard-headers/linux/virtio_fs.h"
+#include "qapi/error.h"
+#include "hw/virtio/virtio-bus.h"
+#include "hw/virtio/virtio-access.h"
+#include "qemu/error-report.h"
+#include "hw/virtio/vhost-user-fs.h"
+#include "monitor/monitor.h"
+
+static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
+{
+ VHostUserFS *fs = VHOST_USER_FS(vdev);
+ struct virtio_fs_config fscfg = {};
+
+ memcpy((char *)fscfg.tag, fs->conf.tag,
+ MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag)));
+
+ virtio_stl_p(vdev, &fscfg.num_queues, fs->conf.num_queues);
+
+ memcpy(config, &fscfg, sizeof(fscfg));
+}
+
+static void vuf_start(VirtIODevice *vdev)
+{
+ VHostUserFS *fs = VHOST_USER_FS(vdev);
+ BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+ VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+ int ret;
+ int i;
+
+ if (!k->set_guest_notifiers) {
+ error_report("binding does not support guest notifiers");
+ return;
+ }
+
+ ret = vhost_dev_enable_notifiers(&fs->vhost_dev, vdev);
+ if (ret < 0) {
+ error_report("Error enabling host notifiers: %d", -ret);
+ return;
+ }
+
+ ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, true);
+ if (ret < 0) {
+ error_report("Error binding guest notifier: %d", -ret);
+ goto err_host_notifiers;
+ }
+
+ fs->vhost_dev.acked_features = vdev->guest_features;
+ ret = vhost_dev_start(&fs->vhost_dev, vdev);
+ if (ret < 0) {
+ error_report("Error starting vhost: %d", -ret);
+ goto err_guest_notifiers;
+ }
+
+ /*
+ * guest_notifier_mask/pending not used yet, so just unmask
+ * everything here. virtio-pci will do the right thing by
+ * enabling/disabling irqfd.
+ */
+ for (i = 0; i < fs->vhost_dev.nvqs; i++) {
+ vhost_virtqueue_mask(&fs->vhost_dev, vdev, i, false);
+ }
+
+ return;
+
+err_guest_notifiers:
+ k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
+err_host_notifiers:
+ vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
+}
+
+static void vuf_stop(VirtIODevice *vdev)
+{
+ VHostUserFS *fs = VHOST_USER_FS(vdev);
+ BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+ VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+ int ret;
+
+ if (!k->set_guest_notifiers) {
+ return;
+ }
+
+ vhost_dev_stop(&fs->vhost_dev, vdev);
+
+ ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
+ if (ret < 0) {
+ error_report("vhost guest notifier cleanup failed: %d", ret);
+ return;
+ }
+
+ vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
+}
+
+static void vuf_set_status(VirtIODevice *vdev, uint8_t status)
+{
+ VHostUserFS *fs = VHOST_USER_FS(vdev);
+ bool should_start = status & VIRTIO_CONFIG_S_DRIVER_OK;
+
+ if (!vdev->vm_running) {
+ should_start = false;
+ }
+
+ if (fs->vhost_dev.started == should_start) {
+ return;
+ }
+
+ if (should_start) {
+ vuf_start(vdev);
+ } else {
+ vuf_stop(vdev);
+ }
+}
+
+static uint64_t vuf_get_features(VirtIODevice *vdev,
+ uint64_t requested_features,
+ Error **errp)
+{
+ /* No feature bits used yet */
+ return requested_features;
+}
+
+static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+ /* Do nothing */
+}
+
+static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
+ bool mask)
+{
+ VHostUserFS *fs = VHOST_USER_FS(vdev);
+
+ vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
+}
+
+static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
+{
+ VHostUserFS *fs = VHOST_USER_FS(vdev);
+
+ return vhost_virtqueue_pending(&fs->vhost_dev, idx);
+}
+
+static void vuf_device_realize(DeviceState *dev, Error **errp)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+ VHostUserFS *fs = VHOST_USER_FS(dev);
+ unsigned int i;
+ size_t len;
+ int ret;
+
+ if (!fs->conf.chardev.chr) {
+ error_setg(errp, "missing chardev");
+ return;
+ }
+
+ if (!fs->conf.tag) {
+ error_setg(errp, "missing tag property");
+ return;
+ }
+ len = strlen(fs->conf.tag);
+ if (len == 0) {
+ error_setg(errp, "tag property cannot be empty");
+ return;
+ }
+ if (len > sizeof_field(struct virtio_fs_config, tag)) {
+ error_setg(errp, "tag property must be %zu bytes or less",
+ sizeof_field(struct virtio_fs_config, tag));
+ return;
+ }
+
+ if (fs->conf.num_queues == 0) {
+ error_setg(errp, "num-queues property must be larger than 0");
+ return;
+ }
+
+ if (!is_power_of_2(fs->conf.queue_size)) {
+ error_setg(errp, "queue-size property must be a power of 2");
+ return;
+ }
+
+ if (fs->conf.queue_size > VIRTQUEUE_MAX_SIZE) {
+ error_setg(errp, "queue-size property must be %u or smaller",
+ VIRTQUEUE_MAX_SIZE);
+ return;
+ }
+
+ if (!vhost_user_init(&fs->vhost_user, &fs->conf.chardev, errp)) {
+ return;
+ }
+
+ virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
+ sizeof(struct virtio_fs_config));
+
+ /* Notifications queue */
+ virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
+
+ /* Hiprio queue */
+ virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
+
+ /* Request queues */
+ for (i = 0; i < fs->conf.num_queues; i++) {
+ virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
+ }
+
+ /* 1 high prio queue, plus the number configured */
+ fs->vhost_dev.nvqs = 1 + fs->conf.num_queues;
+ fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
+ ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
+ VHOST_BACKEND_TYPE_USER, 0);
+ if (ret < 0) {
+ error_setg_errno(errp, -ret, "vhost_dev_init failed");
+ goto err_virtio;
+ }
+
+ return;
+
+err_virtio:
+ vhost_user_cleanup(&fs->vhost_user);
+ virtio_cleanup(vdev);
+ g_free(fs->vhost_dev.vqs);
+ return;
+}
+
+static void vuf_device_unrealize(DeviceState *dev, Error **errp)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+ VHostUserFS *fs = VHOST_USER_FS(dev);
+
+ /* This will stop vhost backend if appropriate. */
+ vuf_set_status(vdev, 0);
+
+ vhost_dev_cleanup(&fs->vhost_dev);
+
+ vhost_user_cleanup(&fs->vhost_user);
+
+ virtio_cleanup(vdev);
+ g_free(fs->vhost_dev.vqs);
+ fs->vhost_dev.vqs = NULL;
+}
+
+static const VMStateDescription vuf_vmstate = {
+ .name = "vhost-user-fs",
+ .unmigratable = 1,
+};
+
+static Property vuf_properties[] = {
+ DEFINE_PROP_CHR("chardev", VHostUserFS, conf.chardev),
+ DEFINE_PROP_STRING("tag", VHostUserFS, conf.tag),
+ DEFINE_PROP_UINT16("num-queues", VHostUserFS, conf.num_queues, 1),
+ DEFINE_PROP_UINT16("queue-size", VHostUserFS, conf.queue_size, 128),
+ DEFINE_PROP_STRING("vhostfd", VHostUserFS, conf.vhostfd),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void vuf_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+
+ dc->props = vuf_properties;
+ dc->vmsd = &vuf_vmstate;
+ set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
+ vdc->realize = vuf_device_realize;
+ vdc->unrealize = vuf_device_unrealize;
+ vdc->get_features = vuf_get_features;
+ vdc->get_config = vuf_get_config;
+ vdc->set_status = vuf_set_status;
+ vdc->guest_notifier_mask = vuf_guest_notifier_mask;
+ vdc->guest_notifier_pending = vuf_guest_notifier_pending;
+}
+
+static const TypeInfo vuf_info = {
+ .name = TYPE_VHOST_USER_FS,
+ .parent = TYPE_VIRTIO_DEVICE,
+ .instance_size = sizeof(VHostUserFS),
+ .class_init = vuf_class_init,
+};
+
+static void vuf_register_types(void)
+{
+ type_register_static(&vuf_info);
+}
+
+type_init(vuf_register_types)
diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-user-fs.h
new file mode 100644
index 0000000000..d07ab134b9
--- /dev/null
+++ b/include/hw/virtio/vhost-user-fs.h
@@ -0,0 +1,45 @@
+/*
+ * Vhost-user filesystem virtio device
+ *
+ * Copyright 2018 Red Hat, Inc.
+ *
+ * Authors:
+ * Stefan Hajnoczi <stefanha@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * (at your option) any later version. See the COPYING file in the
+ * top-level directory.
+ */
+
+#ifndef _QEMU_VHOST_USER_FS_H
+#define _QEMU_VHOST_USER_FS_H
+
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/vhost.h"
+#include "hw/virtio/vhost-user.h"
+#include "chardev/char-fe.h"
+
+#define TYPE_VHOST_USER_FS "x-vhost-user-fs-device"
+#define VHOST_USER_FS(obj) \
+ OBJECT_CHECK(VHostUserFS, (obj), TYPE_VHOST_USER_FS)
+
+typedef struct {
+ CharBackend chardev;
+ char *tag;
+ uint16_t num_queues;
+ uint16_t queue_size;
+ char *vhostfd;
+} VHostUserFSConf;
+
+typedef struct {
+ /*< private >*/
+ VirtIODevice parent;
+ VHostUserFSConf conf;
+ struct vhost_virtqueue *vhost_vqs;
+ struct vhost_dev vhost_dev;
+ VhostUserState vhost_user;
+
+ /*< public >*/
+} VHostUserFS;
+
+#endif /* _QEMU_VHOST_USER_FS_H */
diff --git a/include/standard-headers/linux/virtio_fs.h b/include/standard-headers/linux/virtio_fs.h
new file mode 100644
index 0000000000..4f811a0b70
--- /dev/null
+++ b/include/standard-headers/linux/virtio_fs.h
@@ -0,0 +1,41 @@
+#ifndef _LINUX_VIRTIO_FS_H
+#define _LINUX_VIRTIO_FS_H
+/* This header is BSD licensed so anyone can use the definitions to implement
+ * compatible drivers/servers.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of IBM nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE. */
+#include "standard-headers/linux/types.h"
+#include "standard-headers/linux/virtio_ids.h"
+#include "standard-headers/linux/virtio_config.h"
+#include "standard-headers/linux/virtio_types.h"
+
+struct virtio_fs_config {
+ /* Filesystem name (UTF-8, not NUL-terminated, padded with NULs) */
+ uint8_t tag[36];
+
+ /* Number of request queues */
+ uint32_t num_queues;
+} QEMU_PACKED;
+
+#endif /* _LINUX_VIRTIO_FS_H */
diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
index 32b2f94d1f..73fc004807 100644
--- a/include/standard-headers/linux/virtio_ids.h
+++ b/include/standard-headers/linux/virtio_ids.h
@@ -43,6 +43,7 @@
#define VIRTIO_ID_INPUT 18 /* virtio input */
#define VIRTIO_ID_VSOCK 19 /* virtio vsock transport */
#define VIRTIO_ID_CRYPTO 20 /* virtio crypto */
+#define VIRTIO_ID_FS 26 /* virtio filesystem */
#define VIRTIO_ID_PMEM 27 /* virtio pmem */
#endif /* _LINUX_VIRTIO_IDS_H */
--
2.21.0
* [Qemu-devel] [PATCH 2/2] virtio: add vhost-user-fs-pci device
2019-08-16 14:33 [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental) Dr. David Alan Gilbert (git)
2019-08-16 14:33 ` [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device Dr. David Alan Gilbert (git)
@ 2019-08-16 14:33 ` Dr. David Alan Gilbert (git)
2019-08-16 18:38 ` [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental) no-reply
2 siblings, 0 replies; 13+ messages in thread
From: Dr. David Alan Gilbert (git) @ 2019-08-16 14:33 UTC (permalink / raw)
To: qemu-devel, mst; +Cc: vgoyal, stefanha
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Add the PCI version of vhost-user-fs.
Launch QEMU like this:
qemu -chardev socket,path=/tmp/vhost-fs.sock,id=chr0 \
     -device x-vhost-user-fs-pci,tag=myfs,chardev=chr0
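A fuller sketch of an invocation (illustrative only; the socket path and
sizes are placeholders): vhost-user devices generally need guest RAM backed
by a shared memory object so the external daemon can map it, so a shared
memory backend is usually required alongside the two options above.

```shell
# Illustrative only: the memory-backend/numa options are the usual
# vhost-user shared-memory setup, not something this series adds.
qemu-system-x86_64 \
    -chardev socket,path=/tmp/vhost-fs.sock,id=chr0 \
    -device x-vhost-user-fs-pci,tag=myfs,chardev=chr0 \
    -object memory-backend-memfd,id=mem,size=4G,share=on \
    -numa node,memdev=mem
```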
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
hw/virtio/Makefile.objs | 1 +
hw/virtio/vhost-user-fs-pci.c | 79 +++++++++++++++++++++++++++++++++++
2 files changed, 80 insertions(+)
create mode 100644 hw/virtio/vhost-user-fs-pci.c
diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index 47ffbf22c4..e2f70fbb89 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -15,6 +15,7 @@ obj-$(CONFIG_VHOST_USER_FS) += vhost-user-fs.o
obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
obj-$(CONFIG_VIRTIO_PMEM) += virtio-pmem.o
common-obj-$(call land,$(CONFIG_VIRTIO_PMEM),$(CONFIG_VIRTIO_PCI)) += virtio-pmem-pci.o
+obj-$(call land,$(CONFIG_VHOST_USER_FS),$(CONFIG_VIRTIO_PCI)) += vhost-user-fs-pci.o
obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
ifeq ($(CONFIG_VIRTIO_PCI),y)
diff --git a/hw/virtio/vhost-user-fs-pci.c b/hw/virtio/vhost-user-fs-pci.c
new file mode 100644
index 0000000000..07e295fd44
--- /dev/null
+++ b/hw/virtio/vhost-user-fs-pci.c
@@ -0,0 +1,79 @@
+/*
+ * Vhost-user filesystem virtio device PCI glue
+ *
+ * Copyright 2018-2019 Red Hat, Inc.
+ *
+ * Authors:
+ * Dr. David Alan Gilbert <dgilbert@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * (at your option) any later version. See the COPYING file in the
+ * top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/virtio/vhost-user-fs.h"
+#include "virtio-pci.h"
+
+struct VHostUserFSPCI {
+ VirtIOPCIProxy parent_obj;
+ VHostUserFS vdev;
+};
+
+typedef struct VHostUserFSPCI VHostUserFSPCI;
+
+#define TYPE_VHOST_USER_FS_PCI "vhost-user-fs-pci-base"
+
+#define VHOST_USER_FS_PCI(obj) \
+ OBJECT_CHECK(VHostUserFSPCI, (obj), TYPE_VHOST_USER_FS_PCI)
+
+static Property vhost_user_fs_pci_properties[] = {
+ DEFINE_PROP_UINT32("vectors", VirtIOPCIProxy, nvectors, 4),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void vhost_user_fs_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
+{
+ VHostUserFSPCI *dev = VHOST_USER_FS_PCI(vpci_dev);
+ DeviceState *vdev = DEVICE(&dev->vdev);
+
+ qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
+ object_property_set_bool(OBJECT(vdev), true, "realized", errp);
+}
+
+static void vhost_user_fs_pci_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
+ PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
+ k->realize = vhost_user_fs_pci_realize;
+ set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
+ dc->props = vhost_user_fs_pci_properties;
+ pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
+ pcidev_k->device_id = 0; /* Set by virtio-pci based on virtio id */
+ pcidev_k->revision = 0x00;
+ pcidev_k->class_id = PCI_CLASS_STORAGE_OTHER;
+}
+
+static void vhost_user_fs_pci_instance_init(Object *obj)
+{
+ VHostUserFSPCI *dev = VHOST_USER_FS_PCI(obj);
+
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+ TYPE_VHOST_USER_FS);
+}
+
+static const VirtioPCIDeviceTypeInfo vhost_user_fs_pci_info = {
+ .base_name = TYPE_VHOST_USER_FS_PCI,
+ .non_transitional_name = "x-vhost-user-fs-pci",
+ .instance_size = sizeof(VHostUserFSPCI),
+ .instance_init = vhost_user_fs_pci_instance_init,
+ .class_init = vhost_user_fs_pci_class_init,
+};
+
+static void vhost_user_fs_pci_register(void)
+{
+ virtio_pci_types_register(&vhost_user_fs_pci_info);
+}
+
+type_init(vhost_user_fs_pci_register);
--
2.21.0
* Re: [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental)
2019-08-16 14:33 [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental) Dr. David Alan Gilbert (git)
2019-08-16 14:33 ` [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device Dr. David Alan Gilbert (git)
2019-08-16 14:33 ` [Qemu-devel] [PATCH 2/2] virtio: add vhost-user-fs-pci device Dr. David Alan Gilbert (git)
@ 2019-08-16 18:38 ` no-reply
2 siblings, 0 replies; 13+ messages in thread
From: no-reply @ 2019-08-16 18:38 UTC (permalink / raw)
To: dgilbert; +Cc: stefanha, qemu-devel, vgoyal, mst
Patchew URL: https://patchew.org/QEMU/20190816143321.20903-1-dgilbert@redhat.com/
Hi,
This series failed build test on s390x host. Please find the details below.
=== TEST SCRIPT BEGIN ===
#!/bin/bash
# Testing script will be invoked under the git checkout with
# HEAD pointing to a commit that has the patches applied on top of "base"
# branch
set -e
echo
echo "=== ENV ==="
env
echo
echo "=== PACKAGES ==="
rpm -qa
echo
echo "=== UNAME ==="
uname -a
CC=$HOME/bin/cc
INSTALL=$PWD/install
BUILD=$PWD/build
mkdir -p $BUILD $INSTALL
SRC=$PWD
cd $BUILD
$SRC/configure --cc=$CC --prefix=$INSTALL
make -j4
# XXX: we need reliable clean up
# make check -j4 V=1
make install
=== TEST SCRIPT END ===
The full log is available at
http://patchew.org/logs/20190816143321.20903-1-dgilbert@redhat.com/testing.s390x/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-16 14:33 ` [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device Dr. David Alan Gilbert (git)
@ 2019-08-18 11:08 ` Michael S. Tsirkin
2019-08-20 12:24 ` Cornelia Huck
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Michael S. Tsirkin @ 2019-08-18 11:08 UTC (permalink / raw)
To: Dr. David Alan Gilbert (git); +Cc: qemu-devel, stefanha, vgoyal
On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
>
> The virtio-fs virtio device provides shared file system access using
> the FUSE protocol carried over virtio.
> The actual file server is implemented in an external vhost-user-fs device
> backend process.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
> configure | 13 +
> hw/virtio/Makefile.objs | 1 +
> hw/virtio/vhost-user-fs.c | 297 ++++++++++++++++++++
> include/hw/virtio/vhost-user-fs.h | 45 +++
> include/standard-headers/linux/virtio_fs.h | 41 +++
> include/standard-headers/linux/virtio_ids.h | 1 +
> 6 files changed, 398 insertions(+)
> create mode 100644 hw/virtio/vhost-user-fs.c
> create mode 100644 include/hw/virtio/vhost-user-fs.h
> create mode 100644 include/standard-headers/linux/virtio_fs.h
>
> diff --git a/configure b/configure
> index 714e7fb6a1..e7e33ee783 100755
> --- a/configure
> +++ b/configure
> @@ -382,6 +382,7 @@ vhost_crypto=""
> vhost_scsi=""
> vhost_vsock=""
> vhost_user=""
> +vhost_user_fs=""
> kvm="no"
> hax="no"
> hvf="no"
> @@ -1316,6 +1317,10 @@ for opt do
> ;;
> --enable-vhost-vsock) vhost_vsock="yes"
> ;;
> + --disable-vhost-user-fs) vhost_user_fs="no"
> + ;;
> + --enable-vhost-user-fs) vhost_user_fs="yes"
> + ;;
> --disable-opengl) opengl="no"
> ;;
> --enable-opengl) opengl="yes"
> @@ -2269,6 +2274,10 @@ test "$vhost_crypto" = "" && vhost_crypto=$vhost_user
> if test "$vhost_crypto" = "yes" && test "$vhost_user" = "no"; then
> error_exit "--enable-vhost-crypto requires --enable-vhost-user"
> fi
> +test "$vhost_user_fs" = "" && vhost_user_fs=$vhost_user
> +if test "$vhost_user_fs" = "yes" && test "$vhost_user" = "no"; then
> + error_exit "--enable-vhost-user-fs requires --enable-vhost-user"
> +fi
>
> # OR the vhost-kernel and vhost-user values for simplicity
> if test "$vhost_net" = ""; then
> @@ -6425,6 +6434,7 @@ echo "vhost-crypto support $vhost_crypto"
> echo "vhost-scsi support $vhost_scsi"
> echo "vhost-vsock support $vhost_vsock"
> echo "vhost-user support $vhost_user"
> +echo "vhost-user-fs support $vhost_user_fs"
> echo "Trace backends $trace_backends"
> if have_backend "simple"; then
> echo "Trace output file $trace_file-<pid>"
> @@ -6921,6 +6931,9 @@ fi
> if test "$vhost_user" = "yes" ; then
> echo "CONFIG_VHOST_USER=y" >> $config_host_mak
> fi
> +if test "$vhost_user_fs" = "yes" ; then
> + echo "CONFIG_VHOST_USER_FS=y" >> $config_host_mak
> +fi
> if test "$blobs" = "yes" ; then
> echo "INSTALL_BLOBS=yes" >> $config_host_mak
> fi
> diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> index 964ce78607..47ffbf22c4 100644
> --- a/hw/virtio/Makefile.objs
> +++ b/hw/virtio/Makefile.objs
> @@ -11,6 +11,7 @@ common-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
> common-obj-$(CONFIG_VIRTIO_MMIO) += virtio-mmio.o
> obj-$(CONFIG_VIRTIO_BALLOON) += virtio-balloon.o
> obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> +obj-$(CONFIG_VHOST_USER_FS) += vhost-user-fs.o
> obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
> obj-$(CONFIG_VIRTIO_PMEM) += virtio-pmem.o
> common-obj-$(call land,$(CONFIG_VIRTIO_PMEM),$(CONFIG_VIRTIO_PCI)) += virtio-pmem-pci.o
> diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> new file mode 100644
> index 0000000000..2753c2c07a
> --- /dev/null
> +++ b/hw/virtio/vhost-user-fs.c
> @@ -0,0 +1,297 @@
> +/*
> + * Vhost-user filesystem virtio device
> + *
> + * Copyright 2018 Red Hat, Inc.
> + *
> + * Authors:
> + * Stefan Hajnoczi <stefanha@redhat.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * (at your option) any later version. See the COPYING file in the
> + * top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include <sys/ioctl.h>
> +#include "standard-headers/linux/virtio_fs.h"
> +#include "qapi/error.h"
> +#include "hw/virtio/virtio-bus.h"
> +#include "hw/virtio/virtio-access.h"
> +#include "qemu/error-report.h"
> +#include "hw/virtio/vhost-user-fs.h"
> +#include "monitor/monitor.h"
> +
> +static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
> +{
> + VHostUserFS *fs = VHOST_USER_FS(vdev);
> + struct virtio_fs_config fscfg = {};
> +
> + memcpy((char *)fscfg.tag, fs->conf.tag,
> + MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag)));
> +
> + virtio_stl_p(vdev, &fscfg.num_queues, fs->conf.num_queues);
> +
> + memcpy(config, &fscfg, sizeof(fscfg));
> +}
> +
> +static void vuf_start(VirtIODevice *vdev)
> +{
> + VHostUserFS *fs = VHOST_USER_FS(vdev);
> + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> + int ret;
> + int i;
> +
> + if (!k->set_guest_notifiers) {
> + error_report("binding does not support guest notifiers");
> + return;
> + }
> +
> + ret = vhost_dev_enable_notifiers(&fs->vhost_dev, vdev);
> + if (ret < 0) {
> + error_report("Error enabling host notifiers: %d", -ret);
> + return;
> + }
> +
> + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, true);
> + if (ret < 0) {
> + error_report("Error binding guest notifier: %d", -ret);
> + goto err_host_notifiers;
> + }
> +
> + fs->vhost_dev.acked_features = vdev->guest_features;
> + ret = vhost_dev_start(&fs->vhost_dev, vdev);
> + if (ret < 0) {
> + error_report("Error starting vhost: %d", -ret);
> + goto err_guest_notifiers;
> + }
> +
> + /*
> + * guest_notifier_mask/pending not used yet, so just unmask
> + * everything here. virtio-pci will do the right thing by
> + * enabling/disabling irqfd.
> + */
> + for (i = 0; i < fs->vhost_dev.nvqs; i++) {
> + vhost_virtqueue_mask(&fs->vhost_dev, vdev, i, false);
> + }
> +
> + return;
> +
> +err_guest_notifiers:
> + k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> +err_host_notifiers:
> + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> +}
> +
> +static void vuf_stop(VirtIODevice *vdev)
> +{
> + VHostUserFS *fs = VHOST_USER_FS(vdev);
> + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> + int ret;
> +
> + if (!k->set_guest_notifiers) {
> + return;
> + }
> +
> + vhost_dev_stop(&fs->vhost_dev, vdev);
> +
> + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> + if (ret < 0) {
> + error_report("vhost guest notifier cleanup failed: %d", ret);
> + return;
> + }
> +
> + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> +}
> +
> +static void vuf_set_status(VirtIODevice *vdev, uint8_t status)
> +{
> + VHostUserFS *fs = VHOST_USER_FS(vdev);
> + bool should_start = status & VIRTIO_CONFIG_S_DRIVER_OK;
> +
> + if (!vdev->vm_running) {
> + should_start = false;
> + }
> +
> + if (fs->vhost_dev.started == should_start) {
> + return;
> + }
> +
> + if (should_start) {
> + vuf_start(vdev);
> + } else {
> + vuf_stop(vdev);
> + }
> +}
> +
> +static uint64_t vuf_get_features(VirtIODevice *vdev,
> + uint64_t requested_features,
> + Error **errp)
> +{
> + /* No feature bits used yet */
> + return requested_features;
> +}
> +
> +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> +{
> + /* Do nothing */
Why is this safe? Is this because this never triggers? assert(0) then?
If it triggers, the backend won't be notified, which might
cause it to get stuck.
> +}
> +
> +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> + bool mask)
> +{
> + VHostUserFS *fs = VHOST_USER_FS(vdev);
> +
> + vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
> +}
> +
> +static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
> +{
> + VHostUserFS *fs = VHOST_USER_FS(vdev);
> +
> + return vhost_virtqueue_pending(&fs->vhost_dev, idx);
> +}
> +
> +static void vuf_device_realize(DeviceState *dev, Error **errp)
> +{
> + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> + VHostUserFS *fs = VHOST_USER_FS(dev);
> + unsigned int i;
> + size_t len;
> + int ret;
> +
> + if (!fs->conf.chardev.chr) {
> + error_setg(errp, "missing chardev");
> + return;
> + }
> +
> + if (!fs->conf.tag) {
> + error_setg(errp, "missing tag property");
> + return;
> + }
> + len = strlen(fs->conf.tag);
> + if (len == 0) {
> + error_setg(errp, "tag property cannot be empty");
> + return;
> + }
> + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> + error_setg(errp, "tag property must be %zu bytes or less",
> + sizeof_field(struct virtio_fs_config, tag));
> + return;
> + }
> +
> + if (fs->conf.num_queues == 0) {
> + error_setg(errp, "num-queues property must be larger than 0");
> + return;
> + }
The strange thing is that actual # of queues is this number + 2.
And this affects an optimal number of vectors (see patch 2).
Not sure what a good solution is - include the
mandatory queues in the number?
Needs to be documented in some way.
> +
> + if (!is_power_of_2(fs->conf.queue_size)) {
> + error_setg(errp, "queue-size property must be a power of 2");
> + return;
> + }
Hmm packed ring allows non power of 2 ...
We need to look into a generic helper to support VQ
size checks.
> +
> + if (fs->conf.queue_size > VIRTQUEUE_MAX_SIZE) {
> + error_setg(errp, "queue-size property must be %u or smaller",
> + VIRTQUEUE_MAX_SIZE);
> + return;
> + }
> +
> + if (!vhost_user_init(&fs->vhost_user, &fs->conf.chardev, errp)) {
> + return;
> + }
> +
> + virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
> + sizeof(struct virtio_fs_config));
> +
> + /* Notifications queue */
> + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> +
> + /* Hiprio queue */
> + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
>
Weird, spec patch v6 says:
+\item[0] hiprio
+\item[1\ldots n] request queues
where's the Notifications queue coming from?
> + /* Request queues */
> + for (i = 0; i < fs->conf.num_queues; i++) {
> + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> + }
> +
> + /* 1 high prio queue, plus the number configured */
> + fs->vhost_dev.nvqs = 1 + fs->conf.num_queues;
> + fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
> + ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
> + VHOST_BACKEND_TYPE_USER, 0);
> + if (ret < 0) {
> + error_setg_errno(errp, -ret, "vhost_dev_init failed");
> + goto err_virtio;
> + }
> +
> + return;
> +
> +err_virtio:
> + vhost_user_cleanup(&fs->vhost_user);
> + virtio_cleanup(vdev);
> + g_free(fs->vhost_dev.vqs);
> + return;
> +}
> +
> +static void vuf_device_unrealize(DeviceState *dev, Error **errp)
> +{
> + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> + VHostUserFS *fs = VHOST_USER_FS(dev);
> +
> + /* This will stop vhost backend if appropriate. */
> + vuf_set_status(vdev, 0);
> +
> + vhost_dev_cleanup(&fs->vhost_dev);
> +
> + vhost_user_cleanup(&fs->vhost_user);
> +
> + virtio_cleanup(vdev);
> + g_free(fs->vhost_dev.vqs);
> + fs->vhost_dev.vqs = NULL;
> +}
> +
> +static const VMStateDescription vuf_vmstate = {
> + .name = "vhost-user-fs",
> + .unmigratable = 1,
> +};
> +
> +static Property vuf_properties[] = {
> + DEFINE_PROP_CHR("chardev", VHostUserFS, conf.chardev),
> + DEFINE_PROP_STRING("tag", VHostUserFS, conf.tag),
> + DEFINE_PROP_UINT16("num-queues", VHostUserFS, conf.num_queues, 1),
> + DEFINE_PROP_UINT16("queue-size", VHostUserFS, conf.queue_size, 128),
> + DEFINE_PROP_STRING("vhostfd", VHostUserFS, conf.vhostfd),
> + DEFINE_PROP_END_OF_LIST(),
> +};
> +
> +static void vuf_class_init(ObjectClass *klass, void *data)
> +{
> + DeviceClass *dc = DEVICE_CLASS(klass);
> + VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> +
> + dc->props = vuf_properties;
> + dc->vmsd = &vuf_vmstate;
> + set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> + vdc->realize = vuf_device_realize;
> + vdc->unrealize = vuf_device_unrealize;
> + vdc->get_features = vuf_get_features;
> + vdc->get_config = vuf_get_config;
> + vdc->set_status = vuf_set_status;
> + vdc->guest_notifier_mask = vuf_guest_notifier_mask;
> + vdc->guest_notifier_pending = vuf_guest_notifier_pending;
> +}
> +
> +static const TypeInfo vuf_info = {
> + .name = TYPE_VHOST_USER_FS,
> + .parent = TYPE_VIRTIO_DEVICE,
> + .instance_size = sizeof(VHostUserFS),
> + .class_init = vuf_class_init,
> +};
> +
> +static void vuf_register_types(void)
> +{
> + type_register_static(&vuf_info);
> +}
> +
> +type_init(vuf_register_types)
> diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-user-fs.h
> new file mode 100644
> index 0000000000..d07ab134b9
> --- /dev/null
> +++ b/include/hw/virtio/vhost-user-fs.h
> @@ -0,0 +1,45 @@
> +/*
> + * Vhost-user filesystem virtio device
> + *
> + * Copyright 2018 Red Hat, Inc.
> + *
> + * Authors:
> + * Stefan Hajnoczi <stefanha@redhat.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * (at your option) any later version. See the COPYING file in the
> + * top-level directory.
> + */
> +
> +#ifndef _QEMU_VHOST_USER_FS_H
> +#define _QEMU_VHOST_USER_FS_H
> +
> +#include "hw/virtio/virtio.h"
> +#include "hw/virtio/vhost.h"
> +#include "hw/virtio/vhost-user.h"
> +#include "chardev/char-fe.h"
> +
> +#define TYPE_VHOST_USER_FS "x-vhost-user-fs-device"
> +#define VHOST_USER_FS(obj) \
> + OBJECT_CHECK(VHostUserFS, (obj), TYPE_VHOST_USER_FS)
> +
> +typedef struct {
> + CharBackend chardev;
> + char *tag;
> + uint16_t num_queues;
> + uint16_t queue_size;
> + char *vhostfd;
> +} VHostUserFSConf;
> +
> +typedef struct {
> + /*< private >*/
> + VirtIODevice parent;
> + VHostUserFSConf conf;
> + struct vhost_virtqueue *vhost_vqs;
> + struct vhost_dev vhost_dev;
> + VhostUserState vhost_user;
> +
> + /*< public >*/
> +} VHostUserFS;
> +
> +#endif /* _QEMU_VHOST_USER_FS_H */
> diff --git a/include/standard-headers/linux/virtio_fs.h b/include/standard-headers/linux/virtio_fs.h
> new file mode 100644
> index 0000000000..4f811a0b70
> --- /dev/null
> +++ b/include/standard-headers/linux/virtio_fs.h
> @@ -0,0 +1,41 @@
> +#ifndef _LINUX_VIRTIO_FS_H
> +#define _LINUX_VIRTIO_FS_H
> +/* This header is BSD licensed so anyone can use the definitions to implement
> + * compatible drivers/servers.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in the
> + * documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of IBM nor the names of its contributors
> + * may be used to endorse or promote products derived from this software
> + * without specific prior written permission.
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE. */
> +#include "standard-headers/linux/types.h"
> +#include "standard-headers/linux/virtio_ids.h"
> +#include "standard-headers/linux/virtio_config.h"
> +#include "standard-headers/linux/virtio_types.h"
> +
> +struct virtio_fs_config {
> + /* Filesystem name (UTF-8, not NUL-terminated, padded with NULs) */
> + uint8_t tag[36];
> +
> + /* Number of request queues */
> + uint32_t num_queues;
> +} QEMU_PACKED;
> +
> +#endif /* _LINUX_VIRTIO_FS_H */
> diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
> index 32b2f94d1f..73fc004807 100644
> --- a/include/standard-headers/linux/virtio_ids.h
> +++ b/include/standard-headers/linux/virtio_ids.h
> @@ -43,6 +43,7 @@
> #define VIRTIO_ID_INPUT 18 /* virtio input */
> #define VIRTIO_ID_VSOCK 19 /* virtio vsock transport */
> #define VIRTIO_ID_CRYPTO 20 /* virtio crypto */
> +#define VIRTIO_ID_FS 26 /* virtio filesystem */
> #define VIRTIO_ID_PMEM 27 /* virtio pmem */
>
> #endif /* _LINUX_VIRTIO_IDS_H */
> --
> 2.21.0
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-18 11:08 ` Michael S. Tsirkin
@ 2019-08-20 12:24 ` Cornelia Huck
2019-08-20 13:39 ` Dr. David Alan Gilbert
2019-08-21 17:52 ` Dr. David Alan Gilbert
2019-08-21 19:11 ` Dr. David Alan Gilbert
2 siblings, 1 reply; 13+ messages in thread
From: Cornelia Huck @ 2019-08-20 12:24 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: vgoyal, Dr. David Alan Gilbert (git), stefanha, qemu-devel
On Sun, 18 Aug 2019 07:08:31 -0400
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> >
> > The virtio-fs virtio device provides shared file system access using
> > the FUSE protocol carried over virtio.
> > The actual file server is implemented in an external vhost-user-fs device
> > backend process.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> > configure | 13 +
> > hw/virtio/Makefile.objs | 1 +
> > hw/virtio/vhost-user-fs.c | 297 ++++++++++++++++++++
> > include/hw/virtio/vhost-user-fs.h | 45 +++
> > include/standard-headers/linux/virtio_fs.h | 41 +++
> > include/standard-headers/linux/virtio_ids.h | 1 +
> > 6 files changed, 398 insertions(+)
> > create mode 100644 hw/virtio/vhost-user-fs.c
> > create mode 100644 include/hw/virtio/vhost-user-fs.h
> > create mode 100644 include/standard-headers/linux/virtio_fs.h
> > diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> > new file mode 100644
> > index 0000000000..2753c2c07a
> > --- /dev/null
> > +++ b/hw/virtio/vhost-user-fs.c
> > @@ -0,0 +1,297 @@
> > +/*
> > + * Vhost-user filesystem virtio device
> > + *
> > + * Copyright 2018 Red Hat, Inc.
> > + *
> > + * Authors:
> > + * Stefan Hajnoczi <stefanha@redhat.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > + * (at your option) any later version. See the COPYING file in the
> > + * top-level directory.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include <sys/ioctl.h>
> > +#include "standard-headers/linux/virtio_fs.h"
> > +#include "qapi/error.h"
> > +#include "hw/virtio/virtio-bus.h"
> > +#include "hw/virtio/virtio-access.h"
> > +#include "qemu/error-report.h"
> > +#include "hw/virtio/vhost-user-fs.h"
> > +#include "monitor/monitor.h"
JFYI, this needs to include hw/qdev-properties.h as well against the
latest code. (As does the pci part.)
> > +
> > +static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + struct virtio_fs_config fscfg = {};
> > +
> > + memcpy((char *)fscfg.tag, fs->conf.tag,
> > + MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag)));
> > +
> > + virtio_stl_p(vdev, &fscfg.num_queues, fs->conf.num_queues);
> > +
> > + memcpy(config, &fscfg, sizeof(fscfg));
> > +}
> > +
> > +static void vuf_start(VirtIODevice *vdev)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> > + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> > + int ret;
> > + int i;
> > +
> > + if (!k->set_guest_notifiers) {
> > + error_report("binding does not support guest notifiers");
> > + return;
> > + }
> > +
> > + ret = vhost_dev_enable_notifiers(&fs->vhost_dev, vdev);
> > + if (ret < 0) {
> > + error_report("Error enabling host notifiers: %d", -ret);
> > + return;
> > + }
> > +
> > + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, true);
> > + if (ret < 0) {
> > + error_report("Error binding guest notifier: %d", -ret);
> > + goto err_host_notifiers;
> > + }
> > +
> > + fs->vhost_dev.acked_features = vdev->guest_features;
> > + ret = vhost_dev_start(&fs->vhost_dev, vdev);
> > + if (ret < 0) {
> > + error_report("Error starting vhost: %d", -ret);
> > + goto err_guest_notifiers;
> > + }
> > +
> > + /*
> > + * guest_notifier_mask/pending not used yet, so just unmask
> > + * everything here. virtio-pci will do the right thing by
> > + * enabling/disabling irqfd.
Referring to 'virtio-pci' seems a bit suspicious :) Should that be 'the
transport'? (And 'the right thing' is not really self-explanatory...)
(I have wired it up for virtio-ccw, but have not actually tried it out.
Will send it out once I did.)
> > + */
> > + for (i = 0; i < fs->vhost_dev.nvqs; i++) {
> > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, i, false);
> > + }
> > +
> > + return;
> > +
> > +err_guest_notifiers:
> > + k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> > +err_host_notifiers:
> > + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> > +}
(...)
> > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > +{
> > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > + unsigned int i;
> > + size_t len;
> > + int ret;
> > +
> > + if (!fs->conf.chardev.chr) {
> > + error_setg(errp, "missing chardev");
> > + return;
> > + }
> > +
> > + if (!fs->conf.tag) {
> > + error_setg(errp, "missing tag property");
> > + return;
> > + }
> > + len = strlen(fs->conf.tag);
> > + if (len == 0) {
> > + error_setg(errp, "tag property cannot be empty");
> > + return;
> > + }
> > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > + error_setg(errp, "tag property must be %zu bytes or less",
> > + sizeof_field(struct virtio_fs_config, tag));
> > + return;
> > + }
> > +
> > + if (fs->conf.num_queues == 0) {
> > + error_setg(errp, "num-queues property must be larger than 0");
> > + return;
> > + }
>
> The strange thing is that actual # of queues is this number + 2.
> And this affects an optimal number of vectors (see patch 2).
> Not sure what a good solution is - include the
> mandatory queues in the number?
> Needs to be documented in some way.
I think the spec states that num_queues in the config space is the
number of request queues only. Can we rename to num_request_queues? The
hiprio queue is not really configurable, anyway.
>
> > +
> > + if (!is_power_of_2(fs->conf.queue_size)) {
> > + error_setg(errp, "queue-size property must be a power of 2");
> > + return;
> > + }
>
> Hmm packed ring allows non power of 2 ...
> We need to look into a generic helper to support VQ
> size checks.
Huh, I didn't notice before that there are several devices which
already allow configuring the queue size... looking, there seem to be
the following cases:
- bound checks and checks for power of 2 (blk, net)
- no checks (scsi) -- isn't that dangerous, as virtio_add_queue() will
abort() for a too large value?
Anyway, if we have a non power of 2 size and the driver does not
negotiate packed, we can just fail setting FEATURES_OK, so dropping the
power of 2 check should be fine, at least when we add packed support.
>
> > +
> > + if (fs->conf.queue_size > VIRTQUEUE_MAX_SIZE) {
> > + error_setg(errp, "queue-size property must be %u or smaller",
> > + VIRTQUEUE_MAX_SIZE);
> > + return;
> > + }
> > +
> > + if (!vhost_user_init(&fs->vhost_user, &fs->conf.chardev, errp)) {
> > + return;
> > + }
> > +
> > + virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
> > + sizeof(struct virtio_fs_config));
> > +
> > + /* Notifications queue */
> > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > +
> > + /* Hiprio queue */
> > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> >
>
> Weird, spec patch v6 says:
>
> +\item[0] hiprio
> +\item[1\ldots n] request queues
>
> where's the Notifications queue coming from?
Maybe an old name of the hiprio queue?
>
> > + /* Request queues */
> > + for (i = 0; i < fs->conf.num_queues; i++) {
> > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > + }
> > +
> > + /* 1 high prio queue, plus the number configured */
> > + fs->vhost_dev.nvqs = 1 + fs->conf.num_queues;
Anyway, the notifications queue needs to go, or this is wrong :)
> > + fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
> > + ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
> > + VHOST_BACKEND_TYPE_USER, 0);
> > + if (ret < 0) {
> > + error_setg_errno(errp, -ret, "vhost_dev_init failed");
> > + goto err_virtio;
> > + }
> > +
> > + return;
> > +
> > +err_virtio:
> > + vhost_user_cleanup(&fs->vhost_user);
> > + virtio_cleanup(vdev);
> > + g_free(fs->vhost_dev.vqs);
> > + return;
> > +}
(...)
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-20 12:24 ` Cornelia Huck
@ 2019-08-20 13:39 ` Dr. David Alan Gilbert
0 siblings, 0 replies; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2019-08-20 13:39 UTC (permalink / raw)
To: Cornelia Huck; +Cc: vgoyal, qemu-devel, stefanha, Michael S. Tsirkin
* Cornelia Huck (cohuck@redhat.com) wrote:
> On Sun, 18 Aug 2019 07:08:31 -0400
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>
> > On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> > >
> > > The virtio-fs virtio device provides shared file system access using
> > > the FUSE protocol carried over virtio.
> > > The actual file server is implemented in an external vhost-user-fs device
> > > backend process.
> > >
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > > ---
> > > configure | 13 +
> > > hw/virtio/Makefile.objs | 1 +
> > > hw/virtio/vhost-user-fs.c | 297 ++++++++++++++++++++
> > > include/hw/virtio/vhost-user-fs.h | 45 +++
> > > include/standard-headers/linux/virtio_fs.h | 41 +++
> > > include/standard-headers/linux/virtio_ids.h | 1 +
> > > 6 files changed, 398 insertions(+)
> > > create mode 100644 hw/virtio/vhost-user-fs.c
> > > create mode 100644 include/hw/virtio/vhost-user-fs.h
> > > create mode 100644 include/standard-headers/linux/virtio_fs.h
>
> > > diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> > > new file mode 100644
> > > index 0000000000..2753c2c07a
> > > --- /dev/null
> > > +++ b/hw/virtio/vhost-user-fs.c
> > > @@ -0,0 +1,297 @@
> > > +/*
> > > + * Vhost-user filesystem virtio device
> > > + *
> > > + * Copyright 2018 Red Hat, Inc.
> > > + *
> > > + * Authors:
> > > + * Stefan Hajnoczi <stefanha@redhat.com>
> > > + *
> > > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > > + * (at your option) any later version. See the COPYING file in the
> > > + * top-level directory.
> > > + */
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include <sys/ioctl.h>
> > > +#include "standard-headers/linux/virtio_fs.h"
> > > +#include "qapi/error.h"
> > > +#include "hw/virtio/virtio-bus.h"
> > > +#include "hw/virtio/virtio-access.h"
> > > +#include "qemu/error-report.h"
> > > +#include "hw/virtio/vhost-user-fs.h"
> > > +#include "monitor/monitor.h"
>
> JFYI, this needs to include hw/qdev-properties.h as well against the
> latest code. (As does the pci part.)
Thanks! Updated my local version.
Dave
> > > +
> > > +static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
> > > +{
> > > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > + struct virtio_fs_config fscfg = {};
> > > +
> > > + memcpy((char *)fscfg.tag, fs->conf.tag,
> > > + MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag)));
> > > +
> > > + virtio_stl_p(vdev, &fscfg.num_queues, fs->conf.num_queues);
> > > +
> > > + memcpy(config, &fscfg, sizeof(fscfg));
> > > +}
> > > +
> > > +static void vuf_start(VirtIODevice *vdev)
> > > +{
> > > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> > > + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> > > + int ret;
> > > + int i;
> > > +
> > > + if (!k->set_guest_notifiers) {
> > > + error_report("binding does not support guest notifiers");
> > > + return;
> > > + }
> > > +
> > > + ret = vhost_dev_enable_notifiers(&fs->vhost_dev, vdev);
> > > + if (ret < 0) {
> > > + error_report("Error enabling host notifiers: %d", -ret);
> > > + return;
> > > + }
> > > +
> > > + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, true);
> > > + if (ret < 0) {
> > > + error_report("Error binding guest notifier: %d", -ret);
> > > + goto err_host_notifiers;
> > > + }
> > > +
> > > + fs->vhost_dev.acked_features = vdev->guest_features;
> > > + ret = vhost_dev_start(&fs->vhost_dev, vdev);
> > > + if (ret < 0) {
> > > + error_report("Error starting vhost: %d", -ret);
> > > + goto err_guest_notifiers;
> > > + }
> > > +
> > > + /*
> > > + * guest_notifier_mask/pending not used yet, so just unmask
> > > + * everything here. virtio-pci will do the right thing by
> > > + * enabling/disabling irqfd.
>
> Referring to 'virtio-pci' seems a bit suspicious :) Should that be 'the
> transport'? (And 'the right thing' is not really self-explanatory...)
>
> (I have wired it up for virtio-ccw, but have not actually tried it out.
> Will send it out once I did.)
>
> > > + */
> > > + for (i = 0; i < fs->vhost_dev.nvqs; i++) {
> > > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, i, false);
> > > + }
> > > +
> > > + return;
> > > +
> > > +err_guest_notifiers:
> > > + k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> > > +err_host_notifiers:
> > > + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> > > +}
>
> (...)
>
> > > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > > +{
> > > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > > + unsigned int i;
> > > + size_t len;
> > > + int ret;
> > > +
> > > + if (!fs->conf.chardev.chr) {
> > > + error_setg(errp, "missing chardev");
> > > + return;
> > > + }
> > > +
> > > + if (!fs->conf.tag) {
> > > + error_setg(errp, "missing tag property");
> > > + return;
> > > + }
> > > + len = strlen(fs->conf.tag);
> > > + if (len == 0) {
> > > + error_setg(errp, "tag property cannot be empty");
> > > + return;
> > > + }
> > > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > > + error_setg(errp, "tag property must be %zu bytes or less",
> > > + sizeof_field(struct virtio_fs_config, tag));
> > > + return;
> > > + }
> > > +
> > > + if (fs->conf.num_queues == 0) {
> > > + error_setg(errp, "num-queues property must be larger than 0");
> > > + return;
> > > + }
> >
> > The strange thing is that actual # of queues is this number + 2.
> > And this affects an optimal number of vectors (see patch 2).
> > Not sure what a good solution is - include the
> > mandatory queues in the number?
> > Needs to be documented in some way.
>
> I think the spec states that num_queues in the config space is the
> number of request queues only. Can we rename to num_request_queues? The
> hiprio queue is not really configurable, anyway.
>
> >
> > > +
> > > + if (!is_power_of_2(fs->conf.queue_size)) {
> > > + error_setg(errp, "queue-size property must be a power of 2");
> > > + return;
> > > + }
> >
> > Hmm packed ring allows non power of 2 ...
> > We need to look into a generic helper to support VQ
> > size checks.
>
> Huh, I didn't notice before that there are several devices which
> already allow configuring the queue size... looking, there seem to be
> the following cases:
>
> - bound checks and checks for power of 2 (blk, net)
> - no checks (scsi) -- isn't that dangerous, as virtio_add_queue() will
> abort() for a too large value?
>
> Anyway, if we have a non power of 2 size and the driver does not
> negotiate packed, we can just fail setting FEATURES_OK, so dropping the
> power of 2 check should be fine, at least when we add packed support.
>
> >
> > > +
> > > + if (fs->conf.queue_size > VIRTQUEUE_MAX_SIZE) {
> > > + error_setg(errp, "queue-size property must be %u or smaller",
> > > + VIRTQUEUE_MAX_SIZE);
> > > + return;
> > > + }
> > > +
> > > + if (!vhost_user_init(&fs->vhost_user, &fs->conf.chardev, errp)) {
> > > + return;
> > > + }
> > > +
> > > + virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
> > > + sizeof(struct virtio_fs_config));
> > > +
> > > + /* Notifications queue */
> > > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > > +
> > > + /* Hiprio queue */
> > > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > >
> >
> > Weird, spec patch v6 says:
> >
> > +\item[0] hiprio
> > +\item[1\ldots n] request queues
> >
> > where's the Notifications queue coming from?
>
> Maybe an old name of the hiprio queue?
>
> >
> > > + /* Request queues */
> > > + for (i = 0; i < fs->conf.num_queues; i++) {
> > > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > > + }
> > > +
> > > + /* 1 high prio queue, plus the number configured */
> > > + fs->vhost_dev.nvqs = 1 + fs->conf.num_queues;
>
> Anyway, the notifications queue needs to go, or this is wrong :)
>
> > > + fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
> > > + ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
> > > + VHOST_BACKEND_TYPE_USER, 0);
> > > + if (ret < 0) {
> > > + error_setg_errno(errp, -ret, "vhost_dev_init failed");
> > > + goto err_virtio;
> > > + }
> > > +
> > > + return;
> > > +
> > > +err_virtio:
> > > + vhost_user_cleanup(&fs->vhost_user);
> > > + virtio_cleanup(vdev);
> > > + g_free(fs->vhost_dev.vqs);
> > > + return;
> > > +}
>
> (...)
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-18 11:08 ` Michael S. Tsirkin
2019-08-20 12:24 ` Cornelia Huck
@ 2019-08-21 17:52 ` Dr. David Alan Gilbert
2019-08-21 19:11 ` Dr. David Alan Gilbert
2 siblings, 0 replies; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2019-08-21 17:52 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: qemu-devel, stefanha, vgoyal
* Michael S. Tsirkin (mst@redhat.com) wrote:
> On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> > + /* Hiprio queue */
> > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> >
>
> Weird, spec patch v6 says:
>
> +\item[0] hiprio
> +\item[1\ldots n] request queues
>
> where's the Notifications queue coming from?
Oops, that's a leftover from when we used to have a notification queue;
all the other parts of it are gone.
Dave
> > + /* Request queues */
> > + for (i = 0; i < fs->conf.num_queues; i++) {
> > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > + }
> > +
> > + /* 1 high prio queue, plus the number configured */
> > + fs->vhost_dev.nvqs = 1 + fs->conf.num_queues;
> > + fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
> > + ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
> > + VHOST_BACKEND_TYPE_USER, 0);
> > + if (ret < 0) {
> > + error_setg_errno(errp, -ret, "vhost_dev_init failed");
> > + goto err_virtio;
> > + }
> > +
> > + return;
> > +
> > +err_virtio:
> > + vhost_user_cleanup(&fs->vhost_user);
> > + virtio_cleanup(vdev);
> > + g_free(fs->vhost_dev.vqs);
> > + return;
> > +}
> > +
> > +static void vuf_device_unrealize(DeviceState *dev, Error **errp)
> > +{
> > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > +
> > + /* This will stop vhost backend if appropriate. */
> > + vuf_set_status(vdev, 0);
> > +
> > + vhost_dev_cleanup(&fs->vhost_dev);
> > +
> > + vhost_user_cleanup(&fs->vhost_user);
> > +
> > + virtio_cleanup(vdev);
> > + g_free(fs->vhost_dev.vqs);
> > + fs->vhost_dev.vqs = NULL;
> > +}
> > +
> > +static const VMStateDescription vuf_vmstate = {
> > + .name = "vhost-user-fs",
> > + .unmigratable = 1,
> > +};
> > +
> > +static Property vuf_properties[] = {
> > + DEFINE_PROP_CHR("chardev", VHostUserFS, conf.chardev),
> > + DEFINE_PROP_STRING("tag", VHostUserFS, conf.tag),
> > + DEFINE_PROP_UINT16("num-queues", VHostUserFS, conf.num_queues, 1),
> > + DEFINE_PROP_UINT16("queue-size", VHostUserFS, conf.queue_size, 128),
> > + DEFINE_PROP_STRING("vhostfd", VHostUserFS, conf.vhostfd),
> > + DEFINE_PROP_END_OF_LIST(),
> > +};
> > +
> > +static void vuf_class_init(ObjectClass *klass, void *data)
> > +{
> > + DeviceClass *dc = DEVICE_CLASS(klass);
> > + VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> > +
> > + dc->props = vuf_properties;
> > + dc->vmsd = &vuf_vmstate;
> > + set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> > + vdc->realize = vuf_device_realize;
> > + vdc->unrealize = vuf_device_unrealize;
> > + vdc->get_features = vuf_get_features;
> > + vdc->get_config = vuf_get_config;
> > + vdc->set_status = vuf_set_status;
> > + vdc->guest_notifier_mask = vuf_guest_notifier_mask;
> > + vdc->guest_notifier_pending = vuf_guest_notifier_pending;
> > +}
> > +
> > +static const TypeInfo vuf_info = {
> > + .name = TYPE_VHOST_USER_FS,
> > + .parent = TYPE_VIRTIO_DEVICE,
> > + .instance_size = sizeof(VHostUserFS),
> > + .class_init = vuf_class_init,
> > +};
> > +
> > +static void vuf_register_types(void)
> > +{
> > + type_register_static(&vuf_info);
> > +}
> > +
> > +type_init(vuf_register_types)
> > diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-user-fs.h
> > new file mode 100644
> > index 0000000000..d07ab134b9
> > --- /dev/null
> > +++ b/include/hw/virtio/vhost-user-fs.h
> > @@ -0,0 +1,45 @@
> > +/*
> > + * Vhost-user filesystem virtio device
> > + *
> > + * Copyright 2018 Red Hat, Inc.
> > + *
> > + * Authors:
> > + * Stefan Hajnoczi <stefanha@redhat.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > + * (at your option) any later version. See the COPYING file in the
> > + * top-level directory.
> > + */
> > +
> > +#ifndef _QEMU_VHOST_USER_FS_H
> > +#define _QEMU_VHOST_USER_FS_H
> > +
> > +#include "hw/virtio/virtio.h"
> > +#include "hw/virtio/vhost.h"
> > +#include "hw/virtio/vhost-user.h"
> > +#include "chardev/char-fe.h"
> > +
> > +#define TYPE_VHOST_USER_FS "x-vhost-user-fs-device"
> > +#define VHOST_USER_FS(obj) \
> > + OBJECT_CHECK(VHostUserFS, (obj), TYPE_VHOST_USER_FS)
> > +
> > +typedef struct {
> > + CharBackend chardev;
> > + char *tag;
> > + uint16_t num_queues;
> > + uint16_t queue_size;
> > + char *vhostfd;
> > +} VHostUserFSConf;
> > +
> > +typedef struct {
> > + /*< private >*/
> > + VirtIODevice parent;
> > + VHostUserFSConf conf;
> > + struct vhost_virtqueue *vhost_vqs;
> > + struct vhost_dev vhost_dev;
> > + VhostUserState vhost_user;
> > +
> > + /*< public >*/
> > +} VHostUserFS;
> > +
> > +#endif /* _QEMU_VHOST_USER_FS_H */
> > diff --git a/include/standard-headers/linux/virtio_fs.h b/include/standard-headers/linux/virtio_fs.h
> > new file mode 100644
> > index 0000000000..4f811a0b70
> > --- /dev/null
> > +++ b/include/standard-headers/linux/virtio_fs.h
> > @@ -0,0 +1,41 @@
> > +#ifndef _LINUX_VIRTIO_FS_H
> > +#define _LINUX_VIRTIO_FS_H
> > +/* This header is BSD licensed so anyone can use the definitions to implement
> > + * compatible drivers/servers.
> > + *
> > + * Redistribution and use in source and binary forms, with or without
> > + * modification, are permitted provided that the following conditions
> > + * are met:
> > + * 1. Redistributions of source code must retain the above copyright
> > + * notice, this list of conditions and the following disclaimer.
> > + * 2. Redistributions in binary form must reproduce the above copyright
> > + * notice, this list of conditions and the following disclaimer in the
> > + * documentation and/or other materials provided with the distribution.
> > + * 3. Neither the name of IBM nor the names of its contributors
> > + * may be used to endorse or promote products derived from this software
> > + * without specific prior written permission.
> > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND
> > + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> > + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> > + * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> > + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> > + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> > + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> > + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> > + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> > + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> > + * SUCH DAMAGE. */
> > +#include "standard-headers/linux/types.h"
> > +#include "standard-headers/linux/virtio_ids.h"
> > +#include "standard-headers/linux/virtio_config.h"
> > +#include "standard-headers/linux/virtio_types.h"
> > +
> > +struct virtio_fs_config {
> > + /* Filesystem name (UTF-8, not NUL-terminated, padded with NULs) */
> > + uint8_t tag[36];
> > +
> > + /* Number of request queues */
> > + uint32_t num_queues;
> > +} QEMU_PACKED;
> > +
> > +#endif /* _LINUX_VIRTIO_FS_H */
> > diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
> > index 32b2f94d1f..73fc004807 100644
> > --- a/include/standard-headers/linux/virtio_ids.h
> > +++ b/include/standard-headers/linux/virtio_ids.h
> > @@ -43,6 +43,7 @@
> > #define VIRTIO_ID_INPUT 18 /* virtio input */
> > #define VIRTIO_ID_VSOCK 19 /* virtio vsock transport */
> > #define VIRTIO_ID_CRYPTO 20 /* virtio crypto */
> > +#define VIRTIO_ID_FS 26 /* virtio filesystem */
> > #define VIRTIO_ID_PMEM 27 /* virtio pmem */
> >
> > #endif /* _LINUX_VIRTIO_IDS_H */
> > --
> > 2.21.0
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-18 11:08 ` Michael S. Tsirkin
2019-08-20 12:24 ` Cornelia Huck
2019-08-21 17:52 ` Dr. David Alan Gilbert
@ 2019-08-21 19:11 ` Dr. David Alan Gilbert
2019-08-22 8:52 ` Stefan Hajnoczi
2 siblings, 1 reply; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2019-08-21 19:11 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: qemu-devel, stefanha, vgoyal
* Michael S. Tsirkin (mst@redhat.com) wrote:
> On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> >
> > The virtio-fs virtio device provides shared file system access using
> > the FUSE protocol carried over virtio.
> > The actual file server is implemented in an external vhost-user-fs device
> > backend process.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> > configure | 13 +
> > hw/virtio/Makefile.objs | 1 +
> > hw/virtio/vhost-user-fs.c | 297 ++++++++++++++++++++
> > include/hw/virtio/vhost-user-fs.h | 45 +++
> > include/standard-headers/linux/virtio_fs.h | 41 +++
> > include/standard-headers/linux/virtio_ids.h | 1 +
> > 6 files changed, 398 insertions(+)
> > create mode 100644 hw/virtio/vhost-user-fs.c
> > create mode 100644 include/hw/virtio/vhost-user-fs.h
> > create mode 100644 include/standard-headers/linux/virtio_fs.h
> >
> > diff --git a/configure b/configure
> > index 714e7fb6a1..e7e33ee783 100755
> > --- a/configure
> > +++ b/configure
> > @@ -382,6 +382,7 @@ vhost_crypto=""
> > vhost_scsi=""
> > vhost_vsock=""
> > vhost_user=""
> > +vhost_user_fs=""
> > kvm="no"
> > hax="no"
> > hvf="no"
> > @@ -1316,6 +1317,10 @@ for opt do
> > ;;
> > --enable-vhost-vsock) vhost_vsock="yes"
> > ;;
> > + --disable-vhost-user-fs) vhost_user_fs="no"
> > + ;;
> > + --enable-vhost-user-fs) vhost_user_fs="yes"
> > + ;;
> > --disable-opengl) opengl="no"
> > ;;
> > --enable-opengl) opengl="yes"
> > @@ -2269,6 +2274,10 @@ test "$vhost_crypto" = "" && vhost_crypto=$vhost_user
> > if test "$vhost_crypto" = "yes" && test "$vhost_user" = "no"; then
> > error_exit "--enable-vhost-crypto requires --enable-vhost-user"
> > fi
> > +test "$vhost_user_fs" = "" && vhost_user_fs=$vhost_user
> > +if test "$vhost_user_fs" = "yes" && test "$vhost_user" = "no"; then
> > + error_exit "--enable-vhost-user-fs requires --enable-vhost-user"
> > +fi
> >
> > # OR the vhost-kernel and vhost-user values for simplicity
> > if test "$vhost_net" = ""; then
> > @@ -6425,6 +6434,7 @@ echo "vhost-crypto support $vhost_crypto"
> > echo "vhost-scsi support $vhost_scsi"
> > echo "vhost-vsock support $vhost_vsock"
> > echo "vhost-user support $vhost_user"
> > +echo "vhost-user-fs support $vhost_user_fs"
> > echo "Trace backends $trace_backends"
> > if have_backend "simple"; then
> > echo "Trace output file $trace_file-<pid>"
> > @@ -6921,6 +6931,9 @@ fi
> > if test "$vhost_user" = "yes" ; then
> > echo "CONFIG_VHOST_USER=y" >> $config_host_mak
> > fi
> > +if test "$vhost_user_fs" = "yes" ; then
> > + echo "CONFIG_VHOST_USER_FS=y" >> $config_host_mak
> > +fi
> > if test "$blobs" = "yes" ; then
> > echo "INSTALL_BLOBS=yes" >> $config_host_mak
> > fi
> > diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> > index 964ce78607..47ffbf22c4 100644
> > --- a/hw/virtio/Makefile.objs
> > +++ b/hw/virtio/Makefile.objs
> > @@ -11,6 +11,7 @@ common-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
> > common-obj-$(CONFIG_VIRTIO_MMIO) += virtio-mmio.o
> > obj-$(CONFIG_VIRTIO_BALLOON) += virtio-balloon.o
> > obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> > +obj-$(CONFIG_VHOST_USER_FS) += vhost-user-fs.o
> > obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
> > obj-$(CONFIG_VIRTIO_PMEM) += virtio-pmem.o
> > common-obj-$(call land,$(CONFIG_VIRTIO_PMEM),$(CONFIG_VIRTIO_PCI)) += virtio-pmem-pci.o
> > diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> > new file mode 100644
> > index 0000000000..2753c2c07a
> > --- /dev/null
> > +++ b/hw/virtio/vhost-user-fs.c
> > @@ -0,0 +1,297 @@
> > +/*
> > + * Vhost-user filesystem virtio device
> > + *
> > + * Copyright 2018 Red Hat, Inc.
> > + *
> > + * Authors:
> > + * Stefan Hajnoczi <stefanha@redhat.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > + * (at your option) any later version. See the COPYING file in the
> > + * top-level directory.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include <sys/ioctl.h>
> > +#include "standard-headers/linux/virtio_fs.h"
> > +#include "qapi/error.h"
> > +#include "hw/virtio/virtio-bus.h"
> > +#include "hw/virtio/virtio-access.h"
> > +#include "qemu/error-report.h"
> > +#include "hw/virtio/vhost-user-fs.h"
> > +#include "monitor/monitor.h"
> > +
> > +static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + struct virtio_fs_config fscfg = {};
> > +
> > + memcpy((char *)fscfg.tag, fs->conf.tag,
> > + MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag)));
> > +
> > + virtio_stl_p(vdev, &fscfg.num_queues, fs->conf.num_queues);
> > +
> > + memcpy(config, &fscfg, sizeof(fscfg));
> > +}
> > +
> > +static void vuf_start(VirtIODevice *vdev)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> > + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> > + int ret;
> > + int i;
> > +
> > + if (!k->set_guest_notifiers) {
> > + error_report("binding does not support guest notifiers");
> > + return;
> > + }
> > +
> > + ret = vhost_dev_enable_notifiers(&fs->vhost_dev, vdev);
> > + if (ret < 0) {
> > + error_report("Error enabling host notifiers: %d", -ret);
> > + return;
> > + }
> > +
> > + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, true);
> > + if (ret < 0) {
> > + error_report("Error binding guest notifier: %d", -ret);
> > + goto err_host_notifiers;
> > + }
> > +
> > + fs->vhost_dev.acked_features = vdev->guest_features;
> > + ret = vhost_dev_start(&fs->vhost_dev, vdev);
> > + if (ret < 0) {
> > + error_report("Error starting vhost: %d", -ret);
> > + goto err_guest_notifiers;
> > + }
> > +
> > + /*
> > + * guest_notifier_mask/pending not used yet, so just unmask
> > + * everything here. virtio-pci will do the right thing by
> > + * enabling/disabling irqfd.
> > + */
> > + for (i = 0; i < fs->vhost_dev.nvqs; i++) {
> > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, i, false);
> > + }
> > +
> > + return;
> > +
> > +err_guest_notifiers:
> > + k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> > +err_host_notifiers:
> > + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> > +}
> > +
> > +static void vuf_stop(VirtIODevice *vdev)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> > + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> > + int ret;
> > +
> > + if (!k->set_guest_notifiers) {
> > + return;
> > + }
> > +
> > + vhost_dev_stop(&fs->vhost_dev, vdev);
> > +
> > + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> > + if (ret < 0) {
> > + error_report("vhost guest notifier cleanup failed: %d", ret);
> > + return;
> > + }
> > +
> > + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> > +}
> > +
> > +static void vuf_set_status(VirtIODevice *vdev, uint8_t status)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + bool should_start = status & VIRTIO_CONFIG_S_DRIVER_OK;
> > +
> > + if (!vdev->vm_running) {
> > + should_start = false;
> > + }
> > +
> > + if (fs->vhost_dev.started == should_start) {
> > + return;
> > + }
> > +
> > + if (should_start) {
> > + vuf_start(vdev);
> > + } else {
> > + vuf_stop(vdev);
> > + }
> > +}
> > +
> > +static uint64_t vuf_get_features(VirtIODevice *vdev,
> > + uint64_t requested_features,
> > + Error **errp)
> > +{
> > + /* No feature bits used yet */
> > + return requested_features;
> > +}
> > +
> > +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > +{
> > + /* Do nothing */
>
> Why is this safe? Is this because this never triggers? assert(0) then?
> If it triggers then backend won't be notified, which might
> cause it to get stuck.
We never process these queues in qemu - always in the guest; so am I
correct in thinking those shouldn't be used?
> > +}
> > +
> > +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > + bool mask)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > +
> > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
> > +}
> > +
> > +static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > +
> > + return vhost_virtqueue_pending(&fs->vhost_dev, idx);
> > +}
> > +
> > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > +{
> > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > + unsigned int i;
> > + size_t len;
> > + int ret;
> > +
> > + if (!fs->conf.chardev.chr) {
> > + error_setg(errp, "missing chardev");
> > + return;
> > + }
> > +
> > + if (!fs->conf.tag) {
> > + error_setg(errp, "missing tag property");
> > + return;
> > + }
> > + len = strlen(fs->conf.tag);
> > + if (len == 0) {
> > + error_setg(errp, "tag property cannot be empty");
> > + return;
> > + }
> > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > + error_setg(errp, "tag property must be %zu bytes or less",
> > + sizeof_field(struct virtio_fs_config, tag));
> > + return;
> > + }
> > +
> > + if (fs->conf.num_queues == 0) {
> > + error_setg(errp, "num-queues property must be larger than 0");
> > + return;
> > + }
>
> The strange thing is that actual # of queues is this number + 2.
> And this affects an optimal number of vectors (see patch 2).
> Not sure what a good solution is - include the
> mandatory queues in the number?
> Needs to be documented in some way.
Should we be doing nvectors the same way virtio-scsi-pci does it;
with a magic 'unspecified' default where it sets the nvectors based on
the number of queues?
I think my preference is not to show the users the mandatory queues.
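The virtio-scsi-pci-style default could be sketched roughly as below. The names, the sentinel value, and the exact "+ 2" accounting are illustrative assumptions, not QEMU's actual API; the two mandatory queues are the ones Michael refers to above, and the extra "+ 1" is a config-change vector:

```c
#include <assert.h>
#include <stdint.h>

#define NVECTORS_UNSPECIFIED UINT32_MAX   /* sentinel: user did not set it */
#define MANDATORY_QUEUES 2                /* the mandatory queues discussed above */

/* When nvectors is left unspecified, derive it from the user-visible
 * request-queue count: one vector per virtqueue (requests + mandatory)
 * plus one for config changes. Otherwise honour the explicit value. */
static uint32_t resolve_nvectors(uint32_t nvectors, uint32_t num_request_queues)
{
    if (nvectors == NVECTORS_UNSPECIFIED) {
        return num_request_queues + MANDATORY_QUEUES + 1;
    }
    return nvectors;
}
```

This keeps the mandatory queues out of the user-facing property while still sizing the MSI-X vector table to cover them.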
> > +
> > + if (!is_power_of_2(fs->conf.queue_size)) {
> > + error_setg(errp, "queue-size property must be a power of 2");
> > + return;
> > + }
>
> Hmm packed ring allows non power of 2 ...
> We need to look into a generic helper to support VQ
> size checks.
Which would also have to include the negotiation of whether it's doing
packed ring?
<snip>
Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-21 19:11 ` Dr. David Alan Gilbert
@ 2019-08-22 8:52 ` Stefan Hajnoczi
2019-08-22 9:19 ` Cornelia Huck
2019-09-17 9:21 ` Dr. David Alan Gilbert
0 siblings, 2 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2019-08-22 8:52 UTC (permalink / raw)
To: Dr. David Alan Gilbert; +Cc: qemu-devel, vgoyal, Michael S. Tsirkin
On Wed, Aug 21, 2019 at 08:11:18PM +0100, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst@redhat.com) wrote:
> > On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > > +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > > +{
> > > + /* Do nothing */
> >
> > Why is this safe? Is this because this never triggers? assert(0) then?
> > If it triggers then backend won't be notified, which might
> > cause it to get stuck.
>
> We never process these queues in qemu - always in the guest; so am I
> correct in thinking those shouldn't be used?
s/guest/vhost-user backend process/
vuf_handle_output() should never be called.
> > > +}
> > > +
> > > +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > + bool mask)
> > > +{
> > > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > +
> > > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
> > > +}
> > > +
> > > +static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
> > > +{
> > > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > +
> > > + return vhost_virtqueue_pending(&fs->vhost_dev, idx);
> > > +}
> > > +
> > > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > > +{
> > > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > > + unsigned int i;
> > > + size_t len;
> > > + int ret;
> > > +
> > > + if (!fs->conf.chardev.chr) {
> > > + error_setg(errp, "missing chardev");
> > > + return;
> > > + }
> > > +
> > > + if (!fs->conf.tag) {
> > > + error_setg(errp, "missing tag property");
> > > + return;
> > > + }
> > > + len = strlen(fs->conf.tag);
> > > + if (len == 0) {
> > > + error_setg(errp, "tag property cannot be empty");
> > > + return;
> > > + }
> > > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > > + error_setg(errp, "tag property must be %zu bytes or less",
> > > + sizeof_field(struct virtio_fs_config, tag));
> > > + return;
> > > + }
> > > +
> > > + if (fs->conf.num_queues == 0) {
> > > + error_setg(errp, "num-queues property must be larger than 0");
> > > + return;
> > > + }
> >
> > The strange thing is that actual # of queues is this number + 2.
> > And this affects an optimal number of vectors (see patch 2).
> > Not sure what a good solution is - include the
> > mandatory queues in the number?
> > Needs to be documented in some way.
>
> Should we be doing nvectors the same way virtio-scsi-pci does it;
> with a magic 'unspecified' default where it sets the nvectors based on
> the number of queues?
>
> I think my preference is not to show the users the mandatory queues.
I agree. Users want to control multiqueue, not the absolute number
of virtqueues including mandatory queues.
> > > +
> > > + if (!is_power_of_2(fs->conf.queue_size)) {
> > > + error_setg(errp, "queue-size property must be a power of 2");
> > > + return;
> > > + }
> >
> > Hmm packed ring allows non power of 2 ...
> > We need to look into a generic helper to support VQ
> > size checks.
>
> Which would also have to include the negotiation of whether it's doing
> packed ring?
It's impossible to perform this check at .realize() time since the
packed virtqueue layout is negotiated via a VIRTIO feature bit. This
puts us in the awkward position of either failing when the guest has
already booted or rounding up the queue size for split ring layouts
(with a warning message?).
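The rounding-up option could look roughly like this; pow2ceil() mirrors QEMU's helper of the same name, but this standalone version and the packed_ring flag (standing in for the negotiated VIRTIO_F_RING_PACKED bit) are just an illustrative sketch:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Standalone stand-in for QEMU's pow2ceil() helper: smallest power of
 * two greater than or equal to value. */
static uint32_t pow2ceil(uint32_t value)
{
    uint32_t n = 1;

    while (n < value) {
        n <<= 1;
    }
    return n;
}

/* Split rings require a power-of-2 queue size; packed rings do not.
 * Round up (with a warning) rather than failing after the guest has
 * already booted. */
static uint32_t clamp_queue_size(uint32_t requested, int packed_ring)
{
    if (packed_ring || (requested & (requested - 1)) == 0) {
        return requested;
    }
    fprintf(stderr, "warning: rounding queue size %u up to %u for split ring\n",
            requested, pow2ceil(requested));
    return pow2ceil(requested);
}
```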
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-22 8:52 ` Stefan Hajnoczi
@ 2019-08-22 9:19 ` Cornelia Huck
2019-08-23 9:25 ` Stefan Hajnoczi
2019-09-17 9:21 ` Dr. David Alan Gilbert
1 sibling, 1 reply; 13+ messages in thread
From: Cornelia Huck @ 2019-08-22 9:19 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Michael S. Tsirkin, Dr. David Alan Gilbert, vgoyal, qemu-devel
On Thu, 22 Aug 2019 09:52:37 +0100
Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Wed, Aug 21, 2019 at 08:11:18PM +0100, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (mst@redhat.com) wrote:
> > > On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > > > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > > > +{
> > > > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > > > + unsigned int i;
> > > > + size_t len;
> > > > + int ret;
> > > > +
> > > > + if (!fs->conf.chardev.chr) {
> > > > + error_setg(errp, "missing chardev");
> > > > + return;
> > > > + }
> > > > +
> > > > + if (!fs->conf.tag) {
> > > > + error_setg(errp, "missing tag property");
> > > > + return;
> > > > + }
> > > > + len = strlen(fs->conf.tag);
> > > > + if (len == 0) {
> > > > + error_setg(errp, "tag property cannot be empty");
> > > > + return;
> > > > + }
> > > > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > > > + error_setg(errp, "tag property must be %zu bytes or less",
> > > > + sizeof_field(struct virtio_fs_config, tag));
> > > > + return;
> > > > + }
> > > > +
> > > > + if (fs->conf.num_queues == 0) {
> > > > + error_setg(errp, "num-queues property must be larger than 0");
> > > > + return;
> > > > + }
> > >
> > > The strange thing is that actual # of queues is this number + 2.
> > > And this affects an optimal number of vectors (see patch 2).
> > > Not sure what a good solution is - include the
> > > mandatory queues in the number?
> > > Needs to be documented in some way.
> >
> > Should we be doing nvectors the same way virtio-scsi-pci does it;
> > with a magic 'unspecified' default where it sets the nvectors based on
> > the number of queues?
> >
> > I think my preference is not to show the users the mandatory queues.
>
> I agree. Users want to control multiqueue, not the absolute number
> of virtqueues including mandatory queues.
I agree as well, but let me advocate again for renaming this to
'num_request_queues' or similar to make it more obvious what this number
actually means.
>
> > > > +
> > > > + if (!is_power_of_2(fs->conf.queue_size)) {
> > > > + error_setg(errp, "queue-size property must be a power of 2");
> > > > + return;
> > > > + }
> > >
> > > Hmm packed ring allows non power of 2 ...
> > > We need to look into a generic helper to support VQ
> > > size checks.
> >
> > Which would also have to include the negotiation of where it's doing
> > packaged ring?
>
> It's impossible to perform this check at .realize() time since the
> packed virtqueue layout is negotiated via a VIRTIO feature bit. This
> puts us in the awkward position of either failing when the guest has
> already booted or rounding up the queue size for split ring layouts
> (with a warning message?).
I fear that is always going to be awkward if you allow to specify the
queue size via a property. Basically, you can do two things: fail to
accept FEATURES_OK if the queue size is not a power of 2 and the guest
did not negotiate packed ring, or disallow to set a non power of 2
value here, which is what the other devices with such a property
currently do (see also my other mail). It would probably be good if all
devices used the same approach (once we introduce packed ring support).
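The first option — refusing FEATURES_OK once the ring layout is known — could be sketched as below. VIRTIO_F_RING_PACKED is bit 34 per the virtio 1.1 spec; everything else here is an illustrative assumption, not QEMU's actual transport code:

```c
#include <assert.h>
#include <stdint.h>

#define VIRTIO_F_RING_PACKED_BIT 34   /* feature bit 34 in the virtio 1.1 spec */

/* Validate the queue size only after feature negotiation: if the guest
 * did not negotiate packed ring, a split ring requires a power-of-2
 * size. Returning -1 models the device rejecting FEATURES_OK. */
static int check_features_ok(uint64_t guest_features, uint32_t queue_size)
{
    int packed = (guest_features >> VIRTIO_F_RING_PACKED_BIT) & 1;
    int is_pow2 = queue_size != 0 && (queue_size & (queue_size - 1)) == 0;

    if (!packed && !is_pow2) {
        return -1;
    }
    return 0;
}
```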
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-22 9:19 ` Cornelia Huck
@ 2019-08-23 9:25 ` Stefan Hajnoczi
0 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2019-08-23 9:25 UTC (permalink / raw)
To: Cornelia Huck
Cc: qemu-devel, vgoyal, Dr. David Alan Gilbert, Stefan Hajnoczi,
Michael S. Tsirkin
On Thu, Aug 22, 2019 at 11:19:16AM +0200, Cornelia Huck wrote:
> On Thu, 22 Aug 2019 09:52:37 +0100
> Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> > On Wed, Aug 21, 2019 at 08:11:18PM +0100, Dr. David Alan Gilbert wrote:
> > > * Michael S. Tsirkin (mst@redhat.com) wrote:
> > > > On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
>
> > > > > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > > > > +{
> > > > > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > > > > + unsigned int i;
> > > > > + size_t len;
> > > > > + int ret;
> > > > > +
> > > > > + if (!fs->conf.chardev.chr) {
> > > > > + error_setg(errp, "missing chardev");
> > > > > + return;
> > > > > + }
> > > > > +
> > > > > + if (!fs->conf.tag) {
> > > > > + error_setg(errp, "missing tag property");
> > > > > + return;
> > > > > + }
> > > > > + len = strlen(fs->conf.tag);
> > > > > + if (len == 0) {
> > > > > + error_setg(errp, "tag property cannot be empty");
> > > > > + return;
> > > > > + }
> > > > > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > > > > + error_setg(errp, "tag property must be %zu bytes or less",
> > > > > + sizeof_field(struct virtio_fs_config, tag));
> > > > > + return;
> > > > > + }
> > > > > +
> > > > > + if (fs->conf.num_queues == 0) {
> > > > > + error_setg(errp, "num-queues property must be larger than 0");
> > > > > + return;
> > > > > + }
> > > >
> > > > The strange thing is that actual # of queues is this number + 2.
> > > > And this affects an optimal number of vectors (see patch 2).
> > > > Not sure what a good solution is - include the
> > > > mandatory queues in the number?
> > > > Needs to be documented in some way.
> > >
> > > Should we be doing nvectors the same way virtio-scsi-pci does it;
> > > with a magic 'unspecified' default where it sets the nvectors based on
> > > the number of queues?
> > >
> > > I think my preference is not to show the users the mandatory queues.
> >
> > I agree. Users want to control multiqueue, not the absolute number
> > of virtqueues including mandatory queues.
>
> I agree as well, but let me advocate again for renaming this to
> 'num_request_queues' or similar to make it more obvious what this number
> actually means.
Good idea.
* Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
2019-08-22 8:52 ` Stefan Hajnoczi
2019-08-22 9:19 ` Cornelia Huck
@ 2019-09-17 9:21 ` Dr. David Alan Gilbert
1 sibling, 0 replies; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2019-09-17 9:21 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: qemu-devel, vgoyal, Michael S. Tsirkin
* Stefan Hajnoczi (stefanha@redhat.com) wrote:
> On Wed, Aug 21, 2019 at 08:11:18PM +0100, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (mst@redhat.com) wrote:
> > > On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > > > +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > > > +{
> > > > + /* Do nothing */
> > >
> > > Why is this safe? Is this because this never triggers? assert(0) then?
> > > If it triggers then backend won't be notified, which might
> > > cause it to get stuck.
> >
> > We never process these queues in qemu - always in the guest; so am I
> > correct in thinking those shouldn't be used?
>
> s/guest/vhost-user backend process/
>
> vuf_handle_output() should never be called.
It turns out it does get called in one case during cleanup: in the case
where the daemon died before qemu, virtio_bus_cleanup_host_notifier goes
around the notifiers and calls all the ones where there's anything left
in the eventfd.
Dave
> > > > +}
> > > > +
> > > > +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > > + bool mask)
> > > > +{
> > > > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > > +
> > > > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
> > > > +}
> > > > +
> > > > +static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
> > > > +{
> > > > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > > +
> > > > + return vhost_virtqueue_pending(&fs->vhost_dev, idx);
> > > > +}
> > > > +
> > > > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > > > +{
> > > > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > > > + unsigned int i;
> > > > + size_t len;
> > > > + int ret;
> > > > +
> > > > + if (!fs->conf.chardev.chr) {
> > > > + error_setg(errp, "missing chardev");
> > > > + return;
> > > > + }
> > > > +
> > > > + if (!fs->conf.tag) {
> > > > + error_setg(errp, "missing tag property");
> > > > + return;
> > > > + }
> > > > + len = strlen(fs->conf.tag);
> > > > + if (len == 0) {
> > > > + error_setg(errp, "tag property cannot be empty");
> > > > + return;
> > > > + }
> > > > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > > > + error_setg(errp, "tag property must be %zu bytes or less",
> > > > + sizeof_field(struct virtio_fs_config, tag));
> > > > + return;
> > > > + }
> > > > +
> > > > + if (fs->conf.num_queues == 0) {
> > > > + error_setg(errp, "num-queues property must be larger than 0");
> > > > + return;
> > > > + }
> > >
> > > The strange thing is that actual # of queues is this number + 2.
> > > And this affects an optimal number of vectors (see patch 2).
> > > Not sure what a good solution is - include the
> > > mandatory queues in the number?
> > > Needs to be documented in some way.
> >
> > Should we be doing nvectors the same way virtio-scsi-pci does it;
> > with a magic 'unspecified' default where it sets the nvectors based on
> > the number of queues?
> >
> > I think my preference is not to show the users the mandatory queues.
>
> I agree. Users want to control multiqueue, not the absolute number
> of virtqueues including mandatory queues.
>
> > > > +
> > > > + if (!is_power_of_2(fs->conf.queue_size)) {
> > > > + error_setg(errp, "queue-size property must be a power of 2");
> > > > + return;
> > > > + }
> > >
> > > Hmm packed ring allows non power of 2 ...
> > > We need to look into a generic helper to support VQ
> > > size checks.
> >
> > Which would also have to include the negotiation of whether it's doing
> > packed ring?
>
> It's impossible to perform this check at .realize() time since the
> packed virtqueue layout is negotiated via a VIRTIO feature bit. This
> puts us in the awkward position of either failing when the guest has
> already booted or rounding up the queue size for split ring layouts
> (with a warning message?).
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
end of thread, other threads:[~2019-09-17 9:29 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-16 14:33 [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental) Dr. David Alan Gilbert (git)
2019-08-16 14:33 ` [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device Dr. David Alan Gilbert (git)
2019-08-18 11:08 ` Michael S. Tsirkin
2019-08-20 12:24 ` Cornelia Huck
2019-08-20 13:39 ` Dr. David Alan Gilbert
2019-08-21 17:52 ` Dr. David Alan Gilbert
2019-08-21 19:11 ` Dr. David Alan Gilbert
2019-08-22 8:52 ` Stefan Hajnoczi
2019-08-22 9:19 ` Cornelia Huck
2019-08-23 9:25 ` Stefan Hajnoczi
2019-09-17 9:21 ` Dr. David Alan Gilbert
2019-08-16 14:33 ` [Qemu-devel] [PATCH 2/2] virtio: add vhost-user-fs-pci device Dr. David Alan Gilbert (git)
2019-08-16 18:38 ` [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental) no-reply