* [PATCH 0/2] Enable virtio-fs on s390x
@ 2020-07-30 14:07 Marc Hartmayer
  2020-07-30 14:07 ` [PATCH 1/2] virtio: add vhost-user-fs-ccw device Marc Hartmayer
  ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread

From: Marc Hartmayer @ 2020-07-30 14:07 UTC (permalink / raw)
To: qemu-devel
Cc: Daniel P. Berrangé, Michael S. Tsirkin, Cornelia Huck,
    Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi,
    Marc-André Lureau

This patch series is about enabling virtio-fs on s390x. For that we need
+ some shim code (first patch), and we need
+ libvhost-user to deal with virtio endianness for non-legacy virtio
  devices as mandated by the spec.

How to use?

For general instructions on how to use virtio-fs (on x86) please have a
look at https://virtio-fs.gitlab.io/howto-qemu.html. Most of the
instructions can also be applied on s390x.

In short:

1. Install self-compiled QEMU with this patch series applied
2. Prepare host and guest kernel so they support virtio-fs

Start virtiofsd on the host

$ virtiofsd -f --socket-path=/tmp/vhostqemu -o source=/tmp/shared

Now you can start QEMU in a separate shell on the host:

$ qemu-system-s390x -machine type=s390-ccw-virtio,accel=kvm,memory-backend=mem \
   -object memory-backend-file,id=mem,size=2G,mem-path=/dev/shm/virtiofs,share=on,prealloc=on,prealloc-threads=1 \
   -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-ccw,queue-size=1024,chardev=char0,tag=myfs \
   -drive if=virtio,file=disk.qcow2 \
   -m 2G -smp 2 -nographic

Log into the guest and mount it

$ mount -t virtiofs myfs /mnt

Changelog:

RFC v2 -> v1:
 - patch 1:
   + Added `force_revision_1 = true` (Conny)
   + I didn't add the r-b from Stefan Hajnoczi as I've added the
     changes suggested by Conny
 - squashed patches 2 and 3:
   + removed assertion in performance critical code path (Stefan)
   + dropped all dead code (Stefan)
   + removed libvhost-access.h

RFC v1 -> RFC v2:
 + rebased
 + drop patch "libvhost-user: print invalid address on vu_panic" as
   it's not related to this series
 + drop
   patch "[RFC 4/4] HACK: Hard-code the libvhost-user.o-cflags for s390x"
 + patch "virtio: add vhost-user-fs-ccw device": replace
   qdev_set_parent_bus and object_property_set_bool by qdev_realize
 + patch "libvhost-user: handle endianness as mandated by the spec":
   Drop support for legacy virtio devices
 + add patch to fence legacy virtio devices

Halil Pasic (1):
  virtio: add vhost-user-fs-ccw device

Marc Hartmayer (1):
  libvhost-user: handle endianness as mandated by the spec

 contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------
 hw/s390x/Makefile.objs                |  1 +
 hw/s390x/vhost-user-fs-ccw.c          | 75 ++++++++++++++++++++++++++
 3 files changed, 119 insertions(+), 34 deletions(-)
 create mode 100644 hw/s390x/vhost-user-fs-ccw.c

-- 
2.25.4

^ permalink raw reply	[flat|nested] 12+ messages in thread
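The endianness problem the series addresses can be made concrete with a few lines of plain C. This is an illustrative sketch, not code from the series: VIRTIO 1.0 defines virtqueue fields as little endian, so a big-endian host such as s390x cannot simply store native integers into them.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative only -- not part of the series. Shows why a big-endian
 * host (e.g. s390x) cannot write native integers into VIRTIO 1.0
 * virtqueue fields, which the spec defines as little endian. */
static bool host_is_little_endian(void)
{
    uint16_t probe = 0x0102;
    return *(const uint8_t *)&probe == 0x02;
}

/* Store a 16-bit value in the little-endian layout the spec requires,
 * independent of host endianness. */
static void store_le16(uint8_t *dst, uint16_t v)
{
    dst[0] = (uint8_t)(v & 0xff);
    dst[1] = (uint8_t)(v >> 8);
}
```

On a little-endian host the native layout happens to match the required one, which is exactly why the missing conversions went unnoticed until the library was run on a big-endian target.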
* [PATCH 1/2] virtio: add vhost-user-fs-ccw device 2020-07-30 14:07 [PATCH 0/2] Enable virtio-fs on s390x Marc Hartmayer @ 2020-07-30 14:07 ` Marc Hartmayer 2020-07-30 15:19 ` Cornelia Huck 2020-07-30 14:07 ` [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec Marc Hartmayer ` (2 subsequent siblings) 3 siblings, 1 reply; 12+ messages in thread From: Marc Hartmayer @ 2020-07-30 14:07 UTC (permalink / raw) To: qemu-devel Cc: Daniel P. Berrangé, Michael S. Tsirkin, Cornelia Huck, Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi, Marc-André Lureau From: Halil Pasic <pasic@linux.ibm.com> Wire up the CCW device for vhost-user-fs. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> --- hw/s390x/Makefile.objs | 1 + hw/s390x/vhost-user-fs-ccw.c | 75 ++++++++++++++++++++++++++++++++++++ 2 files changed, 76 insertions(+) create mode 100644 hw/s390x/vhost-user-fs-ccw.c diff --git a/hw/s390x/Makefile.objs b/hw/s390x/Makefile.objs index a46a1c7894e0..c4086ec3171e 100644 --- a/hw/s390x/Makefile.objs +++ b/hw/s390x/Makefile.objs @@ -20,6 +20,7 @@ obj-$(CONFIG_VIRTIO_NET) += virtio-ccw-net.o obj-$(CONFIG_VIRTIO_BLK) += virtio-ccw-blk.o obj-$(call land,$(CONFIG_VIRTIO_9P),$(CONFIG_VIRTFS)) += virtio-ccw-9p.o obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock-ccw.o +obj-$(CONFIG_VHOST_USER_FS) += vhost-user-fs-ccw.o endif obj-y += css-bridge.o obj-y += ccw-device.o diff --git a/hw/s390x/vhost-user-fs-ccw.c b/hw/s390x/vhost-user-fs-ccw.c new file mode 100644 index 000000000000..6c6f26929301 --- /dev/null +++ b/hw/s390x/vhost-user-fs-ccw.c @@ -0,0 +1,75 @@ +/* + * virtio ccw vhost-user-fs implementation + * + * Copyright 2020 IBM Corp. + * + * This work is licensed under the terms of the GNU GPL, version 2 or (at + * your option) any later version. See the COPYING file in the top-level + * directory. 
+ */ +#include "qemu/osdep.h" +#include "hw/qdev-properties.h" +#include "qapi/error.h" +#include "hw/virtio/vhost-user-fs.h" +#include "virtio-ccw.h" + +typedef struct VHostUserFSCcw { + VirtioCcwDevice parent_obj; + VHostUserFS vdev; +} VHostUserFSCcw; + +#define TYPE_VHOST_USER_FS_CCW "vhost-user-fs-ccw" +#define VHOST_USER_FS_CCW(obj) \ + OBJECT_CHECK(VHostUserFSCcw, (obj), TYPE_VHOST_USER_FS_CCW) + + +static Property vhost_user_fs_ccw_properties[] = { + DEFINE_PROP_BIT("ioeventfd", VirtioCcwDevice, flags, + VIRTIO_CCW_FLAG_USE_IOEVENTFD_BIT, true), + DEFINE_PROP_UINT32("max_revision", VirtioCcwDevice, max_rev, + VIRTIO_CCW_MAX_REV), + DEFINE_PROP_END_OF_LIST(), +}; + +static void vhost_user_fs_ccw_realize(VirtioCcwDevice *ccw_dev, Error **errp) +{ + VHostUserFSCcw *dev = VHOST_USER_FS_CCW(ccw_dev); + DeviceState *vdev = DEVICE(&dev->vdev); + + qdev_realize(vdev, BUS(&ccw_dev->bus), errp); +} + +static void vhost_user_fs_ccw_instance_init(Object *obj) +{ + VHostUserFSCcw *dev = VHOST_USER_FS_CCW(obj); + VirtioCcwDevice *ccw_dev = VIRTIO_CCW_DEVICE(obj); + + ccw_dev->force_revision_1 = true; + virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev), + TYPE_VHOST_USER_FS); +} + +static void vhost_user_fs_ccw_class_init(ObjectClass *klass, void *data) +{ + DeviceClass *dc = DEVICE_CLASS(klass); + VirtIOCCWDeviceClass *k = VIRTIO_CCW_DEVICE_CLASS(klass); + + k->realize = vhost_user_fs_ccw_realize; + device_class_set_props(dc, vhost_user_fs_ccw_properties); + set_bit(DEVICE_CATEGORY_STORAGE, dc->categories); +} + +static const TypeInfo vhost_user_fs_ccw = { + .name = TYPE_VHOST_USER_FS_CCW, + .parent = TYPE_VIRTIO_CCW_DEVICE, + .instance_size = sizeof(VHostUserFSCcw), + .instance_init = vhost_user_fs_ccw_instance_init, + .class_init = vhost_user_fs_ccw_class_init, +}; + +static void vhost_user_fs_ccw_register(void) +{ + type_register_static(&vhost_user_fs_ccw); +} + +type_init(vhost_user_fs_ccw_register) -- 2.25.4 ^ permalink raw reply related [flat|nested] 
12+ messages in thread
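The new file above is mostly QEMU QOM boilerplate: a static `TypeInfo` record binds the type name to its `instance_init`/`class_init` callbacks and is registered at startup via `type_init()`. Stripped of QEMU headers, the registration pattern can be sketched roughly as follows — `TypeInfo`, `register_type()` and `find_type()` here are simplified stand-ins, not QEMU's real API:

```c
#include <stddef.h>
#include <string.h>

/* Simplified sketch of the type-registration pattern vhost-user-fs-ccw.c
 * follows. These are stand-ins for QEMU's QOM machinery, not its API. */
typedef struct TypeInfo {
    const char *name;
    const char *parent;
    void (*instance_init)(void *obj); /* per-instance setup callback */
} TypeInfo;

#define MAX_TYPES 16
static const TypeInfo *type_table[MAX_TYPES];
static size_t type_count;

/* Analogous to type_register_static(): record the type for later lookup. */
static void register_type(const TypeInfo *info)
{
    if (type_count < MAX_TYPES) {
        type_table[type_count++] = info;
    }
}

/* Look a registered type up by name. */
static const TypeInfo *find_type(const char *name)
{
    for (size_t i = 0; i < type_count; i++) {
        if (strcmp(type_table[i]->name, name) == 0) {
            return type_table[i];
        }
    }
    return NULL;
}

static const TypeInfo vhost_user_fs_ccw = {
    .name = "vhost-user-fs-ccw",
    .parent = "virtio-ccw-device",
    .instance_init = NULL, /* the real one sets force_revision_1 = true */
};
```

In the real patch the interesting bit is not the boilerplate but `instance_init` setting `force_revision_1 = true`, which ties in with patch 2's decision to support only virtio-1 devices.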
* Re: [PATCH 1/2] virtio: add vhost-user-fs-ccw device 2020-07-30 14:07 ` [PATCH 1/2] virtio: add vhost-user-fs-ccw device Marc Hartmayer @ 2020-07-30 15:19 ` Cornelia Huck 0 siblings, 0 replies; 12+ messages in thread From: Cornelia Huck @ 2020-07-30 15:19 UTC (permalink / raw) To: Marc Hartmayer Cc: Daniel P. Berrangé, Michael S. Tsirkin, qemu-devel, Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi, Marc-André Lureau On Thu, 30 Jul 2020 16:07:30 +0200 Marc Hartmayer <mhartmay@linux.ibm.com> wrote: > From: Halil Pasic <pasic@linux.ibm.com> > > Wire up the CCW device for vhost-user-fs. > > Signed-off-by: Halil Pasic <pasic@linux.ibm.com> > --- > hw/s390x/Makefile.objs | 1 + > hw/s390x/vhost-user-fs-ccw.c | 75 ++++++++++++++++++++++++++++++++++++ > 2 files changed, 76 insertions(+) > create mode 100644 hw/s390x/vhost-user-fs-ccw.c Reviewed-by: Cornelia Huck <cohuck@redhat.com> ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec
  2020-07-30 14:07 [PATCH 0/2] Enable virtio-fs on s390x Marc Hartmayer
  2020-07-30 14:07 ` [PATCH 1/2] virtio: add vhost-user-fs-ccw device Marc Hartmayer
@ 2020-07-30 14:07 ` Marc Hartmayer
  2020-08-02 5:13   ` Michael S. Tsirkin
  2020-08-03 9:26   ` Cornelia Huck
  2020-08-02 5:13 ` [PATCH 0/2] Enable virtio-fs on s390x Michael S. Tsirkin
  2020-08-27 12:19 ` Michael S. Tsirkin
  3 siblings, 2 replies; 12+ messages in thread

From: Marc Hartmayer @ 2020-07-30 14:07 UTC (permalink / raw)
To: qemu-devel
Cc: Daniel P. Berrangé, Michael S. Tsirkin, Cornelia Huck,
    Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi,
    Marc-André Lureau

Since virtio existed even before it got standardized, the virtio
standard defines the following types of virtio devices:

+ legacy device (pre-virtio 1.0)
+ non-legacy or VIRTIO 1.0 device
+ transitional device (which can act both as legacy and non-legacy)

Virtio 1.0 defines the fields of the virtqueues as little endian, while
legacy uses the guest's native endianness [1]. Currently libvhost-user
does not handle virtio endianness at all, i.e. it works only if the
native endianness matches whatever is actually needed. That means things
break spectacularly on big-endian targets. Let us handle virtio
endianness for non-legacy as required by the virtio specification [1].
The fencing of legacy virtio devices is done in `vu_set_features_exec`.
[1] https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-210003 Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com> --- contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------ 1 file changed, 43 insertions(+), 34 deletions(-) diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c index 53f16bdf082c..e2238a0400c9 100644 --- a/contrib/libvhost-user/libvhost-user.c +++ b/contrib/libvhost-user/libvhost-user.c @@ -42,6 +42,7 @@ #include "qemu/atomic.h" #include "qemu/osdep.h" +#include "qemu/bswap.h" #include "qemu/memfd.h" #include "libvhost-user.h" @@ -539,6 +540,14 @@ vu_set_features_exec(VuDev *dev, VhostUserMsg *vmsg) DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64); dev->features = vmsg->payload.u64; + if (!vu_has_feature(dev, VIRTIO_F_VERSION_1)) { + /* + * We only support devices conforming to VIRTIO 1.0 or + * later + */ + vu_panic(dev, "virtio legacy devices aren't supported by libvhost-user"); + return false; + } if (!(dev->features & VHOST_USER_F_PROTOCOL_FEATURES)) { vu_set_enable_all_rings(dev, true); @@ -1074,7 +1083,7 @@ vu_set_vring_addr_exec(VuDev *dev, VhostUserMsg *vmsg) return false; } - vq->used_idx = vq->vring.used->idx; + vq->used_idx = lduw_le_p(&vq->vring.used->idx); if (vq->last_avail_idx != vq->used_idx) { bool resume = dev->iface->queue_is_processed_in_order && @@ -1191,7 +1200,7 @@ vu_check_queue_inflights(VuDev *dev, VuVirtq *vq) return 0; } - vq->used_idx = vq->vring.used->idx; + vq->used_idx = lduw_le_p(&vq->vring.used->idx); vq->resubmit_num = 0; vq->resubmit_list = NULL; vq->counter = 0; @@ -2021,13 +2030,13 @@ vu_queue_started(const VuDev *dev, const VuVirtq *vq) static inline uint16_t vring_avail_flags(VuVirtq *vq) { - return vq->vring.avail->flags; + return lduw_le_p(&vq->vring.avail->flags); } static inline uint16_t vring_avail_idx(VuVirtq *vq) { - vq->shadow_avail_idx = vq->vring.avail->idx; + vq->shadow_avail_idx = lduw_le_p(&vq->vring.avail->idx); 
return vq->shadow_avail_idx; } @@ -2035,7 +2044,7 @@ vring_avail_idx(VuVirtq *vq) static inline uint16_t vring_avail_ring(VuVirtq *vq, int i) { - return vq->vring.avail->ring[i]; + return lduw_le_p(&vq->vring.avail->ring[i]); } static inline uint16_t @@ -2123,12 +2132,12 @@ virtqueue_read_next_desc(VuDev *dev, struct vring_desc *desc, int i, unsigned int max, unsigned int *next) { /* If this descriptor says it doesn't chain, we're done. */ - if (!(desc[i].flags & VRING_DESC_F_NEXT)) { + if (!(lduw_le_p(&desc[i].flags) & VRING_DESC_F_NEXT)) { return VIRTQUEUE_READ_DESC_DONE; } /* Check they're not leading us off end of descriptors. */ - *next = desc[i].next; + *next = lduw_le_p(&desc[i].next); /* Make sure compiler knows to grab that: we don't want it changing! */ smp_wmb(); @@ -2171,8 +2180,8 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes, } desc = vq->vring.desc; - if (desc[i].flags & VRING_DESC_F_INDIRECT) { - if (desc[i].len % sizeof(struct vring_desc)) { + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_INDIRECT) { + if (ldl_le_p(&desc[i].len) % sizeof(struct vring_desc)) { vu_panic(dev, "Invalid size for indirect buffer table"); goto err; } @@ -2185,8 +2194,8 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes, /* loop over the indirect descriptor table */ indirect = 1; - desc_addr = desc[i].addr; - desc_len = desc[i].len; + desc_addr = ldq_le_p(&desc[i].addr); + desc_len = ldl_le_p(&desc[i].len); max = desc_len / sizeof(struct vring_desc); read_len = desc_len; desc = vu_gpa_to_va(dev, &read_len, desc_addr); @@ -2213,10 +2222,10 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes, goto err; } - if (desc[i].flags & VRING_DESC_F_WRITE) { - in_total += desc[i].len; + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_WRITE) { + in_total += ldl_le_p(&desc[i].len); } else { - out_total += desc[i].len; + out_total += ldl_le_p(&desc[i].len); } if (in_total >= max_in_bytes && out_total >= 
max_out_bytes) { goto done; @@ -2367,7 +2376,7 @@ vring_used_flags_set_bit(VuVirtq *vq, int mask) flags = (uint16_t *)((char*)vq->vring.used + offsetof(struct vring_used, flags)); - *flags |= mask; + stw_le_p(flags, lduw_le_p(flags) | mask); } static inline void @@ -2377,7 +2386,7 @@ vring_used_flags_unset_bit(VuVirtq *vq, int mask) flags = (uint16_t *)((char*)vq->vring.used + offsetof(struct vring_used, flags)); - *flags &= ~mask; + stw_le_p(flags, lduw_le_p(flags) & ~mask); } static inline void @@ -2387,7 +2396,7 @@ vring_set_avail_event(VuVirtq *vq, uint16_t val) return; } - *((uint16_t *) &vq->vring.used->ring[vq->vring.num]) = val; + stw_le_p(&vq->vring.used->ring[vq->vring.num], val); } void @@ -2476,14 +2485,14 @@ vu_queue_map_desc(VuDev *dev, VuVirtq *vq, unsigned int idx, size_t sz) struct vring_desc desc_buf[VIRTQUEUE_MAX_SIZE]; int rc; - if (desc[i].flags & VRING_DESC_F_INDIRECT) { - if (desc[i].len % sizeof(struct vring_desc)) { + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_INDIRECT) { + if (ldl_le_p(&desc[i].len) % sizeof(struct vring_desc)) { vu_panic(dev, "Invalid size for indirect buffer table"); } /* loop over the indirect descriptor table */ - desc_addr = desc[i].addr; - desc_len = desc[i].len; + desc_addr = ldq_le_p(&desc[i].addr); + desc_len = ldl_le_p(&desc[i].len); max = desc_len / sizeof(struct vring_desc); read_len = desc_len; desc = vu_gpa_to_va(dev, &read_len, desc_addr); @@ -2505,10 +2514,10 @@ vu_queue_map_desc(VuDev *dev, VuVirtq *vq, unsigned int idx, size_t sz) /* Collect all the descriptors */ do { - if (desc[i].flags & VRING_DESC_F_WRITE) { + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_WRITE) { virtqueue_map_desc(dev, &in_num, iov + out_num, VIRTQUEUE_MAX_SIZE - out_num, true, - desc[i].addr, desc[i].len); + ldq_le_p(&desc[i].addr), ldl_le_p(&desc[i].len)); } else { if (in_num) { vu_panic(dev, "Incorrect order for descriptors"); @@ -2516,7 +2525,7 @@ vu_queue_map_desc(VuDev *dev, VuVirtq *vq, unsigned int idx, size_t sz) } 
virtqueue_map_desc(dev, &out_num, iov, VIRTQUEUE_MAX_SIZE, false, - desc[i].addr, desc[i].len); + ldq_le_p(&desc[i].addr), ldl_le_p(&desc[i].len)); } /* If we've got too many, that implies a descriptor loop. */ @@ -2712,14 +2721,14 @@ vu_log_queue_fill(VuDev *dev, VuVirtq *vq, max = vq->vring.num; i = elem->index; - if (desc[i].flags & VRING_DESC_F_INDIRECT) { - if (desc[i].len % sizeof(struct vring_desc)) { + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_INDIRECT) { + if (ldl_le_p(&desc[i].len) % sizeof(struct vring_desc)) { vu_panic(dev, "Invalid size for indirect buffer table"); } /* loop over the indirect descriptor table */ - desc_addr = desc[i].addr; - desc_len = desc[i].len; + desc_addr = ldq_le_p(&desc[i].addr); + desc_len = ldl_le_p(&desc[i].len); max = desc_len / sizeof(struct vring_desc); read_len = desc_len; desc = vu_gpa_to_va(dev, &read_len, desc_addr); @@ -2745,9 +2754,9 @@ vu_log_queue_fill(VuDev *dev, VuVirtq *vq, return; } - if (desc[i].flags & VRING_DESC_F_WRITE) { - min = MIN(desc[i].len, len); - vu_log_write(dev, desc[i].addr, min); + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_WRITE) { + min = MIN(ldl_le_p(&desc[i].len), len); + vu_log_write(dev, ldq_le_p(&desc[i].addr), min); len -= min; } @@ -2772,15 +2781,15 @@ vu_queue_fill(VuDev *dev, VuVirtq *vq, idx = (idx + vq->used_idx) % vq->vring.num; - uelem.id = elem->index; - uelem.len = len; + stl_le_p(&uelem.id, elem->index); + stl_le_p(&uelem.len, len); vring_used_write(dev, vq, &uelem, idx); } static inline void vring_used_idx_set(VuDev *dev, VuVirtq *vq, uint16_t val) { - vq->vring.used->idx = val; + stw_le_p(&vq->vring.used->idx, val); vu_log_write(dev, vq->vring.log_guest_addr + offsetof(struct vring_used, idx), sizeof(vq->vring.used->idx)); -- 2.25.4 ^ permalink raw reply related [flat|nested] 12+ messages in thread
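The discipline the patch applies throughout — load through a little-endian accessor, operate in host byte order, store back through a little-endian accessor — shows up in miniature in the used-ring flags update (`vring_used_flags_set_bit`). A portable sketch, with byte-wise helpers standing in for QEMU's `lduw_le_p()`/`stw_le_p()`:

```c
#include <stdint.h>

/* Portable stand-ins for QEMU's lduw_le_p()/stw_le_p(): 16-bit
 * little-endian load and store, independent of host endianness. */
static uint16_t le16_load(const uint8_t *b)
{
    return (uint16_t)(b[0] | (b[1] << 8));
}

static void le16_store(uint8_t *b, uint16_t v)
{
    b[0] = (uint8_t)(v & 0xff);
    b[1] = (uint8_t)(v >> 8);
}

/* Sketch of vring_used_flags_set_bit() after the patch: the flags word
 * lives in guest memory in little-endian order, so it is loaded,
 * modified in host order, and stored back as little endian. */
static void used_flags_set_bit(uint8_t *flags_le, uint16_t mask)
{
    le16_store(flags_le, le16_load(flags_le) | mask);
}
```

On a little-endian host the conversions compile away, which is why this change is free on x86 while fixing s390x.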
* Re: [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec 2020-07-30 14:07 ` [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec Marc Hartmayer @ 2020-08-02 5:13 ` Michael S. Tsirkin 2020-08-03 14:17 ` Marc Hartmayer 2020-08-03 9:26 ` Cornelia Huck 1 sibling, 1 reply; 12+ messages in thread From: Michael S. Tsirkin @ 2020-08-02 5:13 UTC (permalink / raw) To: Marc Hartmayer Cc: Daniel P. Berrangé, Cornelia Huck, qemu-devel, Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi, Marc-André Lureau On Thu, Jul 30, 2020 at 04:07:31PM +0200, Marc Hartmayer wrote: > Since virtio existed even before it got standardized, the virtio > standard defines the following types of virtio devices: > > + legacy device (pre-virtio 1.0) > + non-legacy or VIRTIO 1.0 device > + transitional device (which can act both as legacy and non-legacy) > > Virtio 1.0 defines the fields of the virtqueues as little endian, > while legacy uses guest's native endian [1]. Currently libvhost-user > does not handle virtio endianness at all, i.e. it works only if the > native endianness matches with whatever is actually needed. That means > things break spectacularly on big-endian targets. Let us handle virtio > endianness for non-legacy as required by the virtio specification > [1]. The fencing of legacy virtio devices is done in > `vu_set_features_exec`. > > [1] https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-210003 > > Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com> Reviewed-by: Michael S. 
Tsirkin <mst@redhat.com> > --- > contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------ > 1 file changed, 43 insertions(+), 34 deletions(-) > > diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c > index 53f16bdf082c..e2238a0400c9 100644 > --- a/contrib/libvhost-user/libvhost-user.c > +++ b/contrib/libvhost-user/libvhost-user.c > @@ -42,6 +42,7 @@ > > #include "qemu/atomic.h" > #include "qemu/osdep.h" > +#include "qemu/bswap.h" > #include "qemu/memfd.h" > > #include "libvhost-user.h" > @@ -539,6 +540,14 @@ vu_set_features_exec(VuDev *dev, VhostUserMsg *vmsg) > DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64); > > dev->features = vmsg->payload.u64; > + if (!vu_has_feature(dev, VIRTIO_F_VERSION_1)) { > + /* > + * We only support devices conforming to VIRTIO 1.0 or > + * later > + */ > + vu_panic(dev, "virtio legacy devices aren't supported by libvhost-user"); > + return false; > + } > > if (!(dev->features & VHOST_USER_F_PROTOCOL_FEATURES)) { > vu_set_enable_all_rings(dev, true); > @@ -1074,7 +1083,7 @@ vu_set_vring_addr_exec(VuDev *dev, VhostUserMsg *vmsg) > return false; > } > > - vq->used_idx = vq->vring.used->idx; > + vq->used_idx = lduw_le_p(&vq->vring.used->idx); > > if (vq->last_avail_idx != vq->used_idx) { > bool resume = dev->iface->queue_is_processed_in_order && > @@ -1191,7 +1200,7 @@ vu_check_queue_inflights(VuDev *dev, VuVirtq *vq) > return 0; > } > > - vq->used_idx = vq->vring.used->idx; > + vq->used_idx = lduw_le_p(&vq->vring.used->idx); > vq->resubmit_num = 0; > vq->resubmit_list = NULL; > vq->counter = 0; > @@ -2021,13 +2030,13 @@ vu_queue_started(const VuDev *dev, const VuVirtq *vq) > static inline uint16_t > vring_avail_flags(VuVirtq *vq) > { > - return vq->vring.avail->flags; > + return lduw_le_p(&vq->vring.avail->flags); > } > > static inline uint16_t > vring_avail_idx(VuVirtq *vq) > { > - vq->shadow_avail_idx = vq->vring.avail->idx; > + vq->shadow_avail_idx = 
lduw_le_p(&vq->vring.avail->idx); > > return vq->shadow_avail_idx; > } > @@ -2035,7 +2044,7 @@ vring_avail_idx(VuVirtq *vq) > static inline uint16_t > vring_avail_ring(VuVirtq *vq, int i) > { > - return vq->vring.avail->ring[i]; > + return lduw_le_p(&vq->vring.avail->ring[i]); > } > > static inline uint16_t > @@ -2123,12 +2132,12 @@ virtqueue_read_next_desc(VuDev *dev, struct vring_desc *desc, > int i, unsigned int max, unsigned int *next) > { > /* If this descriptor says it doesn't chain, we're done. */ > - if (!(desc[i].flags & VRING_DESC_F_NEXT)) { > + if (!(lduw_le_p(&desc[i].flags) & VRING_DESC_F_NEXT)) { > return VIRTQUEUE_READ_DESC_DONE; > } > > /* Check they're not leading us off end of descriptors. */ > - *next = desc[i].next; > + *next = lduw_le_p(&desc[i].next); > /* Make sure compiler knows to grab that: we don't want it changing! */ > smp_wmb(); > > @@ -2171,8 +2180,8 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes, > } > desc = vq->vring.desc; > > - if (desc[i].flags & VRING_DESC_F_INDIRECT) { > - if (desc[i].len % sizeof(struct vring_desc)) { > + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_INDIRECT) { > + if (ldl_le_p(&desc[i].len) % sizeof(struct vring_desc)) { > vu_panic(dev, "Invalid size for indirect buffer table"); > goto err; > } > @@ -2185,8 +2194,8 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes, > > /* loop over the indirect descriptor table */ > indirect = 1; > - desc_addr = desc[i].addr; > - desc_len = desc[i].len; > + desc_addr = ldq_le_p(&desc[i].addr); > + desc_len = ldl_le_p(&desc[i].len); > max = desc_len / sizeof(struct vring_desc); > read_len = desc_len; > desc = vu_gpa_to_va(dev, &read_len, desc_addr); > @@ -2213,10 +2222,10 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes, > goto err; > } > > - if (desc[i].flags & VRING_DESC_F_WRITE) { > - in_total += desc[i].len; > + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_WRITE) { > + in_total += 
ldl_le_p(&desc[i].len); > } else { > - out_total += desc[i].len; > + out_total += ldl_le_p(&desc[i].len); > } > if (in_total >= max_in_bytes && out_total >= max_out_bytes) { > goto done; > @@ -2367,7 +2376,7 @@ vring_used_flags_set_bit(VuVirtq *vq, int mask) > > flags = (uint16_t *)((char*)vq->vring.used + > offsetof(struct vring_used, flags)); > - *flags |= mask; > + stw_le_p(flags, lduw_le_p(flags) | mask); > } > > static inline void > @@ -2377,7 +2386,7 @@ vring_used_flags_unset_bit(VuVirtq *vq, int mask) > > flags = (uint16_t *)((char*)vq->vring.used + > offsetof(struct vring_used, flags)); > - *flags &= ~mask; > + stw_le_p(flags, lduw_le_p(flags) & ~mask); > } > > static inline void > @@ -2387,7 +2396,7 @@ vring_set_avail_event(VuVirtq *vq, uint16_t val) > return; > } > > - *((uint16_t *) &vq->vring.used->ring[vq->vring.num]) = val; > + stw_le_p(&vq->vring.used->ring[vq->vring.num], val); > } > > void > @@ -2476,14 +2485,14 @@ vu_queue_map_desc(VuDev *dev, VuVirtq *vq, unsigned int idx, size_t sz) > struct vring_desc desc_buf[VIRTQUEUE_MAX_SIZE]; > int rc; > > - if (desc[i].flags & VRING_DESC_F_INDIRECT) { > - if (desc[i].len % sizeof(struct vring_desc)) { > + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_INDIRECT) { > + if (ldl_le_p(&desc[i].len) % sizeof(struct vring_desc)) { > vu_panic(dev, "Invalid size for indirect buffer table"); > } > > /* loop over the indirect descriptor table */ > - desc_addr = desc[i].addr; > - desc_len = desc[i].len; > + desc_addr = ldq_le_p(&desc[i].addr); > + desc_len = ldl_le_p(&desc[i].len); > max = desc_len / sizeof(struct vring_desc); > read_len = desc_len; > desc = vu_gpa_to_va(dev, &read_len, desc_addr); > @@ -2505,10 +2514,10 @@ vu_queue_map_desc(VuDev *dev, VuVirtq *vq, unsigned int idx, size_t sz) > > /* Collect all the descriptors */ > do { > - if (desc[i].flags & VRING_DESC_F_WRITE) { > + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_WRITE) { > virtqueue_map_desc(dev, &in_num, iov + out_num, > VIRTQUEUE_MAX_SIZE - 
out_num, true, > - desc[i].addr, desc[i].len); > + ldq_le_p(&desc[i].addr), ldl_le_p(&desc[i].len)); > } else { > if (in_num) { > vu_panic(dev, "Incorrect order for descriptors"); > @@ -2516,7 +2525,7 @@ vu_queue_map_desc(VuDev *dev, VuVirtq *vq, unsigned int idx, size_t sz) > } > virtqueue_map_desc(dev, &out_num, iov, > VIRTQUEUE_MAX_SIZE, false, > - desc[i].addr, desc[i].len); > + ldq_le_p(&desc[i].addr), ldl_le_p(&desc[i].len)); > } > > /* If we've got too many, that implies a descriptor loop. */ > @@ -2712,14 +2721,14 @@ vu_log_queue_fill(VuDev *dev, VuVirtq *vq, > max = vq->vring.num; > i = elem->index; > > - if (desc[i].flags & VRING_DESC_F_INDIRECT) { > - if (desc[i].len % sizeof(struct vring_desc)) { > + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_INDIRECT) { > + if (ldl_le_p(&desc[i].len) % sizeof(struct vring_desc)) { > vu_panic(dev, "Invalid size for indirect buffer table"); > } > > /* loop over the indirect descriptor table */ > - desc_addr = desc[i].addr; > - desc_len = desc[i].len; > + desc_addr = ldq_le_p(&desc[i].addr); > + desc_len = ldl_le_p(&desc[i].len); > max = desc_len / sizeof(struct vring_desc); > read_len = desc_len; > desc = vu_gpa_to_va(dev, &read_len, desc_addr); > @@ -2745,9 +2754,9 @@ vu_log_queue_fill(VuDev *dev, VuVirtq *vq, > return; > } > > - if (desc[i].flags & VRING_DESC_F_WRITE) { > - min = MIN(desc[i].len, len); > - vu_log_write(dev, desc[i].addr, min); > + if (lduw_le_p(&desc[i].flags) & VRING_DESC_F_WRITE) { > + min = MIN(ldl_le_p(&desc[i].len), len); > + vu_log_write(dev, ldq_le_p(&desc[i].addr), min); > len -= min; > } > > @@ -2772,15 +2781,15 @@ vu_queue_fill(VuDev *dev, VuVirtq *vq, > > idx = (idx + vq->used_idx) % vq->vring.num; > > - uelem.id = elem->index; > - uelem.len = len; > + stl_le_p(&uelem.id, elem->index); > + stl_le_p(&uelem.len, len); > vring_used_write(dev, vq, &uelem, idx); > } > > static inline > void vring_used_idx_set(VuDev *dev, VuVirtq *vq, uint16_t val) > { > - vq->vring.used->idx = val; > + 
stw_le_p(&vq->vring.used->idx, val); > vu_log_write(dev, > vq->vring.log_guest_addr + offsetof(struct vring_used, idx), > sizeof(vq->vring.used->idx)); > -- > 2.25.4 ^ permalink raw reply [flat|nested] 12+ messages in thread
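As the quoted hunks show, every descriptor field read now goes through a little-endian accessor before the host-order logic runs. A minimal model of walking a descriptor chain this way — the struct and helper names here are hypothetical, but the `VRING_DESC_F_NEXT` value matches the virtio spec:

```c
#include <stdint.h>

#define VRING_DESC_F_NEXT 1  /* value per the virtio spec */

/* Hypothetical little-endian view of a descriptor's flags/next fields,
 * as they sit in guest memory under VIRTIO 1.0. */
struct desc_le {
    uint8_t flags[2];
    uint8_t next[2];
};

static uint16_t le16(const uint8_t *b)
{
    return (uint16_t)(b[0] | (b[1] << 8));
}

/* Count the descriptors in a chain starting at index i, reading
 * flags/next via LE loads -- the same discipline the patched
 * virtqueue_read_next_desc() follows. `max` bounds the walk so a
 * malformed ring cannot loop forever. */
static unsigned chain_length(const struct desc_le *desc, unsigned i,
                             unsigned max)
{
    unsigned n = 1;
    while ((le16(desc[i].flags) & VRING_DESC_F_NEXT) && n < max) {
        i = le16(desc[i].next);
        n++;
    }
    return n;
}
```

Reading `flags` with a native 16-bit load on s390x would test the wrong byte, which is how chains broke before the conversion was added.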
* Re: [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec 2020-08-02 5:13 ` Michael S. Tsirkin @ 2020-08-03 14:17 ` Marc Hartmayer 0 siblings, 0 replies; 12+ messages in thread From: Marc Hartmayer @ 2020-08-03 14:17 UTC (permalink / raw) To: Michael S. Tsirkin, Marc Hartmayer Cc: Daniel P. Berrangé, Cornelia Huck, qemu-devel, Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi, Marc-André Lureau On Sun, Aug 02, 2020 at 01:13 AM -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote: > On Thu, Jul 30, 2020 at 04:07:31PM +0200, Marc Hartmayer wrote: >> Since virtio existed even before it got standardized, the virtio >> standard defines the following types of virtio devices: >> >> + legacy device (pre-virtio 1.0) >> + non-legacy or VIRTIO 1.0 device >> + transitional device (which can act both as legacy and non-legacy) >> >> Virtio 1.0 defines the fields of the virtqueues as little endian, >> while legacy uses guest's native endian [1]. Currently libvhost-user >> does not handle virtio endianness at all, i.e. it works only if the >> native endianness matches with whatever is actually needed. That means >> things break spectacularly on big-endian targets. Let us handle virtio >> endianness for non-legacy as required by the virtio specification >> [1]. The fencing of legacy virtio devices is done in >> `vu_set_features_exec`. >> >> [1] https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-210003 >> >> Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com> > > > Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Thanks. […snip] -- Kind regards / Beste Grüße Marc Hartmayer IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Gregor Pillen Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen Registergericht: Amtsgericht Stuttgart, HRB 243294 ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec 2020-07-30 14:07 ` [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec Marc Hartmayer 2020-08-02 5:13 ` Michael S. Tsirkin @ 2020-08-03 9:26 ` Cornelia Huck 2020-08-21 8:50 ` Marc Hartmayer 1 sibling, 1 reply; 12+ messages in thread From: Cornelia Huck @ 2020-08-03 9:26 UTC (permalink / raw) To: Marc Hartmayer Cc: Daniel P. Berrangé, Michael S. Tsirkin, qemu-devel, Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi, Marc-André Lureau On Thu, 30 Jul 2020 16:07:31 +0200 Marc Hartmayer <mhartmay@linux.ibm.com> wrote: > Since virtio existed even before it got standardized, the virtio > standard defines the following types of virtio devices: > > + legacy device (pre-virtio 1.0) > + non-legacy or VIRTIO 1.0 device > + transitional device (which can act both as legacy and non-legacy) > > Virtio 1.0 defines the fields of the virtqueues as little endian, > while legacy uses guest's native endian [1]. Currently libvhost-user > does not handle virtio endianness at all, i.e. it works only if the > native endianness matches with whatever is actually needed. That means > things break spectacularly on big-endian targets. Let us handle virtio > endianness for non-legacy as required by the virtio specification > [1]. Maybe add "and fence legacy virtio, as there is no safe way to figure out the needed endianness conversions for all cases." > The fencing of legacy virtio devices is done in > `vu_set_features_exec`. Not that I disagree with fencing legacy virtio, but looking at some vhost-user* drivers, I'm not sure everything will work as desired for those (I might be missing something, though.) - vhost-user-blk lists VERSION_1 in the supported features, but vhost-user-scsi doesn't... is there some inheritance going on that I'm missing? 
- vhost-user-gpu-pci inherits from virtio-gpu-pci, so I guess it's fine - vhost-user-input should also always have been virtio-1 So, has anybody been using vhost-user-scsi and can confirm that it still works, or at least can be made to work? > > [1] https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-210003 > > Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com> > --- > contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------ > 1 file changed, 43 insertions(+), 34 deletions(-) The code change per se LGTM. ^ permalink raw reply [flat|nested] 12+ messages in thread
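The fencing being discussed hinges on a single feature bit: VIRTIO_F_VERSION_1 is bit 32 of the 64-bit feature set, and `vu_set_features_exec` now panics when it is absent. The check itself is just a bit test; this sketch uses hypothetical helper names, with only the bit number taken from the virtio spec:

```c
#include <stdint.h>
#include <stdbool.h>

#define VIRTIO_F_VERSION_1 32  /* bit number per the virtio spec */

/* Sketch of the vu_has_feature() logic: test one bit of the negotiated
 * 64-bit feature word. */
static bool has_feature(uint64_t features, unsigned int fbit)
{
    return (features & (1ULL << fbit)) != 0;
}

/* After the patch, a feature set that never offers VIRTIO_F_VERSION_1
 * (i.e. a legacy-only device) is rejected by the library. */
static bool accept_features(uint64_t features)
{
    return has_feature(features, VIRTIO_F_VERSION_1);
}
```

This is why the question above matters: any vhost-user backend whose device model never negotiates VERSION_1 would stop working once the fence is in place.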
* Re: [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec
  2020-08-03  9:26     ` Cornelia Huck
@ 2020-08-21  8:50       ` Marc Hartmayer
  0 siblings, 0 replies; 12+ messages in thread
From: Marc Hartmayer @ 2020-08-21  8:50 UTC (permalink / raw)
  To: Cornelia Huck, Marc Hartmayer
  Cc: Daniel P. Berrangé, Michael S. Tsirkin, qemu-devel,
	Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi,
	Marc-André Lureau

On Mon, Aug 03, 2020 at 11:26 AM +0200, Cornelia Huck <cohuck@redhat.com> wrote:
> On Thu, 30 Jul 2020 16:07:31 +0200
> Marc Hartmayer <mhartmay@linux.ibm.com> wrote:
>
>> Since virtio existed even before it got standardized, the virtio
>> standard defines the following types of virtio devices:
>>
>>  + legacy device (pre-virtio 1.0)
>>  + non-legacy or VIRTIO 1.0 device
>>  + transitional device (which can act both as legacy and non-legacy)
>>
>> Virtio 1.0 defines the fields of the virtqueues as little endian,
>> while legacy uses the guest's native endianness [1]. Currently
>> libvhost-user does not handle virtio endianness at all, i.e. it works
>> only if the native endianness matches whatever is actually needed.
>> That means things break spectacularly on big-endian targets. Let us
>> handle virtio endianness for non-legacy devices as required by the
>> virtio specification [1].
>
> Maybe add
>
> "and fence legacy virtio, as there is no safe way to figure out the
> needed endianness conversions for all cases."

Okay.

>
>> The fencing of legacy virtio devices is done in
>> `vu_set_features_exec`.
>
> Not that I disagree with fencing legacy virtio, but looking at some
> vhost-user* drivers, I'm not sure everything will work as desired for
> those (I might be missing something, though.)
>
> - vhost-user-blk lists VERSION_1 in the supported features, but
>   vhost-user-scsi doesn't... is there some inheritance going on that
>   I'm missing?
> - vhost-user-gpu-pci inherits from virtio-gpu-pci, so I guess it's fine
> - vhost-user-input should also always have been virtio-1
>
> So, has anybody been using vhost-user-scsi and can confirm that it
> still works, or at least can be made to work?

Unfortunately, I don’t have the required hardware :/ Can anybody please
verify this?

>
>>
>> [1] https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-210003
>>
>> Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
>> ---
>>  contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------
>>  1 file changed, 43 insertions(+), 34 deletions(-)
>
> The code change per se LGTM.

Thanks for the feedback!

>

-- 
Kind regards / Beste Grüße
   Marc Hartmayer

IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Gregor Pillen
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
* Re: [PATCH 0/2] Enable virtio-fs on s390x
  2020-07-30 14:07 [PATCH 0/2] Enable virtio-fs on s390x Marc Hartmayer
  2020-07-30 14:07 ` [PATCH 1/2] virtio: add vhost-user-fs-ccw device Marc Hartmayer
  2020-07-30 14:07 ` [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec Marc Hartmayer
@ 2020-08-02  5:13 ` Michael S. Tsirkin
  2020-08-04 11:29   ` Dr. David Alan Gilbert
  2020-08-27 12:19 ` Michael S. Tsirkin
  3 siblings, 1 reply; 12+ messages in thread
From: Michael S. Tsirkin @ 2020-08-02  5:13 UTC (permalink / raw)
  To: Marc Hartmayer
  Cc: Daniel P. Berrangé, Cornelia Huck, qemu-devel,
	Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi,
	Marc-André Lureau

On Thu, Jul 30, 2020 at 04:07:29PM +0200, Marc Hartmayer wrote:
> This patch series is about enabling virtio-fs on s390x. For that we need
>  + some shim code (first patch), and we need
>  + libvhost-user to deal with virtio endianness for non-legacy virtio
>    devices as mandated by the spec.
>
> How to use?
>
> For general instructions on how to use virtio-fs (on x86) please have a
> look at https://virtio-fs.gitlab.io/howto-qemu.html. Most of the
> instructions can also be applied on s390x.
>
> In short:
>
> 1. Install self-compiled QEMU with this patch series applied
> 2. Prepare host and guest kernel so they support virtio-fs
>
> Start virtiofsd on the host
>
> $ virtiofsd -f --socket-path=/tmp/vhostqemu -o source=/tmp/shared
>
> Now you can start QEMU in a separate shell on the host:
>
> $ qemu-system-s390x -machine type=s390-ccw-virtio,accel=kvm,memory-backend=mem \
>     -object memory-backend-file,id=mem,size=2G,mem-path=/dev/shm/virtiofs,share=on,prealloc=on,prealloc-threads=1 \
>     -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-ccw,queue-size=1024,chardev=char0,tag=myfs \
>     -drive if=virtio,file=disk.qcow2 \
>     -m 2G -smp 2 -nographic
>
> Log into the guest and mount it
>
> $ mount -t virtiofs myfs /mnt

Who's merging this? My tree?

> Changelog:
> RFC v2 -> v1:
> - patch 1:
>   + Added `force_revision_1 = true` (Conny)
>   + I didn't add the r-b from Stefan Hajnoczi as I've added the
>     changes suggested by Conny
> - squashed patches 2 and 3:
>   + removed assertion in performance critical code path (Stefan)
>   + dropped all dead code (Stefan)
>   + removed libvhost-access.h
>
> RFC v1 -> RFC v2:
> + rebased
> + drop patch "libvhost-user: print invalid address on vu_panic" as it's
>   not related to this series
> + drop patch "[RFC 4/4] HACK: Hard-code the libvhost-user.o-cflags for s390x"
> + patch "virtio: add vhost-user-fs-ccw device": replace
>   qdev_set_parent_bus and object_property_set_bool by qdev_realize
> + patch "libvhost-user: handle endianness as mandated by the spec":
>   Drop support for legacy virtio devices
> + add patch to fence legacy virtio devices
> *** BLURB HERE ***
>
> Halil Pasic (1):
>   virtio: add vhost-user-fs-ccw device
>
> Marc Hartmayer (1):
>   libvhost-user: handle endianness as mandated by the spec
>
>  contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------
>  hw/s390x/Makefile.objs                |  1 +
>  hw/s390x/vhost-user-fs-ccw.c          | 75 ++++++++++++++++++++++++++
>  3 files changed, 119 insertions(+), 34 deletions(-)
>  create mode 100644 hw/s390x/vhost-user-fs-ccw.c
>
> --
> 2.25.4
* Re: [PATCH 0/2] Enable virtio-fs on s390x
  2020-08-02  5:13 ` [PATCH 0/2] Enable virtio-fs on s390x Michael S. Tsirkin
@ 2020-08-04 11:29   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 12+ messages in thread
From: Dr. David Alan Gilbert @ 2020-08-04 11:29 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Daniel P. Berrangé, Cornelia Huck, qemu-devel, Halil Pasic,
	Marc Hartmayer, Stefan Hajnoczi, Marc-André Lureau

* Michael S. Tsirkin (mst@redhat.com) wrote:
> On Thu, Jul 30, 2020 at 04:07:29PM +0200, Marc Hartmayer wrote:
> > This patch series is about enabling virtio-fs on s390x. For that we need
> >  + some shim code (first patch), and we need
> >  + libvhost-user to deal with virtio endianness for non-legacy virtio
> >    devices as mandated by the spec.
> >
> > How to use?
> >
> > For general instructions on how to use virtio-fs (on x86) please have a
> > look at https://virtio-fs.gitlab.io/howto-qemu.html. Most of the
> > instructions can also be applied on s390x.
> >
> > In short:
> >
> > 1. Install self-compiled QEMU with this patch series applied
> > 2. Prepare host and guest kernel so they support virtio-fs
> >
> > Start virtiofsd on the host
> >
> > $ virtiofsd -f --socket-path=/tmp/vhostqemu -o source=/tmp/shared
> >
> > Now you can start QEMU in a separate shell on the host:
> >
> > $ qemu-system-s390x -machine type=s390-ccw-virtio,accel=kvm,memory-backend=mem \
> >     -object memory-backend-file,id=mem,size=2G,mem-path=/dev/shm/virtiofs,share=on,prealloc=on,prealloc-threads=1 \
> >     -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-ccw,queue-size=1024,chardev=char0,tag=myfs \
> >     -drive if=virtio,file=disk.qcow2 \
> >     -m 2G -smp 2 -nographic
> >
> > Log into the guest and mount it
> >
> > $ mount -t virtiofs myfs /mnt
>
> Who's merging this? My tree?

I think so; it seems either generic virtio or s390 more than actually
virtiofs specific in most of it.

Dave

> > Changelog:
> > RFC v2 -> v1:
> > - patch 1:
> >   + Added `force_revision_1 = true` (Conny)
> >   + I didn't add the r-b from Stefan Hajnoczi as I've added the
> >     changes suggested by Conny
> > - squashed patches 2 and 3:
> >   + removed assertion in performance critical code path (Stefan)
> >   + dropped all dead code (Stefan)
> >   + removed libvhost-access.h
> >
> > RFC v1 -> RFC v2:
> > + rebased
> > + drop patch "libvhost-user: print invalid address on vu_panic" as it's
> >   not related to this series
> > + drop patch "[RFC 4/4] HACK: Hard-code the libvhost-user.o-cflags for s390x"
> > + patch "virtio: add vhost-user-fs-ccw device": replace
> >   qdev_set_parent_bus and object_property_set_bool by qdev_realize
> > + patch "libvhost-user: handle endianness as mandated by the spec":
> >   Drop support for legacy virtio devices
> > + add patch to fence legacy virtio devices
> > *** BLURB HERE ***
> >
> > Halil Pasic (1):
> >   virtio: add vhost-user-fs-ccw device
> >
> > Marc Hartmayer (1):
> >   libvhost-user: handle endianness as mandated by the spec
> >
> >  contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------
> >  hw/s390x/Makefile.objs                |  1 +
> >  hw/s390x/vhost-user-fs-ccw.c          | 75 ++++++++++++++++++++++++++
> >  3 files changed, 119 insertions(+), 34 deletions(-)
> >  create mode 100644 hw/s390x/vhost-user-fs-ccw.c
> >
> > --
> > 2.25.4
>
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [PATCH 0/2] Enable virtio-fs on s390x
  2020-07-30 14:07 [PATCH 0/2] Enable virtio-fs on s390x Marc Hartmayer
                   ` (2 preceding siblings ...)
  2020-08-02  5:13 ` [PATCH 0/2] Enable virtio-fs on s390x Michael S. Tsirkin
@ 2020-08-27 12:19 ` Michael S. Tsirkin
  2020-08-27 12:26   ` Cornelia Huck
  3 siblings, 1 reply; 12+ messages in thread
From: Michael S. Tsirkin @ 2020-08-27 12:19 UTC (permalink / raw)
  To: Marc Hartmayer
  Cc: Daniel P. Berrangé, Cornelia Huck, qemu-devel,
	Dr. David Alan Gilbert, Halil Pasic, Stefan Hajnoczi,
	Marc-André Lureau

On Thu, Jul 30, 2020 at 04:07:29PM +0200, Marc Hartmayer wrote:
> This patch series is about enabling virtio-fs on s390x. For that we need
>  + some shim code (first patch), and we need
>  + libvhost-user to deal with virtio endianness for non-legacy virtio
>    devices as mandated by the spec.

Please rebase, address Cornelia's minor comments and repost.
Thanks!

> How to use?
>
> For general instructions on how to use virtio-fs (on x86) please have a
> look at https://virtio-fs.gitlab.io/howto-qemu.html. Most of the
> instructions can also be applied on s390x.
>
> In short:
>
> 1. Install self-compiled QEMU with this patch series applied
> 2. Prepare host and guest kernel so they support virtio-fs
>
> Start virtiofsd on the host
>
> $ virtiofsd -f --socket-path=/tmp/vhostqemu -o source=/tmp/shared
>
> Now you can start QEMU in a separate shell on the host:
>
> $ qemu-system-s390x -machine type=s390-ccw-virtio,accel=kvm,memory-backend=mem \
>     -object memory-backend-file,id=mem,size=2G,mem-path=/dev/shm/virtiofs,share=on,prealloc=on,prealloc-threads=1 \
>     -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-ccw,queue-size=1024,chardev=char0,tag=myfs \
>     -drive if=virtio,file=disk.qcow2 \
>     -m 2G -smp 2 -nographic
>
> Log into the guest and mount it
>
> $ mount -t virtiofs myfs /mnt
>
> Changelog:
> RFC v2 -> v1:
> - patch 1:
>   + Added `force_revision_1 = true` (Conny)
>   + I didn't add the r-b from Stefan Hajnoczi as I've added the
>     changes suggested by Conny
> - squashed patches 2 and 3:
>   + removed assertion in performance critical code path (Stefan)
>   + dropped all dead code (Stefan)
>   + removed libvhost-access.h
>
> RFC v1 -> RFC v2:
> + rebased
> + drop patch "libvhost-user: print invalid address on vu_panic" as it's
>   not related to this series
> + drop patch "[RFC 4/4] HACK: Hard-code the libvhost-user.o-cflags for s390x"
> + patch "virtio: add vhost-user-fs-ccw device": replace
>   qdev_set_parent_bus and object_property_set_bool by qdev_realize
> + patch "libvhost-user: handle endianness as mandated by the spec":
>   Drop support for legacy virtio devices
> + add patch to fence legacy virtio devices
> *** BLURB HERE ***
>
> Halil Pasic (1):
>   virtio: add vhost-user-fs-ccw device
>
> Marc Hartmayer (1):
>   libvhost-user: handle endianness as mandated by the spec
>
>  contrib/libvhost-user/libvhost-user.c | 77 +++++++++++++++------------
>  hw/s390x/Makefile.objs                |  1 +
>  hw/s390x/vhost-user-fs-ccw.c          | 75 ++++++++++++++++++++++++++
>  3 files changed, 119 insertions(+), 34 deletions(-)
>  create mode 100644 hw/s390x/vhost-user-fs-ccw.c
>
> --
> 2.25.4
* Re: [PATCH 0/2] Enable virtio-fs on s390x
  2020-08-27 12:19 ` Michael S. Tsirkin
@ 2020-08-27 12:26   ` Cornelia Huck
  0 siblings, 0 replies; 12+ messages in thread
From: Cornelia Huck @ 2020-08-27 12:26 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Daniel P. Berrangé, qemu-devel, Dr. David Alan Gilbert,
	Halil Pasic, Marc Hartmayer, Stefan Hajnoczi,
	Marc-André Lureau

On Thu, 27 Aug 2020 08:19:44 -0400
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Jul 30, 2020 at 04:07:29PM +0200, Marc Hartmayer wrote:
> > This patch series is about enabling virtio-fs on s390x. For that we need
> >  + some shim code (first patch), and we need
> >  + libvhost-user to deal with virtio endianness for non-legacy virtio
> >    devices as mandated by the spec.
>
> Please rebase, address Cornelia's minor comments and repost.

I think we're still waiting for someone to confirm the status of
vhost-user-scsi?
end of thread, other threads:[~2020-08-27 12:28 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-30 14:07 [PATCH 0/2] Enable virtio-fs on s390x Marc Hartmayer
2020-07-30 14:07 ` [PATCH 1/2] virtio: add vhost-user-fs-ccw device Marc Hartmayer
2020-07-30 15:19   ` Cornelia Huck
2020-07-30 14:07 ` [PATCH 2/2] libvhost-user: handle endianness as mandated by the spec Marc Hartmayer
2020-08-02  5:13   ` Michael S. Tsirkin
2020-08-03 14:17     ` Marc Hartmayer
2020-08-03  9:26   ` Cornelia Huck
2020-08-21  8:50     ` Marc Hartmayer
2020-08-02  5:13 ` [PATCH 0/2] Enable virtio-fs on s390x Michael S. Tsirkin
2020-08-04 11:29   ` Dr. David Alan Gilbert
2020-08-27 12:19 ` Michael S. Tsirkin
2020-08-27 12:26   ` Cornelia Huck