* [PATCH 0/2] vhost-user: Support vhost_dev_start @ 2022-06-29 2:25 Yajun Wu 2022-06-29 2:25 ` [PATCH 1/2] vhost: Change the sequence of device start Yajun Wu ` (4 more replies) 0 siblings, 5 replies; 18+ messages in thread From: Yajun Wu @ 2022-06-29 2:25 UTC (permalink / raw) To: qemu-devel, mst, alex.bennee, yajunw The motivation for adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today, VQ configuration is issued one by one. For virtio-net with multi-queue support, the backend needs to update RSS (Receive Side Scaling) every time an rx queue is enabled, and updating RSS is time-consuming (typically around 7 ms). Implement the status bits and messages already defined in the vhost-user specification [1]: (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send the VHOST_USER_SET_STATUS message with VIRTIO_CONFIG_S_DRIVER_OK on device start, and with 0 (reset) on device stop. On reception of DRIVER_OK, the backend can apply the needed settings once (instead of incrementally) and can also enable queues in parallel. This improves QEMU's live migration downtime with a vhost-user backend implementation by a large margin; with 64 VQs it drops from 800 msec to 250 msec. Another change is to move the device start routines after all the necessary device and VQ configuration has finished, further aligning with the virtio specification's "device initialization sequence". [1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction Yajun Wu (2): vhost: Change the sequence of device start vhost-user: Support vhost_dev_start hw/block/vhost-user-blk.c | 18 +++++++----- hw/net/vhost_net.c | 12 ++++---- hw/virtio/vhost-user.c | 58 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 75 insertions(+), 13 deletions(-) -- 2.30.2 ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 1/2] vhost: Change the sequence of device start 2022-06-29 2:25 [PATCH 0/2] vhost-user: Support vhost_dev_start Yajun Wu @ 2022-06-29 2:25 ` Yajun Wu 2022-06-29 2:25 ` [PATCH 2/2] vhost-user: Support vhost_dev_start Yajun Wu ` (3 subsequent siblings) 4 siblings, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-06-29 2:25 UTC (permalink / raw) To: qemu-devel, mst, alex.bennee, yajunw; +Cc: Parav Pandit This patch is part of adding vhost-user vhost_dev_start support. The motivation is to improve backend configuration speed and reduce live migration VM downtime. Moving the device start routines after finishing all the necessary device and VQ configuration, further aligning to the virtio specification for "device initialization sequence". Following patch will add vhost-user vhost_dev_start support. Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/block/vhost-user-blk.c | 18 +++++++++++------- hw/net/vhost_net.c | 12 ++++++------ 2 files changed, 17 insertions(+), 13 deletions(-) diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c index 9117222456..972ef46365 100644 --- a/hw/block/vhost-user-blk.c +++ b/hw/block/vhost-user-blk.c @@ -163,13 +163,6 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) goto err_guest_notifiers; } - ret = vhost_dev_start(&s->dev, vdev); - if (ret < 0) { - error_setg_errno(errp, -ret, "Error starting vhost"); - goto err_guest_notifiers; - } - s->started_vu = true; - /* guest_notifier_mask/pending not used yet, so just unmask * everything here. virtio-pci will do the right thing by * enabling/disabling irqfd. 
@@ -178,9 +171,20 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) vhost_virtqueue_mask(&s->dev, vdev, i, false); } + s->dev.vq_index_end = s->dev.nvqs; + ret = vhost_dev_start(&s->dev, vdev); + if (ret < 0) { + error_setg_errno(errp, -ret, "Error starting vhost"); + goto err_guest_notifiers; + } + s->started_vu = true; + return ret; err_guest_notifiers: + for (i = 0; i < s->dev.nvqs; i++) { + vhost_virtqueue_mask(&s->dev, vdev, i, true); + } k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false); err_host_notifiers: vhost_dev_disable_notifiers(&s->dev, vdev); diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c index ccac5b7a64..6a8a6082a2 100644 --- a/hw/net/vhost_net.c +++ b/hw/net/vhost_net.c @@ -370,21 +370,21 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, } else { peer = qemu_get_peer(ncs, n->max_queue_pairs); } - r = vhost_net_start_one(get_vhost_net(peer), dev); - - if (r < 0) { - goto err_start; - } if (peer->vring_enable) { /* restore vring enable state */ r = vhost_set_vring_enable(peer, peer->vring_enable); if (r < 0) { - vhost_net_stop_one(get_vhost_net(peer), dev); goto err_start; } } + + r = vhost_net_start_one(get_vhost_net(peer), dev); + if (r < 0) { + vhost_net_stop_one(get_vhost_net(peer), dev); + goto err_start; + } } return 0; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 2/2] vhost-user: Support vhost_dev_start 2022-06-29 2:25 [PATCH 0/2] vhost-user: Support vhost_dev_start Yajun Wu 2022-06-29 2:25 ` [PATCH 1/2] vhost: Change the sequence of device start Yajun Wu @ 2022-06-29 2:25 ` Yajun Wu 2022-07-12 10:54 ` [PATCH v2 0/2] " Yajun Wu ` (2 subsequent siblings) 4 siblings, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-06-29 2:25 UTC (permalink / raw) To: qemu-devel, mst, alex.bennee, yajunw; +Cc: Parav Pandit The motivation for adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today, VQ configuration is issued one by one. For virtio-net with multi-queue support, the backend needs to update RSS (Receive Side Scaling) every time an rx queue is enabled, and updating RSS is time-consuming (typically around 7 ms). Implement the status bits and messages already defined in the vhost-user specification [1]: (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send the VHOST_USER_SET_STATUS message with VIRTIO_CONFIG_S_DRIVER_OK on device start, and with 0 (reset) on device stop. On reception of DRIVER_OK, the backend can apply the needed settings once (instead of incrementally) and can also enable queues in parallel. This improves QEMU's live migration downtime with a vhost-user backend implementation by a large margin; with 64 VQs it drops from 800 msec to 250 msec. 
[1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/virtio/vhost-user.c | 58 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 4b9be26e84..9f75c51dc2 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -81,6 +81,7 @@ enum VhostUserProtocolFeature { VHOST_USER_PROTOCOL_F_RESET_DEVICE = 13, /* Feature 14 reserved for VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS. */ VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15, + VHOST_USER_PROTOCOL_F_STATUS = 16, VHOST_USER_PROTOCOL_F_MAX }; @@ -126,6 +127,8 @@ typedef enum VhostUserRequest { VHOST_USER_GET_MAX_MEM_SLOTS = 36, VHOST_USER_ADD_MEM_REG = 37, VHOST_USER_REM_MEM_REG = 38, + VHOST_USER_SET_STATUS = 39, + VHOST_USER_GET_STATUS = 40, VHOST_USER_MAX } VhostUserRequest; @@ -1446,6 +1449,39 @@ static int vhost_user_set_u64(struct vhost_dev *dev, int request, uint64_t u64, return 0; } +static int vhost_user_set_status(struct vhost_dev *dev, uint8_t status) +{ + return vhost_user_set_u64(dev, VHOST_USER_SET_STATUS, status, false); +} + +static int vhost_user_get_status(struct vhost_dev *dev, uint8_t *status) +{ + uint64_t value; + int ret; + + ret = vhost_user_get_u64(dev, VHOST_USER_GET_STATUS, &value); + if (ret < 0) { + return ret; + } + *status = value; + + return 0; +} + +static int vhost_user_add_status(struct vhost_dev *dev, uint8_t status) +{ + uint8_t s; + int ret; + + ret = vhost_user_get_status(dev, &s); + if (ret < 0) { + return ret; + } + s |= status; + + return vhost_user_set_status(dev, s); +} + static int vhost_user_set_features(struct vhost_dev *dev, uint64_t features) { @@ -2602,6 +2638,27 @@ void vhost_user_cleanup(VhostUserState *user) user->chr = NULL; } +static int vhost_user_dev_start(struct vhost_dev *dev, bool started) +{ + if (!virtio_has_feature(dev->protocol_features, + 
VHOST_USER_PROTOCOL_F_STATUS)) { + return 0; + } + + /* Set device status only for last queue pair */ + if (dev->vq_index + dev->nvqs != dev->vq_index_end) { + return 0; + } + + if (started) { + return vhost_user_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE | + VIRTIO_CONFIG_S_DRIVER | + VIRTIO_CONFIG_S_DRIVER_OK); + } else { + return vhost_user_set_status(dev, 0); + } +} + const VhostOps user_ops = { .backend_type = VHOST_BACKEND_TYPE_USER, .vhost_backend_init = vhost_user_backend_init, @@ -2635,4 +2692,5 @@ const VhostOps user_ops = { .vhost_backend_mem_section_filter = vhost_user_mem_section_filter, .vhost_get_inflight_fd = vhost_user_get_inflight_fd, .vhost_set_inflight_fd = vhost_user_set_inflight_fd, + .vhost_dev_start = vhost_user_dev_start, }; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v2 0/2] vhost-user: Support vhost_dev_start 2022-06-29 2:25 [PATCH 0/2] vhost-user: Support vhost_dev_start Yajun Wu 2022-06-29 2:25 ` [PATCH 1/2] vhost: Change the sequence of device start Yajun Wu 2022-06-29 2:25 ` [PATCH 2/2] vhost-user: Support vhost_dev_start Yajun Wu @ 2022-07-12 10:54 ` Yajun Wu 2022-07-12 10:55 ` [PATCH v2 1/2] vhost: Change the sequence of device start Yajun Wu ` (2 more replies) 2022-10-17 6:44 ` [PATCH v3 " Yajun Wu 2022-11-07 4:08 ` [PATCH v4 " Yajun Wu 4 siblings, 3 replies; 18+ messages in thread From: Yajun Wu @ 2022-07-12 10:54 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav The motivation for adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today, VQ configuration is issued one by one. For virtio-net with multi-queue support, the backend needs to update RSS (Receive Side Scaling) every time an rx queue is enabled, and updating RSS is time-consuming (typically around 7 ms). Implement the status bits and messages already defined in the vhost-user specification [1]: (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send the VHOST_USER_SET_STATUS message with VIRTIO_CONFIG_S_DRIVER_OK on device start, and with 0 (reset) on device stop. On reception of DRIVER_OK, the backend can apply the needed settings once (instead of incrementally) and can also enable queues in parallel. This improves QEMU's live migration downtime with a vhost-user backend implementation by a large margin; with 64 VQs it drops from 800 msec to 250 msec. Another change is to move the device start routines after all the necessary device and VQ configuration has finished, further aligning with the virtio specification's "device initialization sequence". 
[1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction v2: - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK - avoid adding status bits already set Yajun Wu (2): vhost: Change the sequence of device start vhost-user: Support vhost_dev_start hw/block/vhost-user-blk.c | 18 ++++++---- hw/net/vhost_net.c | 12 +++---- hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- 3 files changed, 90 insertions(+), 14 deletions(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v2 1/2] vhost: Change the sequence of device start 2022-07-12 10:54 ` [PATCH v2 0/2] " Yajun Wu @ 2022-07-12 10:55 ` Yajun Wu 2022-07-12 10:55 ` [PATCH v2 2/2] vhost-user: Support vhost_dev_start Yajun Wu 2022-07-26 14:56 ` [PATCH v2 0/2] " Michael S. Tsirkin 2 siblings, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-07-12 10:55 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav This patch is part of adding vhost-user vhost_dev_start support. The motivation is to improve backend configuration speed and reduce live migration VM downtime. Moving the device start routines after finishing all the necessary device and VQ configuration, further aligning to the virtio specification for "device initialization sequence". Following patch will add vhost-user vhost_dev_start support. Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/block/vhost-user-blk.c | 18 +++++++++++------- hw/net/vhost_net.c | 12 ++++++------ 2 files changed, 17 insertions(+), 13 deletions(-) diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c index 9117222456..972ef46365 100644 --- a/hw/block/vhost-user-blk.c +++ b/hw/block/vhost-user-blk.c @@ -163,13 +163,6 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) goto err_guest_notifiers; } - ret = vhost_dev_start(&s->dev, vdev); - if (ret < 0) { - error_setg_errno(errp, -ret, "Error starting vhost"); - goto err_guest_notifiers; - } - s->started_vu = true; - /* guest_notifier_mask/pending not used yet, so just unmask * everything here. virtio-pci will do the right thing by * enabling/disabling irqfd. 
@@ -178,9 +171,20 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) vhost_virtqueue_mask(&s->dev, vdev, i, false); } + s->dev.vq_index_end = s->dev.nvqs; + ret = vhost_dev_start(&s->dev, vdev); + if (ret < 0) { + error_setg_errno(errp, -ret, "Error starting vhost"); + goto err_guest_notifiers; + } + s->started_vu = true; + return ret; err_guest_notifiers: + for (i = 0; i < s->dev.nvqs; i++) { + vhost_virtqueue_mask(&s->dev, vdev, i, true); + } k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false); err_host_notifiers: vhost_dev_disable_notifiers(&s->dev, vdev); diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c index ccac5b7a64..6a8a6082a2 100644 --- a/hw/net/vhost_net.c +++ b/hw/net/vhost_net.c @@ -370,21 +370,21 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, } else { peer = qemu_get_peer(ncs, n->max_queue_pairs); } - r = vhost_net_start_one(get_vhost_net(peer), dev); - - if (r < 0) { - goto err_start; - } if (peer->vring_enable) { /* restore vring enable state */ r = vhost_set_vring_enable(peer, peer->vring_enable); if (r < 0) { - vhost_net_stop_one(get_vhost_net(peer), dev); goto err_start; } } + + r = vhost_net_start_one(get_vhost_net(peer), dev); + if (r < 0) { + vhost_net_stop_one(get_vhost_net(peer), dev); + goto err_start; + } } return 0; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v2 2/2] vhost-user: Support vhost_dev_start 2022-07-12 10:54 ` [PATCH v2 0/2] " Yajun Wu 2022-07-12 10:55 ` [PATCH v2 1/2] vhost: Change the sequence of device start Yajun Wu @ 2022-07-12 10:55 ` Yajun Wu 2022-07-26 14:56 ` [PATCH v2 0/2] " Michael S. Tsirkin 2 siblings, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-07-12 10:55 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav The motivation for adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today, VQ configuration is issued one by one. For virtio-net with multi-queue support, the backend needs to update RSS (Receive Side Scaling) every time an rx queue is enabled, and updating RSS is time-consuming (typically around 7 ms). Implement the status bits and messages already defined in the vhost-user specification [1]: (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send the VHOST_USER_SET_STATUS message with VIRTIO_CONFIG_S_DRIVER_OK on device start, and with 0 (reset) on device stop. On reception of DRIVER_OK, the backend can apply the needed settings once (instead of incrementally) and can also enable queues in parallel. This improves QEMU's live migration downtime with a vhost-user backend implementation by a large margin; with 64 VQs it drops from 800 msec to 250 msec. [1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/virtio/vhost-user.c | 74 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 73 insertions(+), 1 deletion(-) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 75b8df21a4..e6ad4c05b8 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -81,6 +81,7 @@ enum VhostUserProtocolFeature { VHOST_USER_PROTOCOL_F_RESET_DEVICE = 13, /* Feature 14 reserved for VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS. 
*/ VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15, + VHOST_USER_PROTOCOL_F_STATUS = 16, VHOST_USER_PROTOCOL_F_MAX }; @@ -126,6 +127,8 @@ typedef enum VhostUserRequest { VHOST_USER_GET_MAX_MEM_SLOTS = 36, VHOST_USER_ADD_MEM_REG = 37, VHOST_USER_REM_MEM_REG = 38, + VHOST_USER_SET_STATUS = 39, + VHOST_USER_GET_STATUS = 40, VHOST_USER_MAX } VhostUserRequest; @@ -1451,6 +1454,43 @@ static int vhost_user_set_u64(struct vhost_dev *dev, int request, uint64_t u64, return 0; } +static int vhost_user_set_status(struct vhost_dev *dev, uint8_t status) +{ + return vhost_user_set_u64(dev, VHOST_USER_SET_STATUS, status, false); +} + +static int vhost_user_get_status(struct vhost_dev *dev, uint8_t *status) +{ + uint64_t value; + int ret; + + ret = vhost_user_get_u64(dev, VHOST_USER_GET_STATUS, &value); + if (ret < 0) { + return ret; + } + *status = value; + + return 0; +} + +static int vhost_user_add_status(struct vhost_dev *dev, uint8_t status) +{ + uint8_t s; + int ret; + + ret = vhost_user_get_status(dev, &s); + if (ret < 0) { + return ret; + } + + if ((s & status) == status) { + return 0; + } + s |= status; + + return vhost_user_set_status(dev, s); +} + static int vhost_user_set_features(struct vhost_dev *dev, uint64_t features) { @@ -1459,9 +1499,19 @@ static int vhost_user_set_features(struct vhost_dev *dev, * backend is actually logging changes */ bool log_enabled = features & (0x1ULL << VHOST_F_LOG_ALL); + int ret; - return vhost_user_set_u64(dev, VHOST_USER_SET_FEATURES, features, + ret = vhost_user_set_u64(dev, VHOST_USER_SET_FEATURES, features, log_enabled); + + if (virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_STATUS)) { + if (!ret) { + return vhost_user_add_status(dev, VIRTIO_CONFIG_S_FEATURES_OK); + } + } + + return ret; } static int vhost_user_set_protocol_features(struct vhost_dev *dev, @@ -2607,6 +2657,27 @@ void vhost_user_cleanup(VhostUserState *user) user->chr = NULL; } +static int vhost_user_dev_start(struct vhost_dev *dev, bool started) +{ 
+ if (!virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_STATUS)) { + return 0; + } + + /* Set device status only for last queue pair */ + if (dev->vq_index + dev->nvqs != dev->vq_index_end) { + return 0; + } + + if (started) { + return vhost_user_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE | + VIRTIO_CONFIG_S_DRIVER | + VIRTIO_CONFIG_S_DRIVER_OK); + } else { + return vhost_user_set_status(dev, 0); + } +} + const VhostOps user_ops = { .backend_type = VHOST_BACKEND_TYPE_USER, .vhost_backend_init = vhost_user_backend_init, @@ -2641,4 +2712,5 @@ const VhostOps user_ops = { .vhost_backend_mem_section_filter = vhost_user_mem_section_filter, .vhost_get_inflight_fd = vhost_user_get_inflight_fd, .vhost_set_inflight_fd = vhost_user_set_inflight_fd, + .vhost_dev_start = vhost_user_dev_start, }; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v2 0/2] vhost-user: Support vhost_dev_start 2022-07-12 10:54 ` [PATCH v2 0/2] " Yajun Wu 2022-07-12 10:55 ` [PATCH v2 1/2] vhost: Change the sequence of device start Yajun Wu 2022-07-12 10:55 ` [PATCH v2 2/2] vhost-user: Support vhost_dev_start Yajun Wu @ 2022-07-26 14:56 ` Michael S. Tsirkin 2022-09-05 4:58 ` Yajun Wu 2 siblings, 1 reply; 18+ messages in thread From: Michael S. Tsirkin @ 2022-07-26 14:56 UTC (permalink / raw) To: Yajun Wu; +Cc: qemu-devel, parav On Tue, Jul 12, 2022 at 01:54:59PM +0300, Yajun Wu wrote: > The motivation of adding vhost-user vhost_dev_start support is to > improve backend configuration speed and reduce live migration VM > downtime. > > Today VQ configuration is issued one by one. For virtio net with > multi-queue support, backend needs to update RSS (Receive side > scaling) on every rx queue enable. Updating RSS is time-consuming > (typical time like 7ms). > > Implement already defined vhost status and message in the vhost > specification [1]. > (a) VHOST_USER_PROTOCOL_F_STATUS > (b) VHOST_USER_SET_STATUS > (c) VHOST_USER_GET_STATUS > > Send message VHOST_USER_SET_STATUS with VIRTIO_CONFIG_S_DRIVER_OK for > device start and reset(0) for device stop. > > On reception of the DRIVER_OK message, backend can apply the needed setting > only once (instead of incremental) and also utilize parallelism on enabling > queues. > > This improves QEMU's live migration downtime with vhost user backend > implementation by great margin, specially for the large number of VQs of 64 > from 800 msec to 250 msec. > > Another change is to move the device start routines after finishing all the > necessary device and VQ configuration, further aligning to the virtio > specification for "device initialization sequence". I think it's best to merge this after the 7.1 release. I've tagged this but pls ping me after the release just to make sure it's not lost. Thanks! 
> [1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction > > v2: > - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK > - avoid adding status bits already set > > Yajun Wu (2): > vhost: Change the sequence of device start > vhost-user: Support vhost_dev_start > > hw/block/vhost-user-blk.c | 18 ++++++---- > hw/net/vhost_net.c | 12 +++---- > hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- > 3 files changed, 90 insertions(+), 14 deletions(-) > > -- > 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* RE: [PATCH v2 0/2] vhost-user: Support vhost_dev_start 2022-07-26 14:56 ` [PATCH v2 0/2] " Michael S. Tsirkin @ 2022-09-05 4:58 ` Yajun Wu 2022-10-12 10:10 ` Yajun Wu 0 siblings, 1 reply; 18+ messages in thread From: Yajun Wu @ 2022-09-05 4:58 UTC (permalink / raw) To: Michael S. Tsirkin; +Cc: qemu-devel, Parav Pandit Michael, 7.1 released, please merge. -----Original Message----- From: Michael S. Tsirkin <mst@redhat.com> Sent: Tuesday, July 26, 2022 10:56 PM To: Yajun Wu <yajunw@nvidia.com> Cc: qemu-devel@nongnu.org; Parav Pandit <parav@nvidia.com> Subject: Re: [PATCH v2 0/2] vhost-user: Support vhost_dev_start On Tue, Jul 12, 2022 at 01:54:59PM +0300, Yajun Wu wrote: > The motivation of adding vhost-user vhost_dev_start support is to > improve backend configuration speed and reduce live migration VM > downtime. > > Today VQ configuration is issued one by one. For virtio net with > multi-queue support, backend needs to update RSS (Receive side > scaling) on every rx queue enable. Updating RSS is time-consuming > (typical time like 7ms). > > Implement already defined vhost status and message in the vhost > specification [1]. > (a) VHOST_USER_PROTOCOL_F_STATUS > (b) VHOST_USER_SET_STATUS > (c) VHOST_USER_GET_STATUS > > Send message VHOST_USER_SET_STATUS with VIRTIO_CONFIG_S_DRIVER_OK for > device start and reset(0) for device stop. > > On reception of the DRIVER_OK message, backend can apply the needed > setting only once (instead of incremental) and also utilize > parallelism on enabling queues. > > This improves QEMU's live migration downtime with vhost user backend > implementation by great margin, specially for the large number of VQs > of 64 from 800 msec to 250 msec. > > Another change is to move the device start routines after finishing > all the necessary device and VQ configuration, further aligning to the > virtio specification for "device initialization sequence". I think it's best to merge this after the 7.1 release. I've tagged this but pls ping me after the release just to make sure it's not lost. Thanks! > [1] > https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction > > v2: > - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK > - avoid adding status bits already set > > Yajun Wu (2): > vhost: Change the sequence of device start > vhost-user: Support vhost_dev_start > > hw/block/vhost-user-blk.c | 18 ++++++---- > hw/net/vhost_net.c | 12 +++---- > hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- > 3 files changed, 90 insertions(+), 14 deletions(-) > > -- > 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* RE: [PATCH v2 0/2] vhost-user: Support vhost_dev_start 2022-09-05 4:58 ` Yajun Wu @ 2022-10-12 10:10 ` Yajun Wu 0 siblings, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-10-12 10:10 UTC (permalink / raw) To: Michael S. Tsirkin; +Cc: qemu-devel, Parav Pandit Michael, Don't forget to merge. Thanks! -----Original Message----- From: Yajun Wu Sent: Monday, September 5, 2022 12:58 PM To: Michael S. Tsirkin <mst@redhat.com> Cc: qemu-devel@nongnu.org; Parav Pandit <parav@nvidia.com> Subject: RE: [PATCH v2 0/2] vhost-user: Support vhost_dev_start Michael, 7.1 released, please merge. -----Original Message----- From: Michael S. Tsirkin <mst@redhat.com> Sent: Tuesday, July 26, 2022 10:56 PM To: Yajun Wu <yajunw@nvidia.com> Cc: qemu-devel@nongnu.org; Parav Pandit <parav@nvidia.com> Subject: Re: [PATCH v2 0/2] vhost-user: Support vhost_dev_start On Tue, Jul 12, 2022 at 01:54:59PM +0300, Yajun Wu wrote: > The motivation of adding vhost-user vhost_dev_start support is to > improve backend configuration speed and reduce live migration VM > downtime. > > Today VQ configuration is issued one by one. For virtio net with > multi-queue support, backend needs to update RSS (Receive side > scaling) on every rx queue enable. Updating RSS is time-consuming > (typical time like 7ms). > > Implement already defined vhost status and message in the vhost > specification [1]. > (a) VHOST_USER_PROTOCOL_F_STATUS > (b) VHOST_USER_SET_STATUS > (c) VHOST_USER_GET_STATUS > > Send message VHOST_USER_SET_STATUS with VIRTIO_CONFIG_S_DRIVER_OK for > device start and reset(0) for device stop. > > On reception of the DRIVER_OK message, backend can apply the needed > setting only once (instead of incremental) and also utilize > parallelism on enabling queues. > > This improves QEMU's live migration downtime with vhost user backend > implementation by great margin, specially for the large number of VQs > of 64 from 800 msec to 250 msec. > > Another change is to move the device start routines after finishing > all the necessary device and VQ configuration, further aligning to the > virtio specification for "device initialization sequence". I think it's best to merge this after the 7.1 release. I've tagged this but pls ping me after the release just to make sure it's not lost. Thanks! > [1] > https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction > > v2: > - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK > - avoid adding status bits already set > > Yajun Wu (2): > vhost: Change the sequence of device start > vhost-user: Support vhost_dev_start > > hw/block/vhost-user-blk.c | 18 ++++++---- > hw/net/vhost_net.c | 12 +++---- > hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- > 3 files changed, 90 insertions(+), 14 deletions(-) > > -- > 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v3 0/2] vhost-user: Support vhost_dev_start 2022-06-29 2:25 [PATCH 0/2] vhost-user: Support vhost_dev_start Yajun Wu ` (2 preceding siblings ...) 2022-07-12 10:54 ` [PATCH v2 0/2] " Yajun Wu @ 2022-10-17 6:44 ` Yajun Wu 2022-10-17 6:44 ` [PATCH v3 1/2] vhost: Change the sequence of device start Yajun Wu ` (2 more replies) 2022-11-07 4:08 ` [PATCH v4 " Yajun Wu 4 siblings, 3 replies; 18+ messages in thread From: Yajun Wu @ 2022-10-17 6:44 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav The motivation for adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today, VQ configuration is issued one by one. For virtio-net with multi-queue support, the backend needs to update RSS (Receive Side Scaling) every time an rx queue is enabled, and updating RSS is time-consuming (typically around 7 ms). Implement the status bits and messages already defined in the vhost-user specification [1]: (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send the VHOST_USER_SET_STATUS message with VIRTIO_CONFIG_S_DRIVER_OK on device start, and with 0 (reset) on device stop. On reception of DRIVER_OK, the backend can apply the needed settings once (instead of incrementally) and can also enable queues in parallel. This improves QEMU's live migration downtime with a vhost-user backend implementation by a large margin; with 64 VQs it drops from 800 msec to 250 msec. Another change is to move the device start routines after all the necessary device and VQ configuration has finished, further aligning with the virtio specification's "device initialization sequence". 
[1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction v3: - rebase v2: - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK - avoid adding status bits already set Yajun Wu (2): vhost: Change the sequence of device start vhost-user: Support vhost_dev_start hw/block/vhost-user-blk.c | 18 ++++++---- hw/net/vhost_net.c | 12 +++---- hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- 3 files changed, 90 insertions(+), 14 deletions(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v3 1/2] vhost: Change the sequence of device start 2022-10-17 6:44 ` [PATCH v3 " Yajun Wu @ 2022-10-17 6:44 ` Yajun Wu 2022-11-05 16:43 ` Michael S. Tsirkin 2022-10-17 6:44 ` [PATCH v3 2/2] vhost-user: Support vhost_dev_start Yajun Wu 2022-11-04 7:11 ` [PATCH v3 0/2] " Michael S. Tsirkin 2 siblings, 1 reply; 18+ messages in thread From: Yajun Wu @ 2022-10-17 6:44 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav This patch is part of adding vhost-user vhost_dev_start support. The motivation is to improve backend configuration speed and reduce live migration VM downtime. Moving the device start routines after finishing all the necessary device and VQ configuration, further aligning to the virtio specification for "device initialization sequence". Following patch will add vhost-user vhost_dev_start support. Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/block/vhost-user-blk.c | 18 +++++++++++------- hw/net/vhost_net.c | 12 ++++++------ 2 files changed, 17 insertions(+), 13 deletions(-) diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c index 84902dde17..f4deb8cd5d 100644 --- a/hw/block/vhost-user-blk.c +++ b/hw/block/vhost-user-blk.c @@ -164,13 +164,6 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) goto err_guest_notifiers; } - ret = vhost_dev_start(&s->dev, vdev); - if (ret < 0) { - error_setg_errno(errp, -ret, "Error starting vhost"); - goto err_guest_notifiers; - } - s->started_vu = true; - /* guest_notifier_mask/pending not used yet, so just unmask * everything here. virtio-pci will do the right thing by * enabling/disabling irqfd. 
@@ -179,9 +172,20 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) vhost_virtqueue_mask(&s->dev, vdev, i, false); } + s->dev.vq_index_end = s->dev.nvqs; + ret = vhost_dev_start(&s->dev, vdev); + if (ret < 0) { + error_setg_errno(errp, -ret, "Error starting vhost"); + goto err_guest_notifiers; + } + s->started_vu = true; + return ret; err_guest_notifiers: + for (i = 0; i < s->dev.nvqs; i++) { + vhost_virtqueue_mask(&s->dev, vdev, i, true); + } k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false); err_host_notifiers: vhost_dev_disable_notifiers(&s->dev, vdev); diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c index d28f8b974b..d6924f5e57 100644 --- a/hw/net/vhost_net.c +++ b/hw/net/vhost_net.c @@ -387,21 +387,21 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, } else { peer = qemu_get_peer(ncs, n->max_queue_pairs); } - r = vhost_net_start_one(get_vhost_net(peer), dev); - - if (r < 0) { - goto err_start; - } if (peer->vring_enable) { /* restore vring enable state */ r = vhost_set_vring_enable(peer, peer->vring_enable); if (r < 0) { - vhost_net_stop_one(get_vhost_net(peer), dev); goto err_start; } } + + r = vhost_net_start_one(get_vhost_net(peer), dev); + if (r < 0) { + vhost_net_stop_one(get_vhost_net(peer), dev); + goto err_start; + } } return 0; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v3 1/2] vhost: Change the sequence of device start 2022-10-17 6:44 ` [PATCH v3 1/2] vhost: Change the sequence of device start Yajun Wu @ 2022-11-05 16:43 ` Michael S. Tsirkin 0 siblings, 0 replies; 18+ messages in thread From: Michael S. Tsirkin @ 2022-11-05 16:43 UTC (permalink / raw) To: Yajun Wu; +Cc: qemu-devel, parav On Mon, Oct 17, 2022 at 02:44:51PM +0800, Yajun Wu wrote: > This patch is part of adding vhost-user vhost_dev_start support. The > motivation is to improve backend configuration speed and reduce live > migration VM downtime. > > Moving the device start routines after finishing all the necessary device > and VQ configuration, further aligning to the virtio specification for > "device initialization sequence". > > Following patch will add vhost-user vhost_dev_start support. > > Signed-off-by: Yajun Wu <yajunw@nvidia.com> > Acked-by: Parav Pandit <parav@nvidia.com> > > --- > hw/block/vhost-user-blk.c | 18 +++++++++++------- > hw/net/vhost_net.c | 12 ++++++------ > 2 files changed, 17 insertions(+), 13 deletions(-) > > diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c > index 84902dde17..f4deb8cd5d 100644 > --- a/hw/block/vhost-user-blk.c > +++ b/hw/block/vhost-user-blk.c > @@ -164,13 +164,6 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) > goto err_guest_notifiers; > } > > - ret = vhost_dev_start(&s->dev, vdev); > - if (ret < 0) { > - error_setg_errno(errp, -ret, "Error starting vhost"); > - goto err_guest_notifiers; > - } > - s->started_vu = true; > - > /* guest_notifier_mask/pending not used yet, so just unmask > * everything here. virtio-pci will do the right thing by > * enabling/disabling irqfd. 
> @@ -179,9 +172,20 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) > vhost_virtqueue_mask(&s->dev, vdev, i, false); > } > > + s->dev.vq_index_end = s->dev.nvqs; > + ret = vhost_dev_start(&s->dev, vdev); > + if (ret < 0) { > + error_setg_errno(errp, -ret, "Error starting vhost"); > + goto err_guest_notifiers; > + } > + s->started_vu = true; > + > return ret; > > err_guest_notifiers: > + for (i = 0; i < s->dev.nvqs; i++) { > + vhost_virtqueue_mask(&s->dev, vdev, i, true); > + } > k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false); > err_host_notifiers: > vhost_dev_disable_notifiers(&s->dev, vdev); > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c > index d28f8b974b..d6924f5e57 100644 > --- a/hw/net/vhost_net.c > +++ b/hw/net/vhost_net.c > @@ -387,21 +387,21 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, > } else { > peer = qemu_get_peer(ncs, n->max_queue_pairs); > } > - r = vhost_net_start_one(get_vhost_net(peer), dev); > - > - if (r < 0) { > - goto err_start; > - } > > if (peer->vring_enable) { > /* restore vring enable state */ > r = vhost_set_vring_enable(peer, peer->vring_enable); > > if (r < 0) { > - vhost_net_stop_one(get_vhost_net(peer), dev); > goto err_start; > } > } > + > + r = vhost_net_start_one(get_vhost_net(peer), dev); > + if (r < 0) { > + vhost_net_stop_one(get_vhost_net(peer), dev); Error handling broken here. Corrupts memory if triggered. I fixed it up when applying just because of the freeze but I won't do this generally. > + goto err_start; > + } > } > > return 0; > -- > 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v3 2/2] vhost-user: Support vhost_dev_start 2022-10-17 6:44 ` [PATCH v3 " Yajun Wu 2022-10-17 6:44 ` [PATCH v3 1/2] vhost: Change the sequence of device start Yajun Wu @ 2022-10-17 6:44 ` Yajun Wu 2022-11-04 7:11 ` [PATCH v3 0/2] " Michael S. Tsirkin 2 siblings, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-10-17 6:44 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav The motivation of adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today VQ configuration is issued one by one. For virtio net with multi-queue support, backend needs to update RSS (Receive side scaling) on every rx queue enable. Updating RSS is time-consuming (typical time like 7ms). Implement already defined vhost status and message in the vhost specification [1]. (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send message VHOST_USER_SET_STATUS with VIRTIO_CONFIG_S_DRIVER_OK for device start and reset(0) for device stop. On reception of the DRIVER_OK message, backend can apply the needed setting only once (instead of incremental) and also utilize parallelism on enabling queues. This improves QEMU's live migration downtime with vhost user backend implementation by great margin, specially for the large number of VQs of 64 from 800 msec to 250 msec. [1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/virtio/vhost-user.c | 74 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 73 insertions(+), 1 deletion(-) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 03415b6c95..bb5164b753 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -81,6 +81,7 @@ enum VhostUserProtocolFeature { VHOST_USER_PROTOCOL_F_RESET_DEVICE = 13, /* Feature 14 reserved for VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS. 
*/ VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15, + VHOST_USER_PROTOCOL_F_STATUS = 16, VHOST_USER_PROTOCOL_F_MAX }; @@ -126,6 +127,8 @@ typedef enum VhostUserRequest { VHOST_USER_GET_MAX_MEM_SLOTS = 36, VHOST_USER_ADD_MEM_REG = 37, VHOST_USER_REM_MEM_REG = 38, + VHOST_USER_SET_STATUS = 39, + VHOST_USER_GET_STATUS = 40, VHOST_USER_MAX } VhostUserRequest; @@ -1452,6 +1455,43 @@ static int vhost_user_set_u64(struct vhost_dev *dev, int request, uint64_t u64, return 0; } +static int vhost_user_set_status(struct vhost_dev *dev, uint8_t status) +{ + return vhost_user_set_u64(dev, VHOST_USER_SET_STATUS, status, false); +} + +static int vhost_user_get_status(struct vhost_dev *dev, uint8_t *status) +{ + uint64_t value; + int ret; + + ret = vhost_user_get_u64(dev, VHOST_USER_GET_STATUS, &value); + if (ret < 0) { + return ret; + } + *status = value; + + return 0; +} + +static int vhost_user_add_status(struct vhost_dev *dev, uint8_t status) +{ + uint8_t s; + int ret; + + ret = vhost_user_get_status(dev, &s); + if (ret < 0) { + return ret; + } + + if ((s & status) == status) { + return 0; + } + s |= status; + + return vhost_user_set_status(dev, s); +} + static int vhost_user_set_features(struct vhost_dev *dev, uint64_t features) { @@ -1460,6 +1500,7 @@ static int vhost_user_set_features(struct vhost_dev *dev, * backend is actually logging changes */ bool log_enabled = features & (0x1ULL << VHOST_F_LOG_ALL); + int ret; /* * We need to include any extra backend only feature bits that @@ -1467,9 +1508,18 @@ static int vhost_user_set_features(struct vhost_dev *dev, * VHOST_USER_F_PROTOCOL_FEATURES bit for enabling protocol * features. 
*/ - return vhost_user_set_u64(dev, VHOST_USER_SET_FEATURES, + ret = vhost_user_set_u64(dev, VHOST_USER_SET_FEATURES, features | dev->backend_features, log_enabled); + + if (virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_STATUS)) { + if (!ret) { + return vhost_user_add_status(dev, VIRTIO_CONFIG_S_FEATURES_OK); + } + } + + return ret; } static int vhost_user_set_protocol_features(struct vhost_dev *dev, @@ -2615,6 +2665,27 @@ void vhost_user_cleanup(VhostUserState *user) user->chr = NULL; } +static int vhost_user_dev_start(struct vhost_dev *dev, bool started) +{ + if (!virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_STATUS)) { + return 0; + } + + /* Set device status only for last queue pair */ + if (dev->vq_index + dev->nvqs != dev->vq_index_end) { + return 0; + } + + if (started) { + return vhost_user_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE | + VIRTIO_CONFIG_S_DRIVER | + VIRTIO_CONFIG_S_DRIVER_OK); + } else { + return vhost_user_set_status(dev, 0); + } +} + const VhostOps user_ops = { .backend_type = VHOST_BACKEND_TYPE_USER, .vhost_backend_init = vhost_user_backend_init, @@ -2649,4 +2720,5 @@ const VhostOps user_ops = { .vhost_backend_mem_section_filter = vhost_user_mem_section_filter, .vhost_get_inflight_fd = vhost_user_get_inflight_fd, .vhost_set_inflight_fd = vhost_user_set_inflight_fd, + .vhost_dev_start = vhost_user_dev_start, }; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v3 0/2] vhost-user: Support vhost_dev_start 2022-10-17 6:44 ` [PATCH v3 " Yajun Wu 2022-10-17 6:44 ` [PATCH v3 1/2] vhost: Change the sequence of device start Yajun Wu 2022-10-17 6:44 ` [PATCH v3 2/2] vhost-user: Support vhost_dev_start Yajun Wu @ 2022-11-04 7:11 ` Michael S. Tsirkin 2022-11-04 7:58 ` Michael S. Tsirkin 2 siblings, 1 reply; 18+ messages in thread From: Michael S. Tsirkin @ 2022-11-04 7:11 UTC (permalink / raw) To: Yajun Wu; +Cc: qemu-devel, parav On Mon, Oct 17, 2022 at 02:44:50PM +0800, Yajun Wu wrote: > The motivation of adding vhost-user vhost_dev_start support is to > improve backend configuration speed and reduce live migration VM > downtime. > > Today VQ configuration is issued one by one. For virtio net with > multi-queue support, backend needs to update RSS (Receive side > scaling) on every rx queue enable. Updating RSS is time-consuming > (typical time like 7ms). > > Implement already defined vhost status and message in the vhost > specification [1]. > (a) VHOST_USER_PROTOCOL_F_STATUS > (b) VHOST_USER_SET_STATUS > (c) VHOST_USER_GET_STATUS > > Send message VHOST_USER_SET_STATUS with VIRTIO_CONFIG_S_DRIVER_OK for > device start and reset(0) for device stop. > > On reception of the DRIVER_OK message, backend can apply the needed setting > only once (instead of incremental) and also utilize parallelism on enabling > queues. > > This improves QEMU's live migration downtime with vhost user backend > implementation by great margin, specially for the large number of VQs of 64 > from 800 msec to 250 msec. > > Another change is to move the device start routines after finishing all the > necessary device and VQ configuration, further aligning to the virtio > specification for "device initialization sequence". > > [1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction This patchset seems to trip up ubsan. 
https://gitlab.com/mitsirkin/qemu/-/pipelines/684763327/failures specifically: passes before this patchset: https://gitlab.com/mitsirkin/qemu/-/jobs/3269302594 fails with this patchset: https://gitlab.com/mitsirkin/qemu/-/pipelines/684763327/failures (there's one patch on top but it seems unrelated) Seems hard to debug; I only reproduced it under gitlab, though Stefan reports reproducing this locally. I'm thinking of dropping this patchset for now, deferring to next release - thoughts? > v3: > - rebase > > v2: > - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK > - avoid adding status bits already set > > Yajun Wu (2): > vhost: Change the sequence of device start > vhost-user: Support vhost_dev_start > > hw/block/vhost-user-blk.c | 18 ++++++---- > hw/net/vhost_net.c | 12 +++---- > hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- > 3 files changed, 90 insertions(+), 14 deletions(-) > > -- > 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v3 0/2] vhost-user: Support vhost_dev_start 2022-11-04 7:11 ` [PATCH v3 0/2] " Michael S. Tsirkin @ 2022-11-04 7:58 ` Michael S. Tsirkin 0 siblings, 0 replies; 18+ messages in thread From: Michael S. Tsirkin @ 2022-11-04 7:58 UTC (permalink / raw) To: Yajun Wu; +Cc: qemu-devel, parav On Fri, Nov 04, 2022 at 03:11:44AM -0400, Michael S. Tsirkin wrote: > On Mon, Oct 17, 2022 at 02:44:50PM +0800, Yajun Wu wrote: > > The motivation of adding vhost-user vhost_dev_start support is to > > improve backend configuration speed and reduce live migration VM > > downtime. > > > > Today VQ configuration is issued one by one. For virtio net with > > multi-queue support, backend needs to update RSS (Receive side > > scaling) on every rx queue enable. Updating RSS is time-consuming > > (typical time like 7ms). > > > > Implement already defined vhost status and message in the vhost > > specification [1]. > > (a) VHOST_USER_PROTOCOL_F_STATUS > > (b) VHOST_USER_SET_STATUS > > (c) VHOST_USER_GET_STATUS > > > > Send message VHOST_USER_SET_STATUS with VIRTIO_CONFIG_S_DRIVER_OK for > > device start and reset(0) for device stop. > > > > On reception of the DRIVER_OK message, backend can apply the needed setting > > only once (instead of incremental) and also utilize parallelism on enabling > > queues. > > > > This improves QEMU's live migration downtime with vhost user backend > > implementation by great margin, specially for the large number of VQs of 64 > > from 800 msec to 250 msec. > > > > Another change is to move the device start routines after finishing all the > > necessary device and VQ configuration, further aligning to the virtio > > specification for "device initialization sequence". > > > > [1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction > > > This patchset seems to trip up ubsan. 
> https://gitlab.com/mitsirkin/qemu/-/pipelines/684763327/failures > > > specifially: > > passes before this patchset: > https://gitlab.com/mitsirkin/qemu/-/jobs/3269302594 > > fails with this patchset: > https://gitlab.com/mitsirkin/qemu/-/pipelines/684763327/failures > > (there's one patch on top but it seems unrelated) > And this is with just this patchset: https://gitlab.com/mitsirkin/qemu/-/pipelines/685419082/failures in fact it's just the 1st patch: https://gitlab.com/mitsirkin/qemu/-/pipelines/685417959/failures log here: https://gitlab.com/mitsirkin/qemu/-/jobs/3273601845 emu-system-arm: Failed to write msg. Wrote -1 instead of 20. qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) qemu-system-arm: Failed to set msg fds. qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22) qemu-system-arm: -chardev socket,id=chr-reconnect,path=/tmp/vhost-test-U5G0U1/reconnect.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-U5G0U1/reconnect.sock,server=on qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) qemu-system-arm: Failed to set msg fds. qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22) qemu-system-arm: -chardev socket,id=chr-connect-fail,path=/tmp/vhost-test-CN7ZU1/connect-fail.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-CN7ZU1/connect-fail.sock,server=on qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: Failed to read msg header. Read 0 instead of 12. Original request 1. 
qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: vhost_backend_init failed: Protocol error qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: failed to init vhost_net for queue 0 qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-CN7ZU1/connect-fail.sock,server=on qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) qemu-system-arm: Failed to set msg fds. qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22) qemu-system-arm: -chardev socket,id=chr-flags-mismatch,path=/tmp/vhost-test-4KRYU1/flags-mismatch.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-4KRYU1/flags-mismatch.sock,server=on qemu-system-arm: Failed to write msg. Wrote -1 instead of 52. qemu-system-arm: vhost_set_mem_table failed: Invalid argument (22) qemu-system-arm: Failed to set msg fds. qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) UndefinedBehaviorSanitizer:DEADLYSIGNAL ==8621==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55c91aca0620 bp 0x000000000000 sp 0x7ffd150b8600 T8621) ==8621==The signal is caused by a READ memory access. ==8621==Hint: address points to the zero page. 
#0 0x55c91aca0620 in ldl_he_p /builds/mitsirkin/qemu/include/qemu/bswap.h:301:5 #1 0x55c91aca0620 in ldn_he_p /builds/mitsirkin/qemu/include/qemu/bswap.h:440:1 #2 0x55c91aca0620 in flatview_write_continue /builds/mitsirkin/qemu/build/../softmmu/physmem.c:2824:19 #3 0x55c91ac9da91 in flatview_write /builds/mitsirkin/qemu/build/../softmmu/physmem.c:2867:12 #4 0x55c91ac9da91 in address_space_write /builds/mitsirkin/qemu/build/../softmmu/physmem.c:2963:18 #5 0x55c91ac9e857 in address_space_unmap /builds/mitsirkin/qemu/build/../softmmu/physmem.c:3306:9 #6 0x55c91ac461ec in vhost_memory_unmap /builds/mitsirkin/qemu/build/../hw/virtio/vhost.c:342:9 #7 0x55c91ac461ec in vhost_virtqueue_stop /builds/mitsirkin/qemu/build/../hw/virtio/vhost.c:1242:5 #8 0x55c91ac46484 in vhost_dev_stop /builds/mitsirkin/qemu/build/../hw/virtio/vhost.c:1882:9 #9 0x55c91a664cc4 in vhost_net_stop_one /builds/mitsirkin/qemu/build/../hw/net/vhost_net.c:329:5 #10 0x55c91a6646a6 in vhost_net_start /builds/mitsirkin/qemu/build/../hw/net/vhost_net.c:402:13 #11 0x55c91abdfda6 in virtio_net_vhost_status /builds/mitsirkin/qemu/build/../hw/net/virtio-net.c:297:13 #12 0x55c91abdfda6 in virtio_net_set_status /builds/mitsirkin/qemu/build/../hw/net/virtio-net.c:378:5 #13 0x55c91ac322a9 in virtio_set_status /builds/mitsirkin/qemu/build/../hw/virtio/virtio.c:2442:9 #14 0x55c91a7f69a0 in virtio_mmio_write /builds/mitsirkin/qemu/build/../hw/virtio/virtio-mmio.c:428:9 #15 0x55c91ac882b6 in memory_region_write_accessor /builds/mitsirkin/qemu/build/../softmmu/memory.c:492:5 #16 0x55c91ac8809a in access_with_adjusted_size /builds/mitsirkin/qemu/build/../softmmu/memory.c:554:18 #17 0x55c91ac87e3d in memory_region_dispatch_write /builds/mitsirkin/qemu/build/../softmmu/memory.c #18 0x55c91aca0661 in flatview_write_continue /builds/mitsirkin/qemu/build/../softmmu/physmem.c:2825:23 #19 0x55c91ac9da91 in flatview_write /builds/mitsirkin/qemu/build/../softmmu/physmem.c:2867:12 #20 0x55c91ac9da91 in address_space_write 
/builds/mitsirkin/qemu/build/../softmmu/physmem.c:2963:18 #21 0x55c91aca4766 in qtest_process_command /builds/mitsirkin/qemu/build/../softmmu/qtest.c #22 0x55c91aca3bfd in qtest_process_inbuf /builds/mitsirkin/qemu/build/../softmmu/qtest.c:796:9 #23 0x55c91aedd672 in tcp_chr_read /builds/mitsirkin/qemu/build/../chardev/char-socket.c:508:13 #24 0x7f199db120ae in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x550ae) #25 0x55c91afcfd2c in glib_pollfds_poll /builds/mitsirkin/qemu/build/../util/main-loop.c:297:9 #26 0x55c91afcfd2c in os_host_main_loop_wait /builds/mitsirkin/qemu/build/../util/main-loop.c:320:5 #27 0x55c91afcfd2c in main_loop_wait /builds/mitsirkin/qemu/build/../util/main-loop.c:596:11 #28 0x55c91a826d36 in qemu_main_loop /builds/mitsirkin/qemu/build/../softmmu/runstate.c:739:9 #29 0x55c91a3df4e5 in qemu_default_main /builds/mitsirkin/qemu/build/../softmmu/main.c:37:14 #30 0x7f199b85aeaf in __libc_start_call_main (/lib64/libc.so.6+0x3feaf) #31 0x7f199b85af5f in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x3ff5f) #32 0x55c91a3b6084 in _start (/builds/mitsirkin/qemu/build/qemu-system-arm+0xc16084) UndefinedBehaviorSanitizer can not provide additional info. SUMMARY: UndefinedBehaviorSanitizer: SEGV /builds/mitsirkin/qemu/include/qemu/bswap.h:301:5 in ldl_he_p ==8621==ABORTING Broken pipe ../tests/qtest/libqtest.c:179: kill_qemu() tried to terminate QEMU process but encountered exit status 1 (expected 0) ** ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8612]) failed unexpectedly (test program exited with status code -6) > Seems hard to debug, only reproduced under gitlab though Stefan reports > reproducing this locally. > I'm thinking of dropping this patchset for now, deferring to next > release - thoughts? 
> > > > > v3: > > - rebase > > > > v2: > > - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK > > - avoid adding status bits already set > > > > Yajun Wu (2): > > vhost: Change the sequence of device start > > vhost-user: Support vhost_dev_start > > > > hw/block/vhost-user-blk.c | 18 ++++++---- > > hw/net/vhost_net.c | 12 +++---- > > hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- > > 3 files changed, 90 insertions(+), 14 deletions(-) > > > > -- > > 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v4 0/2] vhost-user: Support vhost_dev_start 2022-06-29 2:25 [PATCH 0/2] vhost-user: Support vhost_dev_start Yajun Wu ` (3 preceding siblings ...) 2022-10-17 6:44 ` [PATCH v3 " Yajun Wu @ 2022-11-07 4:08 ` Yajun Wu 2022-11-07 4:08 ` [PATCH v4 1/2] vhost: Change the sequence of device start Yajun Wu 2022-11-07 4:08 ` [PATCH v4 2/2] vhost-user: Support vhost_dev_start Yajun Wu 4 siblings, 2 replies; 18+ messages in thread From: Yajun Wu @ 2022-11-07 4:08 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav The motivation for adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today, VQ configuration is issued one by one. For virtio-net with multi-queue support, the backend needs to update RSS (Receive Side Scaling) on every RX queue enable. Updating RSS is time-consuming (typically around 7 ms). Implement the status and messages already defined in the vhost-user specification [1]: (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send the VHOST_USER_SET_STATUS message with VIRTIO_CONFIG_S_DRIVER_OK on device start and with 0 (reset) on device stop. On reception of the DRIVER_OK status, the backend can apply the needed settings only once (instead of incrementally) and can also enable queues in parallel. This improves QEMU's live migration downtime with a vhost-user backend implementation by a great margin; with 64 VQs it drops from 800 msec to 250 msec. Another change is to move the device start routines after all the necessary device and VQ configuration is finished, further aligning with the virtio specification's "device initialization sequence". 
[1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#introduction v4: - fix vhost_net_start_one fallback code v3: - rebase v2: - add setting status bit VIRTIO_CONFIG_S_FEATURES_OK - avoid adding status bits already set Yajun Wu (2): vhost: Change the sequence of device start vhost-user: Support vhost_dev_start hw/block/vhost-user-blk.c | 18 ++++++---- hw/net/vhost_net.c | 14 ++++---- hw/virtio/vhost-user.c | 74 ++++++++++++++++++++++++++++++++++++++- 3 files changed, 92 insertions(+), 14 deletions(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v4 1/2] vhost: Change the sequence of device start 2022-11-07 4:08 ` [PATCH v4 " Yajun Wu @ 2022-11-07 4:08 ` Yajun Wu 2022-11-07 4:08 ` [PATCH v4 2/2] vhost-user: Support vhost_dev_start Yajun Wu 1 sibling, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-11-07 4:08 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav This patch is part of adding vhost-user vhost_dev_start support. The motivation is to improve backend configuration speed and reduce live migration VM downtime. Moving the device start routines after finishing all the necessary device and VQ configuration, further aligning to the virtio specification for "device initialization sequence". Following patch will add vhost-user vhost_dev_start support. Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/block/vhost-user-blk.c | 18 +++++++++++------- hw/net/vhost_net.c | 14 ++++++++------ 2 files changed, 19 insertions(+), 13 deletions(-) diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c index 13bf5cc47a..28409c90f7 100644 --- a/hw/block/vhost-user-blk.c +++ b/hw/block/vhost-user-blk.c @@ -168,13 +168,6 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) goto err_guest_notifiers; } - ret = vhost_dev_start(&s->dev, vdev); - if (ret < 0) { - error_setg_errno(errp, -ret, "Error starting vhost"); - goto err_guest_notifiers; - } - s->started_vu = true; - /* guest_notifier_mask/pending not used yet, so just unmask * everything here. virtio-pci will do the right thing by * enabling/disabling irqfd. 
@@ -183,9 +176,20 @@ static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp) vhost_virtqueue_mask(&s->dev, vdev, i, false); } + s->dev.vq_index_end = s->dev.nvqs; + ret = vhost_dev_start(&s->dev, vdev); + if (ret < 0) { + error_setg_errno(errp, -ret, "Error starting vhost"); + goto err_guest_notifiers; + } + s->started_vu = true; + return ret; err_guest_notifiers: + for (i = 0; i < s->dev.nvqs; i++) { + vhost_virtqueue_mask(&s->dev, vdev, i, true); + } k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false); err_host_notifiers: vhost_dev_disable_notifiers(&s->dev, vdev); diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c index d28f8b974b..0fe71ed309 100644 --- a/hw/net/vhost_net.c +++ b/hw/net/vhost_net.c @@ -387,21 +387,23 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, } else { peer = qemu_get_peer(ncs, n->max_queue_pairs); } - r = vhost_net_start_one(get_vhost_net(peer), dev); - - if (r < 0) { - goto err_start; - } if (peer->vring_enable) { /* restore vring enable state */ r = vhost_set_vring_enable(peer, peer->vring_enable); if (r < 0) { - vhost_net_stop_one(get_vhost_net(peer), dev); goto err_start; } } + + r = vhost_net_start_one(get_vhost_net(peer), dev); + if (r < 0) { + if (peer->vring_enable) { + vhost_set_vring_enable(peer, false); + } + goto err_start; + } } return 0; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v4 2/2] vhost-user: Support vhost_dev_start 2022-11-07 4:08 ` [PATCH v4 " Yajun Wu 2022-11-07 4:08 ` [PATCH v4 1/2] vhost: Change the sequence of device start Yajun Wu @ 2022-11-07 4:08 ` Yajun Wu 1 sibling, 0 replies; 18+ messages in thread From: Yajun Wu @ 2022-11-07 4:08 UTC (permalink / raw) To: qemu-devel, mst, yajunw, parav The motivation of adding vhost-user vhost_dev_start support is to improve backend configuration speed and reduce live migration VM downtime. Today VQ configuration is issued one by one. For virtio net with multi-queue support, backend needs to update RSS (Receive side scaling) on every rx queue enable. Updating RSS is time-consuming (typical time like 7ms). Implement already defined vhost status and message in the vhost specification [1]. (a) VHOST_USER_PROTOCOL_F_STATUS (b) VHOST_USER_SET_STATUS (c) VHOST_USER_GET_STATUS Send message VHOST_USER_SET_STATUS with VIRTIO_CONFIG_S_DRIVER_OK for device start and reset(0) for device stop. On reception of the DRIVER_OK message, backend can apply the needed setting only once (instead of incremental) and also utilize parallelism on enabling queues. This improves QEMU's live migration downtime with vhost user backend implementation by great margin, specially for the large number of VQs of 64 from 800 msec to 250 msec. [1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html Signed-off-by: Yajun Wu <yajunw@nvidia.com> Acked-by: Parav Pandit <parav@nvidia.com> --- hw/virtio/vhost-user.c | 74 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 73 insertions(+), 1 deletion(-) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 03415b6c95..bb5164b753 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -81,6 +81,7 @@ enum VhostUserProtocolFeature { VHOST_USER_PROTOCOL_F_RESET_DEVICE = 13, /* Feature 14 reserved for VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS. 
*/ VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15, + VHOST_USER_PROTOCOL_F_STATUS = 16, VHOST_USER_PROTOCOL_F_MAX }; @@ -126,6 +127,8 @@ typedef enum VhostUserRequest { VHOST_USER_GET_MAX_MEM_SLOTS = 36, VHOST_USER_ADD_MEM_REG = 37, VHOST_USER_REM_MEM_REG = 38, + VHOST_USER_SET_STATUS = 39, + VHOST_USER_GET_STATUS = 40, VHOST_USER_MAX } VhostUserRequest; @@ -1452,6 +1455,43 @@ static int vhost_user_set_u64(struct vhost_dev *dev, int request, uint64_t u64, return 0; } +static int vhost_user_set_status(struct vhost_dev *dev, uint8_t status) +{ + return vhost_user_set_u64(dev, VHOST_USER_SET_STATUS, status, false); +} + +static int vhost_user_get_status(struct vhost_dev *dev, uint8_t *status) +{ + uint64_t value; + int ret; + + ret = vhost_user_get_u64(dev, VHOST_USER_GET_STATUS, &value); + if (ret < 0) { + return ret; + } + *status = value; + + return 0; +} + +static int vhost_user_add_status(struct vhost_dev *dev, uint8_t status) +{ + uint8_t s; + int ret; + + ret = vhost_user_get_status(dev, &s); + if (ret < 0) { + return ret; + } + + if ((s & status) == status) { + return 0; + } + s |= status; + + return vhost_user_set_status(dev, s); +} + static int vhost_user_set_features(struct vhost_dev *dev, uint64_t features) { @@ -1460,6 +1500,7 @@ static int vhost_user_set_features(struct vhost_dev *dev, * backend is actually logging changes */ bool log_enabled = features & (0x1ULL << VHOST_F_LOG_ALL); + int ret; /* * We need to include any extra backend only feature bits that @@ -1467,9 +1508,18 @@ static int vhost_user_set_features(struct vhost_dev *dev, * VHOST_USER_F_PROTOCOL_FEATURES bit for enabling protocol * features. 
*/ - return vhost_user_set_u64(dev, VHOST_USER_SET_FEATURES, + ret = vhost_user_set_u64(dev, VHOST_USER_SET_FEATURES, features | dev->backend_features, log_enabled); + + if (virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_STATUS)) { + if (!ret) { + return vhost_user_add_status(dev, VIRTIO_CONFIG_S_FEATURES_OK); + } + } + + return ret; } static int vhost_user_set_protocol_features(struct vhost_dev *dev, @@ -2615,6 +2665,27 @@ void vhost_user_cleanup(VhostUserState *user) user->chr = NULL; } +static int vhost_user_dev_start(struct vhost_dev *dev, bool started) +{ + if (!virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_STATUS)) { + return 0; + } + + /* Set device status only for last queue pair */ + if (dev->vq_index + dev->nvqs != dev->vq_index_end) { + return 0; + } + + if (started) { + return vhost_user_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE | + VIRTIO_CONFIG_S_DRIVER | + VIRTIO_CONFIG_S_DRIVER_OK); + } else { + return vhost_user_set_status(dev, 0); + } +} + const VhostOps user_ops = { .backend_type = VHOST_BACKEND_TYPE_USER, .vhost_backend_init = vhost_user_backend_init, @@ -2649,4 +2720,5 @@ const VhostOps user_ops = { .vhost_backend_mem_section_filter = vhost_user_mem_section_filter, .vhost_get_inflight_fd = vhost_user_get_inflight_fd, .vhost_set_inflight_fd = vhost_user_set_inflight_fd, + .vhost_dev_start = vhost_user_dev_start, }; -- 2.27.0 ^ permalink raw reply related [flat|nested] 18+ messages in thread