* [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding
@ 2021-02-09 15:37 Eugenio Pérez
  2021-02-09 15:37 ` [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument Eugenio Pérez
                   ` (8 more replies)
  0 siblings, 9 replies; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

This series enables vhost (and vhost-vdpa) notification forwarding for
software-assisted live migration, implemented through a shadow
virtqueue.

The shadow virtqueue is a new method of tracking memory for migration:
instead of relying on the vDPA device's dirty logging capability, SW-assisted
LM intercepts the dataplane, forwarding the descriptors between the VM and
the device.

In this migration mode, qemu offers a new (shadow) vring for the device
to read from and write to, and forwards descriptors between the guest's
vring and the shadow one. When relaying a used buffer, qemu marks the
memory dirty just as it does for plain virtio-net devices. This way, the
device does not need dirty page logging capability.

This RFC series enables only the notification forwarding part, not
buffer forwarding/tracking.
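
For context, the (future) buffer forwarding path would relay used buffers
back to the guest's vring through the regular virtio helpers, which is what
marks the memory dirty. A minimal sketch, where shadow_vq_get_used() is a
hypothetical helper and nothing below is part of this series:

/* Sketch only: buffer relay is NOT implemented in this RFC */
static void shadow_vq_flush_used(VhostShadowVirtqueue *svq, VirtIODevice *vdev)
{
    VirtQueueElement *elem;
    unsigned int used_len;

    /* Pop the next used shadow descriptor (hypothetical helper) */
    while ((elem = shadow_vq_get_used(svq, &used_len))) {
        /*
         * virtqueue_push() writes the guest's used ring through the
         * memory API, so the pages are logged as dirty exactly as with
         * an emulated virtio device.
         */
        virtqueue_push(svq->vq, elem, used_len);
        g_free(elem);
    }
    virtio_notify(vdev, svq->vq);
}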

It is based on the ideas of DPDK SW-assisted LM, as in the DPDK series
https://patchwork.dpdk.org/cover/48370/ , but it will use memory in
qemu's virtual address space for the rings, instead of the guest's.

Comments are welcome.

Thanks!

v2: Fix non matching comments on qapi/net.json (Eric).
    Delete virtio_queue_get_idx and use already available
        virtio_get_queue_index (Jason).
    Do not check the used notification enable flag, since we would need to
        particularize for each virtqueue layout and IOMMU, and it is
        unlikely that this code will be in the final version.
    Handle multiqueue devices.
    Delete trailing dot in previous errpr_setg argument

Main changes from previous RFC [1] are:
* Use QMP to enable; it can be disabled through QMP too.
* Do not use vhost_dev_{enable,disable}_notifiers, since they override
  the ioeventfds set for the VM and could cause race conditions. Never
  modify the irqfd or ioeventfd used by the guest.

[1] https://patchew.org/QEMU/20201120185105.279030-1-eperezma@redhat.com/

Eugenio Pérez (7):
  vhost: Delete trailing dot in errpr_setg argument
  virtio: Add virtio_queue_host_notifier_status
  vhost: Save masked_notifier state
  vhost: Add VhostShadowVirtqueue
  vhost: Add x-vhost-enable-shadow-vq qmp
  vhost: Route guest->host notification through shadow virtqueue
  vhost: Route host->guest notification through shadow virtqueue

 qapi/net.json                      |  22 ++++
 hw/virtio/vhost-shadow-virtqueue.h |  33 +++++
 include/hw/virtio/vhost.h          |   5 +
 include/hw/virtio/virtio.h         |   1 +
 hw/virtio/vhost-shadow-virtqueue.c | 205 +++++++++++++++++++++++++++++
 hw/virtio/vhost.c                  | 159 +++++++++++++++++++++-
 hw/virtio/virtio.c                 |   5 +
 hw/virtio/meson.build              |   2 +-
 8 files changed, 430 insertions(+), 2 deletions(-)
 create mode 100644 hw/virtio/vhost-shadow-virtqueue.h
 create mode 100644 hw/virtio/vhost-shadow-virtqueue.c

-- 
2.27.0



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
@ 2021-02-09 15:37 ` Eugenio Pérez
  2021-02-09 16:24   ` Eric Blake
  2021-02-09 15:37 ` [RFC v2 2/7] virtio: Add virtio_queue_host_notifier_status Eugenio Pérez
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

As error_setg points
---
 hw/virtio/vhost.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 6e17d631f7..1c5b442081 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1358,7 +1358,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
     if (hdev->migration_blocker == NULL) {
         if (!(hdev->features & (0x1ULL << VHOST_F_LOG_ALL))) {
             error_setg(&hdev->migration_blocker,
-                       "Migration disabled: vhost lacks VHOST_F_LOG_ALL feature.");
+                       "Migration disabled: vhost lacks VHOST_F_LOG_ALL feature");
         } else if (vhost_dev_log_is_shared(hdev) && !qemu_memfd_alloc_check()) {
             error_setg(&hdev->migration_blocker,
                        "Migration disabled: failed to allocate shared memory");
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC v2 2/7] virtio: Add virtio_queue_host_notifier_status
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
  2021-02-09 15:37 ` [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument Eugenio Pérez
@ 2021-02-09 15:37 ` Eugenio Pérez
  2021-02-17 12:41     ` Stefan Hajnoczi
  2021-02-09 15:37 ` [RFC v2 3/7] vhost: Save masked_notifier state Eugenio Pérez
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

This allows shadow virtqueue code to assert the queue status before
making changes.
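
For example, patch 6/7 uses it to check that notifications are still going
directly to the vhost device before rerouting them:

    /* Check that notifications are still going directly to vhost dev */
    assert(virtio_queue_host_notifier_status(svq->vq));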

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/virtio.h | 1 +
 hw/virtio/virtio.c         | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index b7ece7a6a8..227cec13a8 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -316,6 +316,7 @@ void virtio_device_release_ioeventfd(VirtIODevice *vdev);
 bool virtio_device_ioeventfd_enabled(VirtIODevice *vdev);
 EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
 void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
+bool virtio_queue_host_notifier_status(const VirtQueue *vq);
 void virtio_queue_host_notifier_read(EventNotifier *n);
 void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
                                                 VirtIOHandleAIOOutput handle_output);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 1fd1917ca0..53473ae4df 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3594,6 +3594,11 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq)
     return &vq->host_notifier;
 }
 
+bool virtio_queue_host_notifier_status(const VirtQueue *vq)
+{
+    return vq->host_notifier_enabled;
+}
+
 void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled)
 {
     vq->host_notifier_enabled = enabled;
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC v2 3/7] vhost: Save masked_notifier state
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
  2021-02-09 15:37 ` [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument Eugenio Pérez
  2021-02-09 15:37 ` [RFC v2 2/7] virtio: Add virtio_queue_host_notifier_status Eugenio Pérez
@ 2021-02-09 15:37 ` Eugenio Pérez
  2021-02-17 12:44     ` Stefan Hajnoczi
  2021-02-09 15:37 ` [RFC v2 4/7] vhost: Add VhostShadowVirtqueue Eugenio Pérez
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

It will be used to recover call eventfd.
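
Patch 7/7 uses the saved state when tearing down the shadow virtqueue,
restoring the guest's call notifier only if the queue was not masked:

    if (!dev->vqs[idx].notifier_is_masked) {
        /* Restore vhost call */
        vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx, false);
    }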

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost.h | 1 +
 hw/virtio/vhost.c         | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 4a8bc75415..ac963bf23d 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -28,6 +28,7 @@ struct vhost_virtqueue {
     unsigned avail_size;
     unsigned long long used_phys;
     unsigned used_size;
+    bool notifier_is_masked;
     EventNotifier masked_notifier;
     struct vhost_dev *dev;
 };
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 1c5b442081..5d0c817b8f 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1518,6 +1518,8 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
     /* should only be called after backend is connected */
     assert(hdev->vhost_ops);
 
+    hdev->vqs[index].notifier_is_masked = mask;
+
     if (mask) {
         assert(vdev->use_guest_notifier_mask);
         file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC v2 4/7] vhost: Add VhostShadowVirtqueue
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
                   ` (2 preceding siblings ...)
  2021-02-09 15:37 ` [RFC v2 3/7] vhost: Save masked_notifier state Eugenio Pérez
@ 2021-02-09 15:37 ` Eugenio Pérez
  2021-02-17 13:01     ` Stefan Hajnoczi
  2021-02-09 15:37 ` [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp Eugenio Pérez
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

The vhost shadow virtqueue is an intermediate hop for virtqueue
notifications and buffers, allowing qemu to track them.
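
When the feature is enabled, one shadow virtqueue per vhost virtqueue is
created later in the series (patch 6/7), roughly:

    dev->shadow_vqs = g_new0(VhostShadowVirtqueue *, dev->nvqs);
    for (idx = 0; idx < dev->nvqs; ++idx) {
        dev->shadow_vqs[idx] = vhost_shadow_vq_new(dev, idx);
        if (unlikely(dev->shadow_vqs[idx] == NULL)) {
            goto err;
        }
    }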

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.h | 24 ++++++++++++
 hw/virtio/vhost-shadow-virtqueue.c | 63 ++++++++++++++++++++++++++++++
 hw/virtio/meson.build              |  2 +-
 3 files changed, 88 insertions(+), 1 deletion(-)
 create mode 100644 hw/virtio/vhost-shadow-virtqueue.h
 create mode 100644 hw/virtio/vhost-shadow-virtqueue.c

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
new file mode 100644
index 0000000000..6cc18d6acb
--- /dev/null
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -0,0 +1,24 @@
+/*
+ * vhost software live migration ring
+ *
+ * SPDX-FileCopyrightText: Red Hat, Inc. 2021
+ * SPDX-FileContributor: Author: Eugenio Pérez <eperezma@redhat.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef VHOST_SHADOW_VIRTQUEUE_H
+#define VHOST_SHADOW_VIRTQUEUE_H
+
+#include "qemu/osdep.h"
+
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/vhost.h"
+
+typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
+
+VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx);
+
+void vhost_shadow_vq_free(VhostShadowVirtqueue *vq);
+
+#endif
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
new file mode 100644
index 0000000000..b5d2645ae0
--- /dev/null
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -0,0 +1,63 @@
+/*
+ * vhost software live migration ring
+ *
+ * SPDX-FileCopyrightText: Red Hat, Inc. 2021
+ * SPDX-FileContributor: Author: Eugenio Pérez <eperezma@redhat.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "hw/virtio/vhost-shadow-virtqueue.h"
+
+#include "qemu/error-report.h"
+#include "qemu/event_notifier.h"
+
+/* Shadow virtqueue to relay notifications */
+typedef struct VhostShadowVirtqueue {
+    /* Shadow kick notifier, sent to vhost */
+    EventNotifier kick_notifier;
+    /* Shadow call notifier, sent to vhost */
+    EventNotifier call_notifier;
+} VhostShadowVirtqueue;
+
+/*
+ * Creates vhost shadow virtqueue, and instruct vhost device to use the shadow
+ * methods and file descriptors.
+ */
+VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
+{
+    g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
+    int r;
+
+    r = event_notifier_init(&svq->kick_notifier, 0);
+    if (r != 0) {
+        error_report("Couldn't create kick event notifier: %s",
+                     strerror(errno));
+        goto err_init_kick_notifier;
+    }
+
+    r = event_notifier_init(&svq->call_notifier, 0);
+    if (r != 0) {
+        error_report("Couldn't create call event notifier: %s",
+                     strerror(errno));
+        goto err_init_call_notifier;
+    }
+
+    return svq;
+
+err_init_call_notifier:
+    event_notifier_cleanup(&svq->kick_notifier);
+
+err_init_kick_notifier:
+    return NULL;
+}
+
+/*
+ * Free the resources of the shadow virtqueue.
+ */
+void vhost_shadow_vq_free(VhostShadowVirtqueue *vq)
+{
+    event_notifier_cleanup(&vq->kick_notifier);
+    event_notifier_cleanup(&vq->call_notifier);
+    g_free(vq);
+}
diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
index fbff9bc9d4..8b5a0225fe 100644
--- a/hw/virtio/meson.build
+++ b/hw/virtio/meson.build
@@ -11,7 +11,7 @@ softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('vhost-stub.c'))
 
 virtio_ss = ss.source_set()
 virtio_ss.add(files('virtio.c'))
-virtio_ss.add(when: 'CONFIG_VHOST', if_true: files('vhost.c', 'vhost-backend.c'))
+virtio_ss.add(when: 'CONFIG_VHOST', if_true: files('vhost.c', 'vhost-backend.c', 'vhost-shadow-virtqueue.c'))
 virtio_ss.add(when: 'CONFIG_VHOST_USER', if_true: files('vhost-user.c'))
 virtio_ss.add(when: 'CONFIG_VHOST_VDPA', if_true: files('vhost-vdpa.c'))
 virtio_ss.add(when: 'CONFIG_VIRTIO_BALLOON', if_true: files('virtio-balloon.c'))
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
                   ` (3 preceding siblings ...)
  2021-02-09 15:37 ` [RFC v2 4/7] vhost: Add VhostShadowVirtqueue Eugenio Pérez
@ 2021-02-09 15:37 ` Eugenio Pérez
  2021-02-17 15:26     ` Stefan Hajnoczi
  2021-02-09 15:37 ` [RFC v2 6/7] vhost: Route guest->host notification through shadow virtqueue Eugenio Pérez
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

Command to enable shadow virtqueue looks like:

{ "execute": "x-vhost-enable-shadow-vq", "arguments": { "name": "dev0", "enable": true } }

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 qapi/net.json     | 22 ++++++++++++++++++++++
 hw/virtio/vhost.c |  6 ++++++
 2 files changed, 28 insertions(+)

diff --git a/qapi/net.json b/qapi/net.json
index c31748c87f..a1cdffb0f9 100644
--- a/qapi/net.json
+++ b/qapi/net.json
@@ -77,6 +77,28 @@
 ##
 { 'command': 'netdev_del', 'data': {'id': 'str'} }
 
+##
+# @x-vhost-enable-shadow-vq:
+#
+# Use vhost shadow virtqueue.
+#
+# @name: the device name of the virtual network adapter
+#
+# @enable: true to use the alternate shadow VQ notification path
+#
+# Returns: Error if failure, or 'no error' for success
+#
+# Since: 6.0
+#
+# Example:
+#
+# -> { "execute": "x-vhost-enable-shadow-vq", "arguments": {"enable": true} }
+#
+##
+{ 'command': 'x-vhost-enable-shadow-vq',
+  'data': {'name': 'str', 'enable': 'bool'},
+  'if': 'defined(CONFIG_VHOST_KERNEL)' }
+
 ##
 # @NetLegacyNicOptions:
 #
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 5d0c817b8f..8fbf14385e 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -15,6 +15,7 @@
 
 #include "qemu/osdep.h"
 #include "qapi/error.h"
+#include "qapi/qapi-commands-net.h"
 #include "hw/virtio/vhost.h"
 #include "qemu/atomic.h"
 #include "qemu/range.h"
@@ -1833,3 +1834,8 @@ int vhost_net_set_backend(struct vhost_dev *hdev,
 
     return -1;
 }
+
+void qmp_x_vhost_enable_shadow_vq(const char *name, bool enable, Error **errp)
+{
+    error_setg(errp, "Shadow virtqueue still not implemented");
+}
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC v2 6/7] vhost: Route guest->host notification through shadow virtqueue
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
                   ` (4 preceding siblings ...)
  2021-02-09 15:37 ` [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp Eugenio Pérez
@ 2021-02-09 15:37 ` Eugenio Pérez
  2021-02-17 16:56     ` Stefan Hajnoczi
  2021-02-09 15:37 ` [RFC v2 7/7] vhost: Route host->guest " Eugenio Pérez
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

This patch routes guest->host notifications through the shadow
virtqueue: the guest kick arrives on the virtqueue's host notifier,
vhost_handle_guest_kick() relays it to the shadow kick_notifier, and that
eventfd is the one handed to the vhost backend via vhost_set_vring_kick.
Shadow virtqueue notification forwarding is disabled when the vhost_dev
stops.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.h |   7 ++
 include/hw/virtio/vhost.h          |   4 +
 hw/virtio/vhost-shadow-virtqueue.c |  97 ++++++++++++++++++-
 hw/virtio/vhost.c                  | 148 ++++++++++++++++++++++++++++-
 4 files changed, 253 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index 6cc18d6acb..c45035c4b7 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -17,6 +17,13 @@
 
 typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
 
+bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
+                               unsigned idx,
+                               VhostShadowVirtqueue *svq);
+void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
+                              unsigned idx,
+                              VhostShadowVirtqueue *svq);
+
 VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx);
 
 void vhost_shadow_vq_free(VhostShadowVirtqueue *vq);
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index ac963bf23d..884818b109 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -55,6 +55,8 @@ struct vhost_iommu {
     QLIST_ENTRY(vhost_iommu) iommu_next;
 };
 
+typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
+
 typedef struct VhostDevConfigOps {
     /* Vhost device config space changed callback
      */
@@ -83,7 +85,9 @@ struct vhost_dev {
     uint64_t backend_cap;
     bool started;
     bool log_enabled;
+    bool sw_lm_enabled;
     uint64_t log_size;
+    VhostShadowVirtqueue **shadow_vqs;
     Error *migration_blocker;
     const VhostOps *vhost_ops;
     void *opaque;
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index b5d2645ae0..01f282d434 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -8,9 +8,12 @@
  */
 
 #include "hw/virtio/vhost-shadow-virtqueue.h"
+#include "hw/virtio/vhost.h"
+
+#include "standard-headers/linux/vhost_types.h"
 
 #include "qemu/error-report.h"
-#include "qemu/event_notifier.h"
+#include "qemu/main-loop.h"
 
 /* Shadow virtqueue to relay notifications */
 typedef struct VhostShadowVirtqueue {
@@ -18,8 +21,95 @@ typedef struct VhostShadowVirtqueue {
     EventNotifier kick_notifier;
     /* Shadow call notifier, sent to vhost */
     EventNotifier call_notifier;
+
+    /* Borrowed virtqueue's guest to host notifier. */
+    EventNotifier host_notifier;
+
+    /* Virtio queue shadowing */
+    VirtQueue *vq;
 } VhostShadowVirtqueue;
 
+/* Forward guest notifications */
+static void vhost_handle_guest_kick(EventNotifier *n)
+{
+    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
+                                             host_notifier);
+
+    if (event_notifier_test_and_clear(n)) {
+        event_notifier_set(&svq->kick_notifier);
+    }
+}
+
+/*
+ * Start shadow virtqueue operation.
+ * @dev vhost device
+ * @hidx vhost virtqueue index
+ * @svq Shadow Virtqueue
+ *
+ * Run in RCU context
+ */
+bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
+                               unsigned idx,
+                               VhostShadowVirtqueue *svq)
+{
+    EventNotifier *vq_host_notifier = virtio_queue_get_host_notifier(svq->vq);
+    struct vhost_vring_file kick_file = {
+        .index = idx,
+        .fd = event_notifier_get_fd(&svq->kick_notifier),
+    };
+    int r;
+
+    /* Check that notifications are still going directly to vhost dev */
+    assert(virtio_queue_host_notifier_status(svq->vq));
+
+    event_notifier_init_fd(&svq->host_notifier,
+                           event_notifier_get_fd(vq_host_notifier));
+    event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);
+
+    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
+    if (unlikely(r != 0)) {
+        error_report("Couldn't set kick fd: %s", strerror(errno));
+        goto err_set_vring_kick;
+    }
+
+    /* Check for pending notifications from the guest */
+    vhost_handle_guest_kick(&svq->host_notifier);
+
+    return true;
+
+err_set_vring_kick:
+    event_notifier_set_handler(&svq->host_notifier, NULL);
+
+    return false;
+}
+
+/*
+ * Stop shadow virtqueue operation.
+ * @dev vhost device
+ * @idx vhost queue index
+ * @svq Shadow Virtqueue
+ *
+ * Run in RCU context
+ */
+void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
+                              unsigned idx,
+                              VhostShadowVirtqueue *svq)
+{
+    EventNotifier *vq_host_notifier = virtio_queue_get_host_notifier(svq->vq);
+    struct vhost_vring_file kick_file = {
+        .index = idx,
+        .fd = event_notifier_get_fd(vq_host_notifier),
+    };
+    int r;
+
+    /* Restore vhost kick */
+    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
+    /* Cannot do a lot of things */
+    assert(r == 0);
+
+    event_notifier_set_handler(&svq->host_notifier, NULL);
+}
+
 /*
  * Creates vhost shadow virtqueue, and instruct vhost device to use the shadow
  * methods and file descriptors.
@@ -27,8 +117,11 @@ typedef struct VhostShadowVirtqueue {
 VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
 {
     g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
+    int vq_idx = dev->vq_index + idx;
     int r;
 
+    svq->vq = virtio_get_queue(dev->vdev, vq_idx);
+
     r = event_notifier_init(&svq->kick_notifier, 0);
     if (r != 0) {
         error_report("Couldn't create kick event notifier: %s",
@@ -43,7 +136,7 @@ VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
         goto err_init_call_notifier;
     }
 
-    return svq;
+    return g_steal_pointer(&svq);
 
 err_init_call_notifier:
     event_notifier_cleanup(&svq->kick_notifier);
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 8fbf14385e..9d4728e545 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -25,6 +25,7 @@
 #include "exec/address-spaces.h"
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/virtio-access.h"
+#include "hw/virtio/vhost-shadow-virtqueue.h"
 #include "migration/blocker.h"
 #include "migration/qemu-file-types.h"
 #include "sysemu/dma.h"
@@ -937,6 +938,84 @@ static void vhost_log_global_stop(MemoryListener *listener)
     }
 }
 
+static int vhost_sw_live_migration_stop(struct vhost_dev *dev)
+{
+    int idx;
+
+    WITH_RCU_READ_LOCK_GUARD() {
+        dev->sw_lm_enabled = false;
+
+        for (idx = 0; idx < dev->nvqs; ++idx) {
+            vhost_shadow_vq_stop_rcu(dev, idx, dev->shadow_vqs[idx]);
+        }
+    }
+
+    for (idx = 0; idx < dev->nvqs; ++idx) {
+        vhost_shadow_vq_free(dev->shadow_vqs[idx]);
+    }
+
+    g_free(dev->shadow_vqs);
+    dev->shadow_vqs = NULL;
+    return 0;
+}
+
+static int vhost_sw_live_migration_start(struct vhost_dev *dev)
+{
+    int idx;
+
+    dev->shadow_vqs = g_new0(VhostShadowVirtqueue *, dev->nvqs);
+    for (idx = 0; idx < dev->nvqs; ++idx) {
+        dev->shadow_vqs[idx] = vhost_shadow_vq_new(dev, idx);
+        if (unlikely(dev->shadow_vqs[idx] == NULL)) {
+            goto err;
+        }
+    }
+
+    WITH_RCU_READ_LOCK_GUARD() {
+        for (idx = 0; idx < dev->nvqs; ++idx) {
+            bool ok = vhost_shadow_vq_start_rcu(dev, idx,
+                                                dev->shadow_vqs[idx]);
+
+            if (!ok) {
+                int stop_idx = idx;
+
+                while (--stop_idx >= 0) {
+                    vhost_shadow_vq_stop_rcu(dev, idx,
+                                             dev->shadow_vqs[stop_idx]);
+                }
+
+                goto err;
+            }
+        }
+    }
+
+    dev->sw_lm_enabled = true;
+    return 0;
+
+err:
+    for (; idx >= 0; --idx) {
+        vhost_shadow_vq_free(dev->shadow_vqs[idx]);
+    }
+    g_free(dev->shadow_vqs[idx]);
+
+    return -1;
+}
+
+static int vhost_sw_live_migration_enable(struct vhost_dev *dev,
+                                          bool enable_lm)
+{
+    int r;
+
+    if (enable_lm == dev->sw_lm_enabled) {
+        return 0;
+    }
+
+    r = enable_lm ? vhost_sw_live_migration_start(dev)
+                  : vhost_sw_live_migration_stop(dev);
+
+    return r;
+}
+
 static void vhost_log_start(MemoryListener *listener,
                             MemoryRegionSection *section,
                             int old, int new)
@@ -1381,6 +1460,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
     hdev->log = NULL;
     hdev->log_size = 0;
     hdev->log_enabled = false;
+    hdev->sw_lm_enabled = false;
     hdev->started = false;
     memory_listener_register(&hdev->memory_listener, &address_space_memory);
     QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
@@ -1486,6 +1566,10 @@ void vhost_dev_disable_notifiers(struct vhost_dev *hdev, VirtIODevice *vdev)
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
     int i, r;
 
+    if (hdev->sw_lm_enabled) {
+        vhost_sw_live_migration_enable(hdev, false);
+    }
+
     for (i = 0; i < hdev->nvqs; ++i) {
         r = virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), hdev->vq_index + i,
                                          false);
@@ -1808,6 +1892,11 @@ void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev)
         hdev->vhost_ops->vhost_dev_start(hdev, false);
     }
     for (i = 0; i < hdev->nvqs; ++i) {
+        if (hdev->sw_lm_enabled) {
+            vhost_shadow_vq_stop_rcu(hdev, i, hdev->shadow_vqs[i]);
+            vhost_shadow_vq_free(hdev->shadow_vqs[i]);
+        }
+
         vhost_virtqueue_stop(hdev,
                              vdev,
                              hdev->vqs + i,
@@ -1821,6 +1910,8 @@ void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev)
         memory_listener_unregister(&hdev->iommu_listener);
     }
     vhost_log_put(hdev, true);
+    g_free(hdev->shadow_vqs);
+    hdev->sw_lm_enabled = false;
     hdev->started = false;
     hdev->vdev = NULL;
 }
@@ -1837,5 +1928,60 @@ int vhost_net_set_backend(struct vhost_dev *hdev,
 
 void qmp_x_vhost_enable_shadow_vq(const char *name, bool enable, Error **errp)
 {
-    error_setg(errp, "Shadow virtqueue still not implemented");
+    struct vhost_dev *hdev, *hdev_err;
+    VirtIODevice *vdev;
+    const char *err_cause = NULL;
+    int r;
+    ErrorClass err_class = ERROR_CLASS_GENERIC_ERROR;
+
+    QLIST_FOREACH(hdev, &vhost_devices, entry) {
+        if (hdev->vdev && 0 == strcmp(hdev->vdev->name, name)) {
+            vdev = hdev->vdev;
+            break;
+        }
+    }
+
+    if (!hdev) {
+        err_class = ERROR_CLASS_DEVICE_NOT_FOUND;
+        err_cause = "Device not found";
+        goto not_found_err;
+    }
+
+    for ( ; hdev; hdev = QLIST_NEXT(hdev, entry)) {
+        if (vdev != hdev->vdev) {
+            continue;
+        }
+
+        if (!hdev->started) {
+            err_cause = "Device is not started";
+            goto err;
+        }
+
+        r = vhost_sw_live_migration_enable(hdev, enable);
+        if (unlikely(r)) {
+            err_cause = "Error enabling (see monitor)";
+            goto err;
+        }
+    }
+
+    return;
+
+err:
+    QLIST_FOREACH(hdev_err, &vhost_devices, entry) {
+        if (hdev_err == hdev) {
+            break;
+        }
+
+        if (vdev != hdev->vdev) {
+            continue;
+        }
+
+        vhost_sw_live_migration_enable(hdev, !enable);
+    }
+
+not_found_err:
+    if (err_cause) {
+        error_set(errp, err_class,
+                  "Can't enable shadow vq on %s: %s", name, err_cause);
+    }
 }
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC v2 7/7] vhost: Route host->guest notification through shadow virtqueue
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
                   ` (5 preceding siblings ...)
  2021-02-09 15:37 ` [RFC v2 6/7] vhost: Route guest->host notification through shadow virtqueue Eugenio Pérez
@ 2021-02-09 15:37 ` Eugenio Pérez
  2021-02-17 17:24     ` Stefan Hajnoczi
  2021-03-01  6:24     ` Jason Wang
  2021-02-09 15:54 ` [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Perez Martin
  2021-02-11 11:12 ` no-reply
  8 siblings, 2 replies; 35+ messages in thread
From: Eugenio Pérez @ 2021-02-09 15:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.h |  2 ++
 hw/virtio/vhost-shadow-virtqueue.c | 49 ++++++++++++++++++++++++++++++
 hw/virtio/vhost.c                  |  5 ++-
 3 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index c45035c4b7..210133434c 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -17,6 +17,8 @@
 
 typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
 
+EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq);
+
 bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
                                unsigned idx,
                                VhostShadowVirtqueue *svq);
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 01f282d434..61d98ae652 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -24,6 +24,8 @@ typedef struct VhostShadowVirtqueue {
 
     /* Borrowed virtqueue's guest to host notifier. */
     EventNotifier host_notifier;
+    /* Host to guest notifier */
+    EventNotifier *guest_notifier;
 
     /* Virtio queue shadowing */
     VirtQueue *vq;
@@ -40,6 +42,26 @@ static void vhost_handle_guest_kick(EventNotifier *n)
     }
 }
 
+/* Forward vhost notifications */
+static void vhost_handle_call(EventNotifier *n)
+{
+    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
+                                             call_notifier);
+
+    if (event_notifier_test_and_clear(n)) {
+        event_notifier_set(svq->guest_notifier);
+    }
+}
+
+/*
+ * Get the vhost call notifier of the shadow vq
+ * @vq Shadow virtqueue
+ */
+EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq)
+{
+    return &vq->call_notifier;
+}
+
 /*
  * Start shadow virtqueue operation.
  * @dev vhost device
@@ -57,6 +79,10 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
         .index = idx,
         .fd = event_notifier_get_fd(&svq->kick_notifier),
     };
+    struct vhost_vring_file call_file = {
+        .index = idx,
+        .fd = event_notifier_get_fd(&svq->call_notifier),
+    };
     int r;
 
     /* Check that notifications are still going directly to vhost dev */
@@ -66,6 +92,7 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
                            event_notifier_get_fd(vq_host_notifier));
     event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);
 
+    svq->guest_notifier = virtio_queue_get_guest_notifier(svq->vq);
     r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
     if (unlikely(r != 0)) {
         error_report("Couldn't set kick fd: %s", strerror(errno));
@@ -75,8 +102,19 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
     /* Check for pending notifications from the guest */
     vhost_handle_guest_kick(&svq->host_notifier);
 
+    r = dev->vhost_ops->vhost_set_vring_call(dev, &call_file);
+    if (r != 0) {
+        error_report("Couldn't set call fd: %s", strerror(errno));
+        goto err_set_vring_call;
+    }
+
     return true;
 
+err_set_vring_call:
+    kick_file.fd = event_notifier_get_fd(vq_host_notifier);
+    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
+    assert(r == 0);
+
 err_set_vring_kick:
     event_notifier_set_handler(&svq->host_notifier, NULL);
 
@@ -108,6 +146,16 @@ void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
     assert(r == 0);
 
     event_notifier_set_handler(&svq->host_notifier, NULL);
+
+    if (!dev->vqs[idx].notifier_is_masked) {
+        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
+
+        /* Restore vhost call */
+        vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx, false);
+
+        /* Check for pending calls */
+        vhost_handle_call(e);
+    }
 }
 
 /*
@@ -136,6 +184,7 @@ VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
         goto err_init_call_notifier;
     }
 
+    event_notifier_set_handler(&svq->call_notifier, vhost_handle_call);
     return g_steal_pointer(&svq);
 
 err_init_call_notifier:
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 9d4728e545..0dc95679e9 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -975,7 +975,6 @@ static int vhost_sw_live_migration_start(struct vhost_dev *dev)
         for (idx = 0; idx < dev->nvqs; ++idx) {
             bool ok = vhost_shadow_vq_start_rcu(dev, idx,
                                                 dev->shadow_vqs[idx]);
-
             if (!ok) {
                 int stop_idx = idx;
 
@@ -1608,6 +1607,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
     if (mask) {
         assert(vdev->use_guest_notifier_mask);
         file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
+    } else if (hdev->sw_lm_enabled) {
+        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
+        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
+        file.fd = event_notifier_get_fd(e);
     } else {
         file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
     }
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
                   ` (6 preceding siblings ...)
  2021-02-09 15:37 ` [RFC v2 7/7] vhost: Route host->guest " Eugenio Pérez
@ 2021-02-09 15:54 ` Eugenio Perez Martin
  2021-02-11 11:12 ` no-reply
  8 siblings, 0 replies; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-09 15:54 UTC (permalink / raw)
  To: qemu-level
  Cc: Parav Pandit, Juan Quintela, Jason Wang, Michael S. Tsirkin,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Stefano Garzarella,
	Michael Lilja, Jim Harford, Rob Miller

Jason,

Correct me if I'm wrong. Based on this series, is your proposal to
move the touched functions in vhost.c [1] to vhost_ops, so they
become backend-dependent operations, and then to develop a shadow vq
backend that intercepts them?

Other code would also move to that new backend, like the setup/teardown.

[1] vhost_virtqueue_mask and vhost_dev_disable_notifiers
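
A rough sketch of that idea, with callback names invented here just for
illustration (not something proposed verbatim in this thread), would be to
grow VhostOps with entries like:

    /* Hypothetical new VhostOps callbacks; the names are made up for this sketch */
    int (*vhost_virtqueue_mask)(struct vhost_dev *dev, VirtIODevice *vdev,
                                int n, bool mask);
    int (*vhost_dev_disable_notifiers)(struct vhost_dev *dev,
                                       VirtIODevice *vdev);

so a shadow-vq backend could provide its own implementations instead of
patching the generic vhost.c paths.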



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument
  2021-02-09 15:37 ` [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument Eugenio Pérez
@ 2021-02-09 16:24   ` Eric Blake
  2021-02-09 18:11     ` Eugenio Perez Martin
  0 siblings, 1 reply; 35+ messages in thread
From: Eric Blake @ 2021-02-09 16:24 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Rob Miller,
	Michael Lilja, Jim Harford, Stefano Garzarella

On 2/9/21 9:37 AM, Eugenio Pérez wrote:
> As error_setg points

Incomplete sentence?

Missing Signed-off-by.

> ---
>  hw/virtio/vhost.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 6e17d631f7..1c5b442081 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -1358,7 +1358,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>      if (hdev->migration_blocker == NULL) {
>          if (!(hdev->features & (0x1ULL << VHOST_F_LOG_ALL))) {
>              error_setg(&hdev->migration_blocker,
> -                       "Migration disabled: vhost lacks VHOST_F_LOG_ALL feature.");
> +                       "Migration disabled: vhost lacks VHOST_F_LOG_ALL feature");
>          } else if (vhost_dev_log_is_shared(hdev) && !qemu_memfd_alloc_check()) {
>              error_setg(&hdev->migration_blocker,
>                         "Migration disabled: failed to allocate shared memory");
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument
  2021-02-09 16:24   ` Eric Blake
@ 2021-02-09 18:11     ` Eugenio Perez Martin
  2021-02-11 16:40         ` Stefano Garzarella
  0 siblings, 1 reply; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-09 18:11 UTC (permalink / raw)
  To: Eric Blake
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-level, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Stefan Hajnoczi, Eli Cohen,
	virtualization, Michael Lilja, Jim Harford, Stefano Garzarella

On Tue, Feb 9, 2021 at 5:25 PM Eric Blake <eblake@redhat.com> wrote:
>
> On 2/9/21 9:37 AM, Eugenio Pérez wrote:
> > As error_setg points
>
> Incomplete sentence?
>
> Missing Signed-off-by.
>

Sorry, I should have paid more attention.

Maybe it is better to send this though qemu-trivial, so it does not
mess with this series?

Thanks!

> > ---
> >  hw/virtio/vhost.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 6e17d631f7..1c5b442081 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -1358,7 +1358,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >      if (hdev->migration_blocker == NULL) {
> >          if (!(hdev->features & (0x1ULL << VHOST_F_LOG_ALL))) {
> >              error_setg(&hdev->migration_blocker,
> > -                       "Migration disabled: vhost lacks VHOST_F_LOG_ALL feature.");
> > +                       "Migration disabled: vhost lacks VHOST_F_LOG_ALL feature");
> >          } else if (vhost_dev_log_is_shared(hdev) && !qemu_memfd_alloc_check()) {
> >              error_setg(&hdev->migration_blocker,
> >                         "Migration disabled: failed to allocate shared memory");
> >
>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org
>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding
  2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
                   ` (7 preceding siblings ...)
  2021-02-09 15:54 ` [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Perez Martin
@ 2021-02-11 11:12 ` no-reply
  8 siblings, 0 replies; 35+ messages in thread
From: no-reply @ 2021-02-11 11:12 UTC (permalink / raw)
  To: eperezma
  Cc: parav, quintela, jasowang, mst, qemu-devel, armbru, sgarzare,
	hanand, xiao.w.wang, stefanha, eli, virtualization, ml,
	jim.harford, rob.miller

Patchew URL: https://patchew.org/QEMU/20210209153757.1653598-1-eperezma@redhat.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210209153757.1653598-1-eperezma@redhat.com
Subject: [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]         patchew/1612868085-72809-1-git-send-email-bmeng.cn@gmail.com -> patchew/1612868085-72809-1-git-send-email-bmeng.cn@gmail.com
 * [new tag]         patchew/1612950879-49023-1-git-send-email-bmeng.cn@gmail.com -> patchew/1612950879-49023-1-git-send-email-bmeng.cn@gmail.com
 * [new tag]         patchew/1612956141-63712-1-git-send-email-bmeng.cn@gmail.com -> patchew/1612956141-63712-1-git-send-email-bmeng.cn@gmail.com
Switched to a new branch 'test'
0c0f4f0 vhost: Route host->guest notification through shadow virtqueue
ad1a0b1 vhost: Route guest->host notification through shadow virtqueue
abfe6ae vhost: Add x-vhost-enable-shadow-vq qmp
b3b7544 vhost: Add VhostShadowVirtqueue
cc4e481 vhost: Save masked_notifier state
5237982 virtio: Add virtio_queue_host_notifier_status
fd946d9 vhost: Delete trailing dot in errpr_setg argument

=== OUTPUT BEGIN ===
1/7 Checking commit fd946d9a576c (vhost: Delete trailing dot in errpr_setg argument)
ERROR: Missing Signed-off-by: line(s)

total: 1 errors, 0 warnings, 8 lines checked

Patch 1/7 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

2/7 Checking commit 523798209c6a (virtio: Add virtio_queue_host_notifier_status)
3/7 Checking commit cc4e48155cfa (vhost: Save masked_notifier state)
4/7 Checking commit b3b75447560c (vhost: Add VhostShadowVirtqueue)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#32: 
new file mode 100644

total: 0 errors, 1 warnings, 95 lines checked

Patch 4/7 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
5/7 Checking commit abfe6ae82846 (vhost: Add x-vhost-enable-shadow-vq qmp)
6/7 Checking commit ad1a0b1c0b05 (vhost: Route guest->host notification through shadow virtqueue)
7/7 Checking commit 0c0f4f08a1ef (vhost: Route host->guest notification through shadow virtqueue)
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/20210209153757.1653598-1-eperezma@redhat.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument
  2021-02-09 18:11     ` Eugenio Perez Martin
@ 2021-02-11 16:40         ` Stefano Garzarella
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Garzarella @ 2021-02-11 16:40 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, Juan Quintela,
	qemu-level, Markus Armbruster, Harpreet Singh Anand, Xiao W Wang,
	Stefan Hajnoczi, Eli Cohen, virtualization, Michael Lilja,
	Jim Harford, Rob Miller

On Tue, Feb 09, 2021 at 07:11:41PM +0100, Eugenio Perez Martin wrote:
>On Tue, Feb 9, 2021 at 5:25 PM Eric Blake <eblake@redhat.com> wrote:
>>
>> On 2/9/21 9:37 AM, Eugenio Pérez wrote:
>> > As error_setg points
>>
>> Incomplete sentence?
>>
>> Missing Signed-off-by.
>>
>
>Sorry, I should have paid more attention.
>
>Maybe it is better to send this though qemu-trivial, so it does not
>mess with this series?

Yes, I agree that it can go regardless of this series.

Thanks,
Stefano



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 2/7] virtio: Add virtio_queue_host_notifier_status
  2021-02-09 15:37 ` [RFC v2 2/7] virtio: Add virtio_queue_host_notifier_status Eugenio Pérez
@ 2021-02-17 12:41     ` Stefan Hajnoczi
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Hajnoczi @ 2021-02-17 12:41 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-devel, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Tue, Feb 09, 2021 at 04:37:52PM +0100, Eugenio Pérez wrote:
> This allows shadow virtqueue code to assert the queue status before
> making changes.
> 
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  include/hw/virtio/virtio.h | 1 +
>  hw/virtio/virtio.c         | 5 +++++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index b7ece7a6a8..227cec13a8 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -316,6 +316,7 @@ void virtio_device_release_ioeventfd(VirtIODevice *vdev);
>  bool virtio_device_ioeventfd_enabled(VirtIODevice *vdev);
>  EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
>  void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
> +bool virtio_queue_host_notifier_status(const VirtQueue *vq);
>  void virtio_queue_host_notifier_read(EventNotifier *n);
>  void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
>                                                  VirtIOHandleAIOOutput handle_output);
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 1fd1917ca0..53473ae4df 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -3594,6 +3594,11 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq)
>      return &vq->host_notifier;
>  }
>  
> +bool virtio_queue_host_notifier_status(const VirtQueue *vq)
> +{
> +    return vq->host_notifier_enabled;
> +}
> +
>  void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled)

Since there is a virtio_queue_set_host_notifier_enabled() I suggest
calling this function virtio_queue_is_host_notifier_enabled() or
virtio_queue_get_host_notifier_enabled(). That way it's clear they
set/get the same thing.

Stefan


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 3/7] vhost: Save masked_notifier state
  2021-02-09 15:37 ` [RFC v2 3/7] vhost: Save masked_notifier state Eugenio Pérez
@ 2021-02-17 12:44     ` Stefan Hajnoczi
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Hajnoczi @ 2021-02-17 12:44 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-devel, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Tue, Feb 09, 2021 at 04:37:53PM +0100, Eugenio Pérez wrote:
> It will be used to recover call eventfd.
> 
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  include/hw/virtio/vhost.h | 1 +
>  hw/virtio/vhost.c         | 2 ++
>  2 files changed, 3 insertions(+)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 4/7] vhost: Add VhostShadowVirtqueue
  2021-02-09 15:37 ` [RFC v2 4/7] vhost: Add VhostShadowVirtqueue Eugenio Pérez
@ 2021-02-17 13:01     ` Stefan Hajnoczi
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Hajnoczi @ 2021-02-17 13:01 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-devel, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Tue, Feb 09, 2021 at 04:37:54PM +0100, Eugenio Pérez wrote:
> +/*
> + * Creates vhost shadow virtqueue, and instruct vhost device to use the shadow
> + * methods and file descriptors.
> + */
> +VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
> +{
> +    g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
> +    int r;
> +
> +    r = event_notifier_init(&svq->kick_notifier, 0);
> +    if (r != 0) {
> +        error_report("Couldn't create kick event notifier: %s",
> +                     strerror(errno));
> +        goto err_init_kick_notifier;
> +    }
> +
> +    r = event_notifier_init(&svq->call_notifier, 0);
> +    if (r != 0) {
> +        error_report("Couldn't create call event notifier: %s",
> +                     strerror(errno));
> +        goto err_init_call_notifier;
> +    }
> +
> +    return svq;

Use-after-free due to g_autofree. I think this should be:

  return g_steal_pointer(&svq)

https://developer.gnome.org/glib/stable/glib-Memory-Allocation.html#g-steal-pointer
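
With that change (which patch 6/7 in this series already applies when it
touches this function), the tail of vhost_shadow_vq_new() would read:

    return g_steal_pointer(&svq);

err_init_call_notifier:
    event_notifier_cleanup(&svq->kick_notifier);

err_init_kick_notifier:
    return NULL;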


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp
  2021-02-09 15:37 ` [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp Eugenio Pérez
@ 2021-02-17 15:26     ` Stefan Hajnoczi
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Hajnoczi @ 2021-02-17 15:26 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-devel, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Tue, Feb 09, 2021 at 04:37:55PM +0100, Eugenio Pérez wrote:
> diff --git a/qapi/net.json b/qapi/net.json
> index c31748c87f..a1cdffb0f9 100644
> --- a/qapi/net.json
> +++ b/qapi/net.json
> @@ -77,6 +77,28 @@
>  ##
>  { 'command': 'netdev_del', 'data': {'id': 'str'} }
>  
> +##
> +# @x-vhost-enable-shadow-vq:
> +#
> +# Use vhost shadow virtqueue.

Is this command for testing only or do you expect it to be invoked by
libvirt in production? I think the shadow virtqueue can be an internal
QEMU feature that is hidden from management tools.

> +#
> +# @name: the device name of the virtual network adapter
> +#
> +# @enable: true to use he alternate shadow VQ notification path
> +#
> +# Returns: Error if failure, or 'no error' for success
> +#
> +# Since: 6.0

Is this a generic feature for any vhost or vDPA device? If yes, please
replace "virtual network adapter" in the doc comment.

Does this only apply to vhost-net devices? If so, please put "vhost-net"
in the name since there are other non-net vhost devices.

> +#
> +# Example:
> +#
> +# -> { "execute": "x-vhost-enable-shadow-vq", "arguments": {"enable": true} }

Missing "name" field?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 6/7] vhost: Route guest->host notification through shadow virtqueue
  2021-02-09 15:37 ` [RFC v2 6/7] vhost: Route guest->host notification through shadow virtqueue Eugenio Pérez
@ 2021-02-17 16:56     ` Stefan Hajnoczi
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Hajnoczi @ 2021-02-17 16:56 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-devel, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Tue, Feb 09, 2021 at 04:37:56PM +0100, Eugenio Pérez wrote:
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index ac963bf23d..884818b109 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -55,6 +55,8 @@ struct vhost_iommu {
>      QLIST_ENTRY(vhost_iommu) iommu_next;
>  };
>  
> +typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;

There is already another declaration in
hw/virtio/vhost-shadow-virtqueue.h. Should vhost.h include
vhost-shadow-virtqueue.h?

This is becoming confusing:
1. typedef in vhost-shadow-virtqueue.h
2. typedef in vhost.h
3. typedef in vhost-shadow-virtqueue.c

3 typedefs is a bit much :). I suggest:
1. typedef in vhost-shadow-virtqueue.h
2. #include "vhost-shadow-virtqueue.h" in vhost.h
3. struct VhostShadowVirtqueue (no typedef redefinition) in vhost-shadow-virtqueue.c

That should make the code easier to understand, navigate with tools, and
if a change is made (e.g. renaming the struct) then it won't be
necessary to change things in 3 places.
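
Roughly this layout (sketch only, contents abbreviated):

    /* hw/virtio/vhost-shadow-virtqueue.h */
    typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;

    /* include/hw/virtio/vhost.h */
    #include "hw/virtio/vhost-shadow-virtqueue.h"
    /* struct vhost_dev can then use VhostShadowVirtqueue ** directly */

    /* hw/virtio/vhost-shadow-virtqueue.c */
    #include "hw/virtio/vhost-shadow-virtqueue.h"

    struct VhostShadowVirtqueue {
        /* the definition lives only here; everyone else sees the
         * opaque typedef */
        EventNotifier kick_notifier;
        EventNotifier call_notifier;
    };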

> +
>  typedef struct VhostDevConfigOps {
>      /* Vhost device config space changed callback
>       */
> @@ -83,7 +85,9 @@ struct vhost_dev {
>      uint64_t backend_cap;
>      bool started;
>      bool log_enabled;
> +    bool sw_lm_enabled;

Rename to shadow_vqs_enabled?

>      uint64_t log_size;
> +    VhostShadowVirtqueue **shadow_vqs;
>      Error *migration_blocker;
>      const VhostOps *vhost_ops;
>      void *opaque;
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index b5d2645ae0..01f282d434 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -8,9 +8,12 @@
>   */
>  
>  #include "hw/virtio/vhost-shadow-virtqueue.h"
> +#include "hw/virtio/vhost.h"
> +
> +#include "standard-headers/linux/vhost_types.h"
>  
>  #include "qemu/error-report.h"
> -#include "qemu/event_notifier.h"
> +#include "qemu/main-loop.h"
>  
>  /* Shadow virtqueue to relay notifications */
>  typedef struct VhostShadowVirtqueue {
> @@ -18,8 +21,95 @@ typedef struct VhostShadowVirtqueue {
>      EventNotifier kick_notifier;
>      /* Shadow call notifier, sent to vhost */
>      EventNotifier call_notifier;
> +
> +    /* Borrowed virtqueue's guest to host notifier. */
> +    EventNotifier host_notifier;

The purpose of these EventNotifier fields is not completely clear to me.
Here is how I interpret the comments:

1. The vhost device is set up to use kick_notifier/call_notifier when
   the shadow vq is enabled.

2. host_notifier is the guest-visible vq's host notifier. This is set up
   when the shadow vq is enabled.

But I'm not confident this is correct. Maybe you could expand the
comment to make it clear what is happening?

> +
> +    /* Virtio queue shadowing */
> +    VirtQueue *vq;
>  } VhostShadowVirtqueue;
>  
> +/* Forward guest notifications */
> +static void vhost_handle_guest_kick(EventNotifier *n)
> +{
> +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> +                                             host_notifier);
> +
> +    if (event_notifier_test_and_clear(n)) {
> +        event_notifier_set(&svq->kick_notifier);
> +    }
> +}

This function looks incomplete. You can make review easier by indicating
the state of the code:

  /* TODO pop requests from vq and put them onto vhost vq */

I'm not sure why it's useful to include this incomplete function in the
patch. Maybe the host notifier is already intercepted here but the
guest-visible vq is still mapped directly to the vhost vq, so this works?
An explanation in comments or the commit description would be helpful.

> +
> +/*
> + * Start shadow virtqueue operation.
> + * @dev vhost device
> + * @hidx vhost virtqueue index
> + * @svq Shadow Virtqueue
> + *
> + * Run in RCU context
> + */
> +bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> +                               unsigned idx,
> +                               VhostShadowVirtqueue *svq)
> +{
> +    EventNotifier *vq_host_notifier = virtio_queue_get_host_notifier(svq->vq);
> +    struct vhost_vring_file kick_file = {
> +        .index = idx,
> +        .fd = event_notifier_get_fd(&svq->kick_notifier),
> +    };
> +    int r;
> +
> +    /* Check that notifications are still going directly to vhost dev */
> +    assert(virtio_queue_host_notifier_status(svq->vq));
> +
> +    event_notifier_init_fd(&svq->host_notifier,
> +                           event_notifier_get_fd(vq_host_notifier));
> +    event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);

If I understand correctly svq->host_notifier only exists as an easy way
to use container_of() in vhost_handle_guest_kick?

svq->host_notifier does not actually own the fd and therefore
event_notifier_cleanup() must never be called on it?

Please document this.

> +
> +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> +    if (unlikely(r != 0)) {
> +        error_report("Couldn't set kick fd: %s", strerror(errno));
> +        goto err_set_vring_kick;
> +    }
> +
> +    /* Check for pending notifications from the guest */
> +    vhost_handle_guest_kick(&svq->host_notifier);
> +
> +    return true;

host_notifier is still registered with the vhost device so now the
kernel vhost thread and QEMU are both monitoring the ioeventfd at the
same time? Did I miss a vhost_set_vring_call() somewhere?

> +
> +err_set_vring_kick:
> +    event_notifier_set_handler(&svq->host_notifier, NULL);
> +
> +    return false;
> +}
> +
> +/*
> + * Stop shadow virtqueue operation.
> + * @dev vhost device
> + * @idx vhost queue index
> + * @svq Shadow Virtqueue
> + *
> + * Run in RCU context
> + */
> +void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
> +                              unsigned idx,
> +                              VhostShadowVirtqueue *svq)
> +{
> +    EventNotifier *vq_host_notifier = virtio_queue_get_host_notifier(svq->vq);
> +    struct vhost_vring_file kick_file = {
> +        .index = idx,
> +        .fd = event_notifier_get_fd(vq_host_notifier),
> +    };
> +    int r;
> +
> +    /* Restore vhost kick */
> +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> +    /* Cannot do a lot of things */
> +    assert(r == 0);
> +
> +    event_notifier_set_handler(&svq->host_notifier, NULL);

It may be necessary to call event_notifier_set(vq_host_notifier) before
vhost_set_vring_kick() so that the vhost kernel thread looks at the
vring immediately. That covers the case where svq->kick_notifier was
just set but not yet handled by the vhost kernel thread.

I'm not 100% sure this race condition can occur, but couldn't find
anything that prevents it.
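
i.e. something along these lines in the stop path (sketch of the
suggestion, untested):

    /* Mark the restored ioeventfd as pending before handing it back, so
     * the vhost kernel thread re-checks the vring right away and a kick
     * latched on svq->kick_notifier is not lost. */
    event_notifier_set(vq_host_notifier);

    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
    assert(r == 0);

    event_notifier_set_handler(&svq->host_notifier, NULL);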

> +err:
> +    for (; idx >= 0; --idx) {
> +        vhost_shadow_vq_free(dev->shadow_vqs[idx]);
> +    }
> +    g_free(dev->shadow_vqs[idx]);

Should this be g_free(dev->shadow_vqs)?
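
i.e. (sketch of the fix):

    err:
        for (; idx >= 0; --idx) {
            vhost_shadow_vq_free(dev->shadow_vqs[idx]);
        }
        g_free(dev->shadow_vqs);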

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 7/7] vhost: Route host->guest notification through shadow virtqueue
  2021-02-09 15:37 ` [RFC v2 7/7] vhost: Route host->guest " Eugenio Pérez
@ 2021-02-17 17:24     ` Stefan Hajnoczi
  2021-03-01  6:24     ` Jason Wang
  1 sibling, 0 replies; 35+ messages in thread
From: Stefan Hajnoczi @ 2021-02-17 17:24 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-devel, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Tue, Feb 09, 2021 at 04:37:57PM +0100, Eugenio Pérez wrote:
> @@ -40,6 +42,26 @@ static void vhost_handle_guest_kick(EventNotifier *n)
>      }
>  }
>  
> +/* Forward vhost notifications */
> +static void vhost_handle_call(EventNotifier *n)

The name vhost_shadow_vq_handle_call() expresses the purpose of the
function more clearly.

> @@ -75,8 +102,19 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>      /* Check for pending notifications from the guest */
>      vhost_handle_guest_kick(&svq->host_notifier);
>  
> +    r = dev->vhost_ops->vhost_set_vring_call(dev, &call_file);
> +    if (r != 0) {
> +        error_report("Couldn't set call fd: %s", strerror(errno));
> +        goto err_set_vring_call;
> +    }

This ignores notifier_is_masked and always unmasks.

> @@ -1608,6 +1607,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
>      if (mask) {
>          assert(vdev->use_guest_notifier_mask);
>          file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
> +    } else if (hdev->sw_lm_enabled) {
> +        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
> +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> +        file.fd = event_notifier_get_fd(e);
>      } else {
>          file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
>      }

Maybe you can extend this function so it can be called unconditionally
from both vhost_shadow_vq_start_rcu() and vhost_shadow_vq_stop_rcu(). It
would be a single place that invokes vhost_set_vring_call().
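
For example, the start/stop paths could then simply do (sketch, reusing
the fields this patch already touches; untested):

    /* Let vhost_virtqueue_mask() pick the right call fd (masked notifier,
     * shadow call notifier or guest notifier) and be the only caller of
     * vhost_set_vring_call(). */
    vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx,
                         dev->vqs[idx].notifier_is_masked);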

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 2/7] virtio: Add virtio_queue_host_notifier_status
  2021-02-17 12:41     ` Stefan Hajnoczi
  (?)
@ 2021-02-17 18:38     ` Eugenio Perez Martin
  -1 siblings, 0 replies; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-17 18:38 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-level, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Wed, Feb 17, 2021 at 1:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Feb 09, 2021 at 04:37:52PM +0100, Eugenio Pérez wrote:
> > This allows shadow virtqueue code to assert the queue status before
> > making changes.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  include/hw/virtio/virtio.h | 1 +
> >  hw/virtio/virtio.c         | 5 +++++
> >  2 files changed, 6 insertions(+)
> >
> > diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > index b7ece7a6a8..227cec13a8 100644
> > --- a/include/hw/virtio/virtio.h
> > +++ b/include/hw/virtio/virtio.h
> > @@ -316,6 +316,7 @@ void virtio_device_release_ioeventfd(VirtIODevice *vdev);
> >  bool virtio_device_ioeventfd_enabled(VirtIODevice *vdev);
> >  EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
> >  void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
> > +bool virtio_queue_host_notifier_status(const VirtQueue *vq);
> >  void virtio_queue_host_notifier_read(EventNotifier *n);
> >  void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
> >                                                  VirtIOHandleAIOOutput handle_output);
> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > index 1fd1917ca0..53473ae4df 100644
> > --- a/hw/virtio/virtio.c
> > +++ b/hw/virtio/virtio.c
> > @@ -3594,6 +3594,11 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq)
> >      return &vq->host_notifier;
> >  }
> >
> > +bool virtio_queue_host_notifier_status(const VirtQueue *vq)
> > +{
> > +    return vq->host_notifier_enabled;
> > +}
> > +
> >  void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled)
>
> Since there is a virtio_queue_set_host_notifier_enabled() I suggest
> calling this function virtio_queue_is_host_notifier_enabled() or
> virtio_queue_get_host_notifier_enabled(). That way it's clear they
> set/get the same thing.
>

I agree, will change.
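
For reference, the renamed getter would stay trivial (sketch):

    bool virtio_queue_get_host_notifier_enabled(const VirtQueue *vq)
    {
        return vq->host_notifier_enabled;
    }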

Thanks!

> Stefan



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 4/7] vhost: Add VhostShadowVirtqueue
  2021-02-17 13:01     ` Stefan Hajnoczi
  (?)
@ 2021-02-17 18:40     ` Eugenio Perez Martin
  -1 siblings, 0 replies; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-17 18:40 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-level, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Wed, Feb 17, 2021 at 2:01 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Feb 09, 2021 at 04:37:54PM +0100, Eugenio Pérez wrote:
> > +/*
> > + * Creates vhost shadow virtqueue, and instruct vhost device to use the shadow
> > + * methods and file descriptors.
> > + */
> > +VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
> > +{
> > +    g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
> > +    int r;
> > +
> > +    r = event_notifier_init(&svq->kick_notifier, 0);
> > +    if (r != 0) {
> > +        error_report("Couldn't create kick event notifier: %s",
> > +                     strerror(errno));
> > +        goto err_init_kick_notifier;
> > +    }
> > +
> > +    r = event_notifier_init(&svq->call_notifier, 0);
> > +    if (r != 0) {
> > +        error_report("Couldn't create call event notifier: %s",
> > +                     strerror(errno));
> > +        goto err_init_call_notifier;
> > +    }
> > +
> > +    return svq;
>
> Use-after-free due to g_autofree. I think this should be:
>
>   return g_steal_pointer(&svq)
>
> https://developer.gnome.org/glib/stable/glib-Memory-Allocation.html#g-steal-pointer

What a miss, thanks for pointing it out!



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp
  2021-02-17 15:26     ` Stefan Hajnoczi
  (?)
@ 2021-02-17 18:47     ` Eugenio Perez Martin
  2021-02-18 18:35       ` Eugenio Perez Martin
  -1 siblings, 1 reply; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-17 18:47 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-level, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Wed, Feb 17, 2021 at 4:26 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Feb 09, 2021 at 04:37:55PM +0100, Eugenio Pérez wrote:
> > diff --git a/qapi/net.json b/qapi/net.json
> > index c31748c87f..a1cdffb0f9 100644
> > --- a/qapi/net.json
> > +++ b/qapi/net.json
> > @@ -77,6 +77,28 @@
> >  ##
> >  { 'command': 'netdev_del', 'data': {'id': 'str'} }
> >
> > +##
> > +# @x-vhost-enable-shadow-vq:
> > +#
> > +# Use vhost shadow virtqueue.
>
> Is this command for testing only or do you expect it to be invoked by
> libvirt in production? I think the shadow virtqueue can be an internal
> QEMU feature that is hidden from management tools.
>

I think the shadow virtqueue should kick in automatically when live
migration is triggered and the vhost device does not offer _F_LOG.

Maybe something like "prefer shadow vq to vhost logging" could be
exposed, but it is not something we have to figure out now.
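
Conceptually something like this condition (sketch only;
migration_in_progress stands for whatever hook we end up using, it is
not an existing symbol):

    /* Fall back to the shadow virtqueue only when the backend cannot log
     * dirty memory by itself. */
    bool use_shadow_vq = migration_in_progress &&
                         !(dev->features & (0x1ULL << VHOST_F_LOG_ALL));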

> > +#
> > +# @name: the device name of the virtual network adapter
> > +#
> > +# @enable: true to use he alternate shadow VQ notification path
> > +#
> > +# Returns: Error if failure, or 'no error' for success
> > +#
> > +# Since: 6.0
>
> Is this a generic feature for any vhost or vDPA device? If yes, please
> replace "virtual network adapter" in the doc comment.
>
> Does this only apply to vhost-net devices? If so, please put "vhost-net"
> in the name since there are other non-net vhost devices.

Right, thanks for the catch!

>
> > +#
> > +# Example:
> > +#
> > +# -> { "execute": "x-vhost-enable-shadow-vq", "arguments": {"enable": true} }
>
> Missing "name" field?

Yes, I will fix the example in future revisions.

Thanks!



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 7/7] vhost: Route host->guest notification through shadow virtqueue
  2021-02-17 17:24     ` Stefan Hajnoczi
  (?)
@ 2021-02-17 19:13     ` Eugenio Perez Martin
  -1 siblings, 0 replies; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-17 19:13 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-level, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Wed, Feb 17, 2021 at 6:24 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Feb 09, 2021 at 04:37:57PM +0100, Eugenio Pérez wrote:
> > @@ -40,6 +42,26 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> >      }
> >  }
> >
> > +/* Forward vhost notifications */
> > +static void vhost_handle_call(EventNotifier *n)
>
> The name vhost_shadow_vq_handle_call() expresses the purpose of the
> function more clearly.
>

I will rename it.

> > @@ -75,8 +102,19 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >      /* Check for pending notifications from the guest */
> >      vhost_handle_guest_kick(&svq->host_notifier);
> >
> > +    r = dev->vhost_ops->vhost_set_vring_call(dev, &call_file);
> > +    if (r != 0) {
> > +        error_report("Couldn't set call fd: %s", strerror(errno));
> > +        goto err_set_vring_call;
> > +    }
>
> This ignores notifier_is_masked and always unmasks.

Right, I will check for it too.

>
> > @@ -1608,6 +1607,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
> >      if (mask) {
> >          assert(vdev->use_guest_notifier_mask);
> >          file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
> > +    } else if (hdev->sw_lm_enabled) {
> > +        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
> > +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> > +        file.fd = event_notifier_get_fd(e);
> >      } else {
> >          file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
> >      }
>
> Maybe you can extend this function so it can be called unconditionally
> from both vhost_shadow_vq_start_rcu() and vhost_shadow_vq_stop_rcu(). It
> would be a single place that invokes vhost_set_vring_call().

Your proposal is better, but I'm not sure whether something depends on
calling mask(..., false) repeatedly, for example if guest_notifier
changes.

However, from the SVQ point of view we only care about not setting the
masked notifier on a vhost device that is already masked: the unmasked
state always implies a file descriptor change. So I think it should be
fine to keep allowing unmasking, in other words to keep calling
vhost_set_vring_call(guest_notifier()) repeatedly so the old behavior is
preserved, and only skip mask(..., true) when the notifier is already
masked, so we can call it unconditionally from
vhost_shadow_vq_start/stop.
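
i.e. an early return at the top of vhost_virtqueue_mask() along these
lines (sketch only, on top of the existing function):

    /* A redundant mask request becomes a no-op, so the function can be
     * called unconditionally from vhost_shadow_vq_start/stop; unmask keeps
     * re-issuing vhost_set_vring_call() as it does today. */
    if (mask && hdev->vqs[index].notifier_is_masked) {
        return;
    }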

Thanks!



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 6/7] vhost: Route guest->host notification through shadow virtqueue
  2021-02-17 16:56     ` Stefan Hajnoczi
  (?)
@ 2021-02-17 20:11     ` Eugenio Perez Martin
  -1 siblings, 0 replies; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-17 20:11 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-level, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Wed, Feb 17, 2021 at 5:56 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Feb 09, 2021 at 04:37:56PM +0100, Eugenio Pérez wrote:
> > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > index ac963bf23d..884818b109 100644
> > --- a/include/hw/virtio/vhost.h
> > +++ b/include/hw/virtio/vhost.h
> > @@ -55,6 +55,8 @@ struct vhost_iommu {
> >      QLIST_ENTRY(vhost_iommu) iommu_next;
> >  };
> >
> > +typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
>
> There is already another declaration in
> hw/virtio/vhost-shadow-virtqueue.h. Should vhost.h include
> vhost-shadow-virtqueue.h?
>
> This is becoming confusing:
> 1. typedef in vhost-shadow-virtqueue.h
> 2. typedef in vhost.h
> 3. typedef in vhost-shadow-virtqueue.c
>
> 3 typedefs is a bit much :). I suggest:
> 1. typedef in vhost-shadow-virtqueue.h
> 2. #include "vhost-shadow-virtqueue.h" in vhost.h
> 3. struct VhostShadowVirtqueue (no typedef redefinition) in vhost-shadow-virtqueue.c
>
> That should make the code easier to understand, navigate with tools, and
> if a change is made (e.g. renaming the struct) then it won't be
> necessary to change things in 3 places.
>

Totally agree.

> > +
> >  typedef struct VhostDevConfigOps {
> >      /* Vhost device config space changed callback
> >       */
> > @@ -83,7 +85,9 @@ struct vhost_dev {
> >      uint64_t backend_cap;
> >      bool started;
> >      bool log_enabled;
> > +    bool sw_lm_enabled;
>
> Rename to shadow_vqs_enabled?
>

Ok, will change.

> >      uint64_t log_size;
> > +    VhostShadowVirtqueue **shadow_vqs;
> >      Error *migration_blocker;
> >      const VhostOps *vhost_ops;
> >      void *opaque;
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > index b5d2645ae0..01f282d434 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > @@ -8,9 +8,12 @@
> >   */
> >
> >  #include "hw/virtio/vhost-shadow-virtqueue.h"
> > +#include "hw/virtio/vhost.h"
> > +
> > +#include "standard-headers/linux/vhost_types.h"
> >
> >  #include "qemu/error-report.h"
> > -#include "qemu/event_notifier.h"
> > +#include "qemu/main-loop.h"
> >
> >  /* Shadow virtqueue to relay notifications */
> >  typedef struct VhostShadowVirtqueue {
> > @@ -18,8 +21,95 @@ typedef struct VhostShadowVirtqueue {
> >      EventNotifier kick_notifier;
> >      /* Shadow call notifier, sent to vhost */
> >      EventNotifier call_notifier;
> > +
> > +    /* Borrowed virtqueue's guest to host notifier. */
> > +    EventNotifier host_notifier;
>
> The purpose of these EventNotifier fields is not completely clear to me.
> Here is how I interpret the comments:
>
> 1. The vhost device is set up to use kick_notifier/call_notifier when
>    the shadow vq is enabled.
>

Right. These are the ones the vhost backend sees, so I will add this to the doc.

> 2. host_notifier is the guest-visible vq's host notifier. This is set up
>    when the shadow vq is enabled.
>

If by guest-visible you mean that the guest can notify the device
through it, yes. The names follow the qemu virtqueue device code.

As you point out later in the code, it is borrowed (and not just
referenced through a pointer) to reduce the dependency on qemu's
VirtIODevice.
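
Something like this for the expanded comments (my wording, just a
sketch):

    /* Shadow virtqueue to relay notifications */
    typedef struct VhostShadowVirtqueue {
        /* Shadow kick notifier: the fd handed to the vhost backend with
         * VHOST_SET_VRING_KICK while SVQ is enabled, instead of the
         * guest's ioeventfd. */
        EventNotifier kick_notifier;
        /* Shadow call notifier: the fd the vhost backend signals while
         * SVQ is enabled (wired up in the next patch). */
        EventNotifier call_notifier;
        /* Borrowed guest-to-host notifier of the virtqueue: qemu polls it
         * to forward guest kicks.  The fd is not owned by SVQ, so
         * event_notifier_cleanup() must never be called on it. */
        EventNotifier host_notifier;
        /* Virtio queue shadowing */
        VirtQueue *vq;
    } VhostShadowVirtqueue;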

> But I'm not confident this is correct. Maybe you could expand the
> comment to make it clear what is happening?
>
> > +
> > +    /* Virtio queue shadowing */
> > +    VirtQueue *vq;
> >  } VhostShadowVirtqueue;
> >
> > +/* Forward guest notifications */
> > +static void vhost_handle_guest_kick(EventNotifier *n)
> > +{
> > +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> > +                                             host_notifier);
> > +
> > +    if (event_notifier_test_and_clear(n)) {
> > +        event_notifier_set(&svq->kick_notifier);
> > +    }
> > +}
>
> This function looks incomplete. You can make review easier by indicating
> the state of the code:
>
>   /* TODO pop requests from vq and put them onto vhost vq */
>

Only guest -> device notifications are forwarded at this point,
buffers aren't. The vhost backend/device still accesses the latter the
regular way, but guest -> device notifications now make an extra jump.

> I'm not sure why it's useful to include this incomplete function in the
> patch.

It has little use at this moment except logging that notifications are
being forwarded through qemu. Once notifications are being forwarded
properly, we can develop buffer treatment on top of this.

In my debugging it helps me bisect the problem if the network loses
connectivity when I enable the shadow virtqueue: if it works at this
commit but not at 7/7 ("vhost: Route host->guest notification through
shadow virtqueue"), I know the latter commit is at fault. If it works
at 7/7 but not at the commits that exchange/translate buffers through
qemu [1], I know host->guest notifications are being forwarded OK and
the problem is elsewhere.

> Maybe the host notifier is already intercepted here but the
> guest-visible vq is still mapped directly to the vhost vq, so this works?
> An explanation in comments or the commit description would be helpful.
>

I'm not sure what you mean in the first part, but descriptors and
buffers are still mapped in vhost device, yes.

> > +
> > +/*
> > + * Start shadow virtqueue operation.
> > + * @dev vhost device
> > + * @hidx vhost virtqueue index
> > + * @svq Shadow Virtqueue
> > + *
> > + * Run in RCU context
> > + */
> > +bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> > +                               unsigned idx,
> > +                               VhostShadowVirtqueue *svq)
> > +{
> > +    EventNotifier *vq_host_notifier = virtio_queue_get_host_notifier(svq->vq);
> > +    struct vhost_vring_file kick_file = {
> > +        .index = idx,
> > +        .fd = event_notifier_get_fd(&svq->kick_notifier),
> > +    };
> > +    int r;
> > +
> > +    /* Check that notifications are still going directly to vhost dev */
> > +    assert(virtio_queue_host_notifier_status(svq->vq));
> > +
> > +    event_notifier_init_fd(&svq->host_notifier,
> > +                           event_notifier_get_fd(vq_host_notifier));
> > +    event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);
>
> If I understand correctly svq->host_notifier only exists as an easy way
> to use container_of() in vhost_handle_guest_kick?
>
> svq->host_notifier does not actually own the fd and therefore
> event_notifier_cleanup() must never be called on it?
>
> Please document this.
>

Right, will document better.

> > +
> > +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> > +    if (unlikely(r != 0)) {
> > +        error_report("Couldn't set kick fd: %s", strerror(errno));
> > +        goto err_set_vring_kick;
> > +    }
> > +
> > +    /* Check for pending notifications from the guest */
> > +    vhost_handle_guest_kick(&svq->host_notifier);
> > +
> > +    return true;
>
> host_notifier is still registered with the vhost device so now the
> kernel vhost thread and QEMU are both monitoring the ioeventfd at the
> same time?

The vhost device is not monitoring host_notifier, since we replaced it
with the previous vhost_set_vring_kick(dev, &kick_file) call.

> Did I miss a vhost_set_vring_call() somewhere?
>

It is in the next commit. This one just intercepts guest->device notifications.

> > +
> > +err_set_vring_kick:
> > +    event_notifier_set_handler(&svq->host_notifier, NULL);
> > +
> > +    return false;
> > +}
> > +
> > +/*
> > + * Stop shadow virtqueue operation.
> > + * @dev vhost device
> > + * @idx vhost queue index
> > + * @svq Shadow Virtqueue
> > + *
> > + * Run in RCU context
> > + */
> > +void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
> > +                              unsigned idx,
> > +                              VhostShadowVirtqueue *svq)
> > +{
> > +    EventNotifier *vq_host_notifier = virtio_queue_get_host_notifier(svq->vq);
> > +    struct vhost_vring_file kick_file = {
> > +        .index = idx,
> > +        .fd = event_notifier_get_fd(vq_host_notifier),
> > +    };
> > +    int r;
> > +
> > +    /* Restore vhost kick */
> > +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> > +    /* Cannot do a lot of things */
> > +    assert(r == 0);
> > +
> > +    event_notifier_set_handler(&svq->host_notifier, NULL);
>
> It may be necessary to call event_notifier_set(vq_host_notifier) before
> vhost_set_vring_kick() so that the vhost kernel thread looks at the
> vring immediately. That covers the case where svq->kick_notifier was
> just set but not yet handled by the vhost kernel thread.
>
> I'm not 100% sure this race condition can occur, but couldn't find
> anything that prevents it.

As far as I can see, we should not lose any notifications as long as
the vhost kernel thread checks the fd when the vring kick is set. They
are always sent by the guest, and qemu never kicks; we would have the
same situation in vhost_virtqueue_start, and qemu does not kick there
either.

However, I'm not sure we can say that all devices will check it...
Should that be mandated by vDPA?

>
> > +err:
> > +    for (; idx >= 0; --idx) {
> > +        vhost_shadow_vq_free(dev->shadow_vqs[idx]);
> > +    }
> > +    g_free(dev->shadow_vqs[idx]);
>
> Should this be g_free(dev->shadow_vqs)?

Right, a big miss again.

Thank you very much for the useful and in-depth review!

[1] Not in this series, but already posted in the previous RFC; the
final solution will obviously contain them :).



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp
  2021-02-17 18:47     ` Eugenio Perez Martin
@ 2021-02-18 18:35       ` Eugenio Perez Martin
  0 siblings, 0 replies; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-02-18 18:35 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Rob Miller, Parav Pandit, Michael S. Tsirkin, Jason Wang,
	Juan Quintela, qemu-level, Markus Armbruster,
	Harpreet Singh Anand, Xiao W Wang, Eli Cohen, virtualization,
	Michael Lilja, Jim Harford, Stefano Garzarella

On Wed, Feb 17, 2021 at 7:47 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Wed, Feb 17, 2021 at 4:26 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Tue, Feb 09, 2021 at 04:37:55PM +0100, Eugenio Pérez wrote:
> > > diff --git a/qapi/net.json b/qapi/net.json
> > > index c31748c87f..a1cdffb0f9 100644
> > > --- a/qapi/net.json
> > > +++ b/qapi/net.json
> > > @@ -77,6 +77,28 @@
> > >  ##
> > >  { 'command': 'netdev_del', 'data': {'id': 'str'} }
> > >
> > > +##
> > > +# @x-vhost-enable-shadow-vq:
> > > +#
> > > +# Use vhost shadow virtqueue.
> >
> > Is this command for testing only or do you expect it to be invoked by
> > libvirt in production? I think the shadow virtqueue can be an internal
> > QEMU feature that is hidden from management tools.
> >
>
> I think shadow virtqueue should kick in automatically when live
> migration is triggered and the vhost device does not have _F_LOG too.
>
> Maybe something like "prefer shadow vq to vhost logging" could be
> exposed, but it is not a thing we have to figure now.
>
> > > +#
> > > +# @name: the device name of the virtual network adapter
> > > +#
> > > +# @enable: true to use he alternate shadow VQ notification path
> > > +#
> > > +# Returns: Error if failure, or 'no error' for success
> > > +#
> > > +# Since: 6.0
> >
> > Is this a generic feature for any vhost or vDPA device? If yes, please
> > replace "virtual network adapter" in the doc comment.
> >
> > Does this only apply to vhost-net devices? If so, please put "vhost-net"
> > in the name since there are other non-net vhost devices.
>
> Right, thanks for the catch!
>

Moreover, the command should not be in net.json. However, I don't see
a generic virtio/vhost QAPI file to add the command to.

> >
> > > +#
> > > +# Example:
> > > +#
> > > +# -> { "execute": "x-vhost-enable-shadow-vq", "arguments": {"enable": true} }
> >
> > Missing "name" field?
>
> Yes, I will fix the example in future revisions.
>
> Thanks!



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 7/7] vhost: Route host->guest notification through shadow virtqueue
  2021-02-09 15:37 ` [RFC v2 7/7] vhost: Route host->guest " Eugenio Pérez
@ 2021-03-01  6:24     ` Jason Wang
  2021-03-01  6:24     ` Jason Wang
  1 sibling, 0 replies; 35+ messages in thread
From: Jason Wang @ 2021-03-01  6:24 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Parav Pandit, Juan Quintela, Michael S. Tsirkin,
	Markus Armbruster, virtualization, Harpreet Singh Anand,
	Xiao W Wang, Stefan Hajnoczi, Eli Cohen, Stefano Garzarella,
	Michael Lilja, Jim Harford, Rob Miller


On 2021/2/9 11:37 PM, Eugenio Pérez wrote:
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>   hw/virtio/vhost-shadow-virtqueue.h |  2 ++
>   hw/virtio/vhost-shadow-virtqueue.c | 49 ++++++++++++++++++++++++++++++
>   hw/virtio/vhost.c                  |  5 ++-
>   3 files changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> index c45035c4b7..210133434c 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.h
> +++ b/hw/virtio/vhost-shadow-virtqueue.h
> @@ -17,6 +17,8 @@
>   
>   typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
>   
> +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq);
> +
>   bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>                                  unsigned idx,
>                                  VhostShadowVirtqueue *svq);
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 01f282d434..61d98ae652 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -24,6 +24,8 @@ typedef struct VhostShadowVirtqueue {
>   
>       /* Borrowed virtqueue's guest to host notifier. */
>       EventNotifier host_notifier;
> +    /* Host to guest notifier */
> +    EventNotifier *guest_notifier;
>   
>       /* Virtio queue shadowing */
>       VirtQueue *vq;
> @@ -40,6 +42,26 @@ static void vhost_handle_guest_kick(EventNotifier *n)
>       }
>   }
>   
> +/* Forward vhost notifications */
> +static void vhost_handle_call(EventNotifier *n)
> +{
> +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> +                                             call_notifier);
> +
> +    if (event_notifier_test_and_clear(n)) {
> +        event_notifier_set(svq->guest_notifier);
> +    }
> +}


So I wonder how this is synchronized with virtqueue mask/unmask. Or is
the masking totally transparent to the shadow virtqueue?

Thanks


> +
> +/*
> + * Get the vhost call notifier of the shadow vq
> + * @vq Shadow virtqueue
> + */
> +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq)
> +{
> +    return &vq->call_notifier;
> +}
> +
>   /*
>    * Start shadow virtqueue operation.
>    * @dev vhost device
> @@ -57,6 +79,10 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>           .index = idx,
>           .fd = event_notifier_get_fd(&svq->kick_notifier),
>       };
> +    struct vhost_vring_file call_file = {
> +        .index = idx,
> +        .fd = event_notifier_get_fd(&svq->call_notifier),
> +    };
>       int r;
>   
>       /* Check that notifications are still going directly to vhost dev */
> @@ -66,6 +92,7 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>                              event_notifier_get_fd(vq_host_notifier));
>       event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);
>   
> +    svq->guest_notifier = virtio_queue_get_guest_notifier(svq->vq);
>       r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
>       if (unlikely(r != 0)) {
>           error_report("Couldn't set kick fd: %s", strerror(errno));
> @@ -75,8 +102,19 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>       /* Check for pending notifications from the guest */
>       vhost_handle_guest_kick(&svq->host_notifier);
>   
> +    r = dev->vhost_ops->vhost_set_vring_call(dev, &call_file);
> +    if (r != 0) {
> +        error_report("Couldn't set call fd: %s", strerror(errno));
> +        goto err_set_vring_call;
> +    }
> +
>       return true;
>   
> +err_set_vring_call:
> +    kick_file.fd = event_notifier_get_fd(vq_host_notifier);
> +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> +    assert(r == 0);
> +
>   err_set_vring_kick:
>       event_notifier_set_handler(&svq->host_notifier, NULL);
>   
> @@ -108,6 +146,16 @@ void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
>       assert(r == 0);
>   
>       event_notifier_set_handler(&svq->host_notifier, NULL);
> +
> +    if (!dev->vqs[idx].notifier_is_masked) {
> +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> +
> +        /* Restore vhost call */
> +        vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx, false);
> +
> +        /* Check for pending calls */
> +        vhost_handle_call(e);
> +    }
>   }
>   
>   /*
> @@ -136,6 +184,7 @@ VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
>           goto err_init_call_notifier;
>       }
>   
> +    event_notifier_set_handler(&svq->call_notifier, vhost_handle_call);
>       return g_steal_pointer(&svq);
>   
>   err_init_call_notifier:
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 9d4728e545..0dc95679e9 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -975,7 +975,6 @@ static int vhost_sw_live_migration_start(struct vhost_dev *dev)
>           for (idx = 0; idx < dev->nvqs; ++idx) {
>               bool ok = vhost_shadow_vq_start_rcu(dev, idx,
>                                                   dev->shadow_vqs[idx]);
> -
>               if (!ok) {
>                   int stop_idx = idx;
>   
> @@ -1608,6 +1607,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
>       if (mask) {
>           assert(vdev->use_guest_notifier_mask);
>           file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
> +    } else if (hdev->sw_lm_enabled) {
> +        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
> +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> +        file.fd = event_notifier_get_fd(e);
>       } else {
>           file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
>       }



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC v2 7/7] vhost: Route host->guest notification through shadow virtqueue
@ 2021-03-01  6:24     ` Jason Wang
  0 siblings, 0 replies; 35+ messages in thread
From: Jason Wang @ 2021-03-01  6:24 UTC (permalink / raw)
  To: Eugenio Pérez, qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, virtualization,
	Harpreet Singh Anand, Xiao W Wang, Stefan Hajnoczi, Eli Cohen,
	Michael Lilja, Jim Harford, Rob Miller


On 2021/2/9 11:37 下午, Eugenio Pérez wrote:
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>   hw/virtio/vhost-shadow-virtqueue.h |  2 ++
>   hw/virtio/vhost-shadow-virtqueue.c | 49 ++++++++++++++++++++++++++++++
>   hw/virtio/vhost.c                  |  5 ++-
>   3 files changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> index c45035c4b7..210133434c 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.h
> +++ b/hw/virtio/vhost-shadow-virtqueue.h
> @@ -17,6 +17,8 @@
>   
>   typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
>   
> +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq);
> +
>   bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>                                  unsigned idx,
>                                  VhostShadowVirtqueue *svq);
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 01f282d434..61d98ae652 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -24,6 +24,8 @@ typedef struct VhostShadowVirtqueue {
>   
>       /* Borrowed virtqueue's guest to host notifier. */
>       EventNotifier host_notifier;
> +    /* Host to guest notifier */
> +    EventNotifier *guest_notifier;
>   
>       /* Virtio queue shadowing */
>       VirtQueue *vq;
> @@ -40,6 +42,26 @@ static void vhost_handle_guest_kick(EventNotifier *n)
>       }
>   }
>   
> +/* Forward vhost notifications */
> +static void vhost_handle_call(EventNotifier *n)
> +{
> +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> +                                             call_notifier);
> +
> +    if (event_notifier_test_and_clear(n)) {
> +        event_notifier_set(svq->guest_notifier);
> +    }
> +}


So I wonder how this is synchonized with virtqueue mask/unmask. Or the 
masking is totally transparent to shadow virtqueue?

Thanks


> +
> +/*
> + * Get the vhost call notifier of the shadow vq
> + * @vq Shadow virtqueue
> + */
> +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq)
> +{
> +    return &vq->call_notifier;
> +}
> +
>   /*
>    * Start shadow virtqueue operation.
>    * @dev vhost device
> @@ -57,6 +79,10 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>           .index = idx,
>           .fd = event_notifier_get_fd(&svq->kick_notifier),
>       };
> +    struct vhost_vring_file call_file = {
> +        .index = idx,
> +        .fd = event_notifier_get_fd(&svq->call_notifier),
> +    };
>       int r;
>   
>       /* Check that notifications are still going directly to vhost dev */
> @@ -66,6 +92,7 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>                              event_notifier_get_fd(vq_host_notifier));
>       event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);
>   
> +    svq->guest_notifier = virtio_queue_get_guest_notifier(svq->vq);
>       r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
>       if (unlikely(r != 0)) {
>           error_report("Couldn't set kick fd: %s", strerror(errno));
> @@ -75,8 +102,19 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
>       /* Check for pending notifications from the guest */
>       vhost_handle_guest_kick(&svq->host_notifier);
>   
> +    r = dev->vhost_ops->vhost_set_vring_call(dev, &call_file);
> +    if (r != 0) {
> +        error_report("Couldn't set call fd: %s", strerror(errno));
> +        goto err_set_vring_call;
> +    }
> +
>       return true;
>   
> +err_set_vring_call:
> +    kick_file.fd = event_notifier_get_fd(vq_host_notifier);
> +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> +    assert(r == 0);
> +
>   err_set_vring_kick:
>       event_notifier_set_handler(&svq->host_notifier, NULL);
>   
> @@ -108,6 +146,16 @@ void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
>       assert(r == 0);
>   
>       event_notifier_set_handler(&svq->host_notifier, NULL);
> +
> +    if (!dev->vqs[idx].notifier_is_masked) {
> +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> +
> +        /* Restore vhost call */
> +        vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx, false);
> +
> +        /* Check for pending calls */
> +        vhost_handle_call(e);
> +    }
>   }
>   
>   /*
> @@ -136,6 +184,7 @@ VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
>           goto err_init_call_notifier;
>       }
>   
> +    event_notifier_set_handler(&svq->call_notifier, vhost_handle_call);
>       return g_steal_pointer(&svq);
>   
>   err_init_call_notifier:
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 9d4728e545..0dc95679e9 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -975,7 +975,6 @@ static int vhost_sw_live_migration_start(struct vhost_dev *dev)
>           for (idx = 0; idx < dev->nvqs; ++idx) {
>               bool ok = vhost_shadow_vq_start_rcu(dev, idx,
>                                                   dev->shadow_vqs[idx]);
> -
>               if (!ok) {
>                   int stop_idx = idx;
>   
> @@ -1608,6 +1607,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
>       if (mask) {
>           assert(vdev->use_guest_notifier_mask);
>           file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
> +    } else if (hdev->sw_lm_enabled) {
> +        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
> +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> +        file.fd = event_notifier_get_fd(e);
>       } else {
>           file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
>       }


* Re: [RFC v2 7/7] vhost: Route host->guest notification through shadow virtqueue
  2021-03-01  6:24     ` Jason Wang
  (?)
@ 2021-03-02 16:30     ` Eugenio Perez Martin
  -1 siblings, 0 replies; 35+ messages in thread
From: Eugenio Perez Martin @ 2021-03-02 16:30 UTC (permalink / raw)
  To: Jason Wang
  Cc: Parav Pandit, Juan Quintela, Markus Armbruster,
	Michael S. Tsirkin, qemu-level, virtualization,
	Harpreet Singh Anand, Xiao W Wang, Stefan Hajnoczi, Eli Cohen,
	Stefano Garzarella, Michael Lilja, Jim Harford, Rob Miller

On Mon, Mar 1, 2021 at 7:24 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2021/2/9 11:37 PM, Eugenio Pérez wrote:
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   hw/virtio/vhost-shadow-virtqueue.h |  2 ++
> >   hw/virtio/vhost-shadow-virtqueue.c | 49 ++++++++++++++++++++++++++++++
> >   hw/virtio/vhost.c                  |  5 ++-
> >   3 files changed, 55 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> > index c45035c4b7..210133434c 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.h
> > +++ b/hw/virtio/vhost-shadow-virtqueue.h
> > @@ -17,6 +17,8 @@
> >
> >   typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
> >
> > +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq);
> > +
> >   bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >                                  unsigned idx,
> >                                  VhostShadowVirtqueue *svq);
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > index 01f282d434..61d98ae652 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > @@ -24,6 +24,8 @@ typedef struct VhostShadowVirtqueue {
> >
> >       /* Borrowed virtqueue's guest to host notifier. */
> >       EventNotifier host_notifier;
> > +    /* Host to guest notifier */
> > +    EventNotifier *guest_notifier;
> >
> >       /* Virtio queue shadowing */
> >       VirtQueue *vq;
> > @@ -40,6 +42,26 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> >       }
> >   }
> >
> > +/* Forward vhost notifications */
> > +static void vhost_handle_call(EventNotifier *n)
> > +{
> > +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> > +                                             call_notifier);
> > +
> > +    if (event_notifier_test_and_clear(n)) {
> > +        event_notifier_set(svq->guest_notifier);
> > +    }
> > +}
>
>
> So I wonder how this is synchronized with virtqueue mask/unmask. Or is
> the masking totally transparent to the shadow virtqueue?
>

Hi Jason.

Thanks for pointing it out. Actually, the design proposed in this
series is wrong.

In this series, when the guest masks a virtqueue, qemu still sends the
masked notifier as the call fd to the vhost device. While that ioctl
acts as a synchronization point, it also disables shadow vq polling
and used buffer forwarding entirely. The guest could be actively
polling the virtqueue's used ring and waiting for used buffers before
unmasking, which is valid behavior as far as I can tell.
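
For reference, the call fd selection this refers to is the
vhost_virtqueue_mask() hunk quoted at the bottom of this mail; it
boils down to (condensed here, same logic as the patch):

    if (mask) {
        /* Guest masked the vq: vhost signals the masked notifier */
        file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
    } else if (hdev->sw_lm_enabled) {
        /* Shadow vq active: vhost signals the shadow call notifier */
        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
        file.fd = event_notifier_get_fd(
                      vhost_shadow_vq_get_call_notifier(svq));
    } else {
        /* Direct path: vhost signals the guest notifier */
        file.fd = event_notifier_get_fd(
                      virtio_queue_get_guest_notifier(vvq));
    }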

At unmasking, either the guest's notifier or the shadow virtqueue's
call notifier is set as the call fd, and the masked status is checked
as usual. Again, this is transparent to the dataplane
(vhost_handle_call in this series).

We could have a race in vhost_shadow_vq_stop_rcu, since the
mask/unmask status is set by the vcpu thread while these functions are
called from QMP here [1]. As Stefan pointed out,
vhost_shadow_vq_start_rcu is also wrong because it unmasks
unconditionally.

In the next revision the masked status will also be tracked by the
shadow virtqueue, but it will only affect qemu->guest used
notifications.
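
As a rough sketch of that direction (just to illustrate the idea; the
fields svq->masked and svq->masked_notifier below are placeholders,
not the final code), the call handler would gate only the qemu->guest
signal on the mask state tracked by the shadow virtqueue:

    /* Forward vhost notifications, honouring the svq-tracked mask */
    static void vhost_handle_call(EventNotifier *n)
    {
        VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
                                                 call_notifier);

        if (event_notifier_test_and_clear(n)) {
            if (svq->masked) {
                /* Guest masked the vq: signal the masked notifier */
                event_notifier_set(svq->masked_notifier);
            } else {
                event_notifier_set(svq->guest_notifier);
            }
        }
    }

This way the device call fd could stay pointed at the shadow
virtqueue's call notifier for the whole migration, and mask/unmask
would only change which notifier qemu signals.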

> Thanks
>
>
> > +
> > +/*
> > + * Get the vhost call notifier of the shadow vq
> > + * @vq Shadow virtqueue
> > + */
> > +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq)
> > +{
> > +    return &vq->call_notifier;
> > +}
> > +
> >   /*
> >    * Start shadow virtqueue operation.
> >    * @dev vhost device
> > @@ -57,6 +79,10 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >           .index = idx,
> >           .fd = event_notifier_get_fd(&svq->kick_notifier),
> >       };
> > +    struct vhost_vring_file call_file = {
> > +        .index = idx,
> > +        .fd = event_notifier_get_fd(&svq->call_notifier),
> > +    };
> >       int r;
> >
> >       /* Check that notifications are still going directly to vhost dev */
> > @@ -66,6 +92,7 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >                              event_notifier_get_fd(vq_host_notifier));
> >       event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);
> >
> > +    svq->guest_notifier = virtio_queue_get_guest_notifier(svq->vq);
> >       r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> >       if (unlikely(r != 0)) {
> >           error_report("Couldn't set kick fd: %s", strerror(errno));
> > @@ -75,8 +102,19 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >       /* Check for pending notifications from the guest */
> >       vhost_handle_guest_kick(&svq->host_notifier);
> >
> > +    r = dev->vhost_ops->vhost_set_vring_call(dev, &call_file);
> > +    if (r != 0) {
> > +        error_report("Couldn't set call fd: %s", strerror(errno));
> > +        goto err_set_vring_call;
> > +    }
> > +
> >       return true;
> >
> > +err_set_vring_call:
> > +    kick_file.fd = event_notifier_get_fd(vq_host_notifier);
> > +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> > +    assert(r == 0);
> > +
> >   err_set_vring_kick:
> >       event_notifier_set_handler(&svq->host_notifier, NULL);
> >
> > @@ -108,6 +146,16 @@ void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
> >       assert(r == 0);
> >
> >       event_notifier_set_handler(&svq->host_notifier, NULL);
> > +
> > +    if (!dev->vqs[idx].notifier_is_masked) {
> > +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> > +
> > +        /* Restore vhost call */
> > +        vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx, false);
> > +
> > +        /* Check for pending calls */
> > +        vhost_handle_call(e);
> > +    }

[1] We could have a race condition if the vcpu masks/unmasks the
virtqueue just between reading the status and calling
vhost_virtqueue_mask: vhost_shadow_vq_stop_rcu would override whatever
the guest set. This will be fixed in the next revision.

> >   }
> >
> >   /*
> > @@ -136,6 +184,7 @@ VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
> >           goto err_init_call_notifier;
> >       }
> >
> > +    event_notifier_set_handler(&svq->call_notifier, vhost_handle_call);
> >       return g_steal_pointer(&svq);
> >
> >   err_init_call_notifier:
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 9d4728e545..0dc95679e9 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -975,7 +975,6 @@ static int vhost_sw_live_migration_start(struct vhost_dev *dev)
> >           for (idx = 0; idx < dev->nvqs; ++idx) {
> >               bool ok = vhost_shadow_vq_start_rcu(dev, idx,
> >                                                   dev->shadow_vqs[idx]);
> > -
> >               if (!ok) {
> >                   int stop_idx = idx;
> >
> > @@ -1608,6 +1607,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
> >       if (mask) {
> >           assert(vdev->use_guest_notifier_mask);
> >           file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
> > +    } else if (hdev->sw_lm_enabled) {
> > +        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
> > +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> > +        file.fd = event_notifier_get_fd(e);
> >       } else {
> >           file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
> >       }
>




end of thread, other threads:[~2021-03-02 16:31 UTC | newest]

Thread overview: 35+ messages
2021-02-09 15:37 [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Pérez
2021-02-09 15:37 ` [RFC v2 1/7] vhost: Delete trailing dot in errpr_setg argument Eugenio Pérez
2021-02-09 16:24   ` Eric Blake
2021-02-09 18:11     ` Eugenio Perez Martin
2021-02-11 16:40       ` Stefano Garzarella
2021-02-09 15:37 ` [RFC v2 2/7] virtio: Add virtio_queue_host_notifier_status Eugenio Pérez
2021-02-17 12:41   ` Stefan Hajnoczi
2021-02-17 18:38     ` Eugenio Perez Martin
2021-02-09 15:37 ` [RFC v2 3/7] vhost: Save masked_notifier state Eugenio Pérez
2021-02-17 12:44   ` Stefan Hajnoczi
2021-02-09 15:37 ` [RFC v2 4/7] vhost: Add VhostShadowVirtqueue Eugenio Pérez
2021-02-17 13:01   ` Stefan Hajnoczi
2021-02-17 18:40     ` Eugenio Perez Martin
2021-02-09 15:37 ` [RFC v2 5/7] vhost: Add x-vhost-enable-shadow-vq qmp Eugenio Pérez
2021-02-17 15:26   ` Stefan Hajnoczi
2021-02-17 18:47     ` Eugenio Perez Martin
2021-02-18 18:35       ` Eugenio Perez Martin
2021-02-09 15:37 ` [RFC v2 6/7] vhost: Route guest->host notification through shadow virtqueue Eugenio Pérez
2021-02-17 16:56   ` Stefan Hajnoczi
2021-02-17 20:11     ` Eugenio Perez Martin
2021-02-09 15:37 ` [RFC v2 7/7] vhost: Route host->guest " Eugenio Pérez
2021-02-17 17:24   ` Stefan Hajnoczi
2021-02-17 19:13     ` Eugenio Perez Martin
2021-03-01  6:24   ` Jason Wang
2021-03-02 16:30     ` Eugenio Perez Martin
2021-02-09 15:54 ` [RFC v2 0/7] vDPA shadow virtqueue - notifications forwarding Eugenio Perez Martin
2021-02-11 11:12 ` no-reply
