* [PATCH 0/7] Enable vdpa net migration with features depending on CVQ
@ 2023-07-28 17:20 Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 1/7] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
                   ` (7 more replies)
  0 siblings, 8 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

At this moment the migration of net features that depend on CVQ is not
possible, as there is no reliable way to restore device state such as the
MAC address, the number of enabled queues, etc. to the destination.  This is
mainly because the device must read only the CVQ, and process all its
commands before resuming the dataplane.

This series lifts that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE
ioctl for dataplane vqs only after the device has processed all commands.
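
The intended ordering can be sketched as a toy model (an illustration only;
DATA_VQS, enable_vq, start_device, etc. are hypothetical names, not the
real QEMU symbols):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the new startup ordering. With a shadowed CVQ, only the
 * control vq is enabled before DRIVER_OK; the data vqs are enabled only
 * after the device has consumed every state-restore command. */
enum { DATA_VQS = 4, CVQ = DATA_VQS };          /* vq 4 plays the CVQ  */

static bool enabled[DATA_VQS + 1];
static bool driver_ok;
static bool state_restored;

static void enable_vq(int i) { enabled[i] = true; }

static void start_device(void)
{
    enable_vq(CVQ);                /* CVQ first, before DRIVER_OK      */
    driver_ok = true;              /* device may now read the CVQ      */
    state_restored = true;         /* MAC, MQ, ... sent through CVQ    */
    for (int i = 0; i < DATA_VQS; i++) {
        enable_vq(i);              /* dataplane is enabled last        */
    }
}
```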
---
From RFC:
* Enable vqs early in case CVQ cannot be shadowed.

Eugenio Pérez (7):
  vdpa: export vhost_vdpa_set_vring_ready
  vdpa: add should_enable op
  vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
  vdpa: add stub vhost_vdpa_should_enable
  vdpa: delay enable of data vqs
  vdpa: enable cvq svq if data vq are shadowed
  vdpa: remove net cvq migration blocker

 include/hw/virtio/vhost-vdpa.h |  9 +++++
 hw/virtio/vhost-vdpa.c         | 33 ++++++++++++----
 net/vhost-vdpa.c               | 69 ++++++++++++++++++++++++++--------
 hw/virtio/trace-events         |  2 +-
 4 files changed, 89 insertions(+), 24 deletions(-)

-- 
2.39.3





* [PATCH 1/7] vdpa: export vhost_vdpa_set_vring_ready
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
@ 2023-07-28 17:20 ` Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 2/7] vdpa: add should_enable op Eugenio Pérez
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

The vhost-vdpa net backend needs to enable vrings in a different order
than the default, so export it.

No functional change intended except for tracing, which now includes the
(virtio) index being enabled and the return value of the ioctl.

The return value of this function is still ignored when called from
vhost_vdpa_dev_start, as reorganizing the calling code around it is out
of the scope of this series.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |  1 +
 hw/virtio/vhost-vdpa.c         | 28 ++++++++++++++++++++--------
 hw/virtio/trace-events         |  2 +-
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index e64bfc7f98..5407d54fd7 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -57,6 +57,7 @@ typedef struct vhost_vdpa {
 } VhostVDPA;
 
 int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
+int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx);
 
 int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
                        hwaddr size, void *vaddr, bool readonly);
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 42f2a4bae9..bebcc9fe7c 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -532,6 +532,19 @@ int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range)
     return ret < 0 ? -errno : 0;
 }
 
+int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx)
+{
+    struct vhost_dev *dev = v->dev;
+    struct vhost_vring_state state = {
+        .index = idx,
+        .num = 1,
+    };
+    int r = vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
+
+    trace_vhost_vdpa_set_vring_ready(dev, idx, r);
+    return r;
+}
+
 /*
  * The use of this function is for requests that only need to be
  * applied once. Typically such request occurs at the beginning
@@ -876,16 +889,15 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
     return idx;
 }
 
-static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
+static int vhost_vdpa_set_vrings_ready(struct vhost_dev *dev)
 {
+    struct vhost_vdpa *v = dev->opaque;
     int i;
-    trace_vhost_vdpa_set_vring_ready(dev);
+
+    assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
+
     for (i = 0; i < dev->nvqs; ++i) {
-        struct vhost_vring_state state = {
-            .index = dev->vq_index + i,
-            .num = 1,
-        };
-        vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
+        vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
     }
     return 0;
 }
@@ -1298,7 +1310,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         if (unlikely(!ok)) {
             return -1;
         }
-        vhost_vdpa_set_vring_ready(dev);
+        vhost_vdpa_set_vrings_ready(dev);
     } else {
         vhost_vdpa_suspend(dev);
         vhost_vdpa_svqs_stop(dev);
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 7109cf1a3b..1cb9027d1e 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -48,7 +48,7 @@ vhost_vdpa_set_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRI
 vhost_vdpa_get_device_id(void *dev, uint32_t device_id) "dev: %p device_id %"PRIu32
 vhost_vdpa_reset_device(void *dev) "dev: %p"
 vhost_vdpa_get_vq_index(void *dev, int idx, int vq_idx) "dev: %p idx: %d vq idx: %d"
-vhost_vdpa_set_vring_ready(void *dev) "dev: %p"
+vhost_vdpa_set_vring_ready(void *dev, unsigned i, int r) "dev: %p, idx: %u, r: %d"
 vhost_vdpa_dump_config(void *dev, const char *line) "dev: %p %s"
 vhost_vdpa_set_config(void *dev, uint32_t offset, uint32_t size, uint32_t flags) "dev: %p offset: %"PRIu32" size: %"PRIu32" flags: 0x%"PRIx32
 vhost_vdpa_get_config(void *dev, void *config, uint32_t config_len) "dev: %p config: %p config_len: %"PRIu32
-- 
2.39.3




* [PATCH 2/7] vdpa: add should_enable op
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 1/7] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
@ 2023-07-28 17:20 ` Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 3/7] vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready Eugenio Pérez
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

To restore the device at the destination of a live migration we send the
commands through the control virtqueue.  For a device to read CVQ it must
have received the DRIVER_OK status bit.

However, this opens a window where the device could start receiving
packets in rx queue 0 before it receives the RSS configuration.  To
avoid that, we will not send vring_enable until all the configuration has
been consumed by the device.

As a first step, add a new virtio ops struct per vhost_vdpa device to
know whether we should enable a virtqueue or not.  This struct can be
reused in the future to add more actions to vhost_vdpa that depend on
the VirtIO device type.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 5407d54fd7..3d330d439a 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -30,6 +30,13 @@ typedef struct VhostVDPAHostNotifier {
     void *addr;
 } VhostVDPAHostNotifier;
 
+struct vhost_vdpa;
+typedef bool (*vhost_vdpa_virtio_should_enable_op)(const struct vhost_vdpa *v);
+
+typedef struct VhostVDPAVirtIOOps {
+    vhost_vdpa_virtio_should_enable_op should_enable;
+} VhostVDPAVirtIOOps;
+
 typedef struct vhost_vdpa {
     int device_fd;
     int index;
@@ -48,6 +55,7 @@ typedef struct vhost_vdpa {
     VhostIOVATree *iova_tree;
     GPtrArray *shadow_vqs;
     const VhostShadowVirtqueueOps *shadow_vq_ops;
+    const VhostVDPAVirtIOOps *virtio_ops;
     void *shadow_vq_ops_opaque;
     struct vhost_dev *dev;
     Error *migration_blocker;
-- 
2.39.3




* [PATCH 3/7] vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 1/7] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 2/7] vdpa: add should_enable op Eugenio Pérez
@ 2023-07-28 17:20 ` Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 4/7] vdpa: add stub vhost_vdpa_should_enable Eugenio Pérez
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

This allows skipping rings that QEMU does not want to enable.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index bebcc9fe7c..1281502a71 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -896,6 +896,11 @@ static int vhost_vdpa_set_vrings_ready(struct vhost_dev *dev)
 
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
 
+    if (v->virtio_ops && v->virtio_ops->should_enable
+        && !(v->virtio_ops->should_enable(v))) {
+        return 0;
+    }
+
     for (i = 0; i < dev->nvqs; ++i) {
         vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
     }
-- 
2.39.3




* [PATCH 4/7] vdpa: add stub vhost_vdpa_should_enable
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (2 preceding siblings ...)
  2023-07-28 17:20 ` [PATCH 3/7] vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready Eugenio Pérez
@ 2023-07-28 17:20 ` Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 5/7] vdpa: delay enable of data vqs Eugenio Pérez
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

To restore the device at the destination of a live migration we send the
commands through the control virtqueue.  For a device to read CVQ it must
have received the DRIVER_OK status bit.

However, this opens a window where the device could start receiving
packets in rx queue 0 before it receives the RSS configuration.  To
avoid that, we will not send vring_enable until all the configuration has
been consumed by the device.

Add the vhost-vdpa net vhost_vdpa_should_enable stub.  This commit does
not change any behavior; it only introduces the op.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 9795306742..3d7dc3e5c0 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -1255,6 +1255,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
     .avail_handler = vhost_vdpa_net_handle_ctrl_avail,
 };
 
+static bool vhost_vdpa_should_enable(const struct vhost_vdpa *v)
+{
+    return true;
+}
+
+static const VhostVDPAVirtIOOps vhost_vdpa_virtio_net_ops = {
+    .should_enable = vhost_vdpa_should_enable,
+};
+
 /**
  * Probe if CVQ is isolated
  *
@@ -1378,6 +1387,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.shadow_vqs_enabled = svq;
     s->vhost_vdpa.iova_range = iova_range;
     s->vhost_vdpa.shadow_data = svq;
+    s->vhost_vdpa.virtio_ops = &vhost_vdpa_virtio_net_ops;
     if (queue_pair_index == 0) {
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
-- 
2.39.3




* [PATCH 5/7] vdpa: delay enable of data vqs
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (3 preceding siblings ...)
  2023-07-28 17:20 ` [PATCH 4/7] vdpa: add stub vhost_vdpa_should_enable Eugenio Pérez
@ 2023-07-28 17:20 ` Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 6/7] vdpa: enable cvq svq if data vq are shadowed Eugenio Pérez
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

To restore the device at the destination of a live migration we send
the commands through the control virtqueue.  For a device to read CVQ it
must have received the DRIVER_OK status bit.

However, this opens a window where the device could start receiving
packets in rx queue 0 before it receives the RSS configuration.  To
avoid that, we do not send vring_enable until all the configuration has
been consumed by the device.
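
The decision the patch makes can be sketched as a standalone predicate
(a hypothetical helper mirroring the patch's logic, not the actual QEMU
code; virtio-net exposes two vqs per data queue pair plus one CVQ, so an
odd vq_index_end means the device has a CVQ, and the CVQ is the last vq):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: should this vhost dev's vqs be enabled before DRIVER_OK?
 * vq_index is the first vq of the vhost dev; vq_index_end is one past
 * the device's last vq. */
static bool should_enable_early(unsigned vq_index, unsigned vq_index_end,
                                bool cvq_isolated)
{
    if (!(vq_index_end % 2)) {
        return true;               /* no CVQ: enable as usual         */
    }
    if (vq_index + 1 == vq_index_end) {
        return true;               /* this is the CVQ: enable first   */
    }
    return !cvq_isolated;          /* data vqs wait if CVQ is shadowed */
}
```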

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v2: Enable the dataplane vqs if CVQ cannot be shadowed because of device
features or ASID.
---
 net/vhost-vdpa.c | 44 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3d7dc3e5c0..2c1cfda657 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -283,6 +283,15 @@ static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
     return DO_UPCAST(VhostVDPAState, nc, nc0);
 }
 
+/** From any vdpa net client, get the netclient of the last queue pair */
+static VhostVDPAState *vhost_vdpa_net_last_nc_vdpa(VhostVDPAState *s)
+{
+    VirtIONet *n = qemu_get_nic_opaque(s->nc.peer);
+    NetClientState *nc = qemu_get_peer(n->nic->ncs, n->max_ncs - 1);
+
+    return DO_UPCAST(VhostVDPAState, nc, nc);
+}
+
 static void vhost_vdpa_net_log_global_enable(VhostVDPAState *s, bool enable)
 {
     struct vhost_vdpa *v = &s->vhost_vdpa;
@@ -996,6 +1005,13 @@ static int vhost_vdpa_net_load(NetClientState *nc)
         return r;
     }
 
+    for (int i = 0; i < v->dev->vq_index; ++i) {
+        r = vhost_vdpa_set_vring_ready(v, i);
+        if (unlikely(r)) {
+            return r;
+        }
+    }
+
     return 0;
 }
 
@@ -1255,9 +1271,35 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
     .avail_handler = vhost_vdpa_net_handle_ctrl_avail,
 };
 
+/**
+ * Check if a vhost_vdpa device should enable before DRIVER_OK
+ *
+ * CVQ must always start first if we want to restore the state safely. Do not
+ * start data vqs if the device has CVQ.
+ */
 static bool vhost_vdpa_should_enable(const struct vhost_vdpa *v)
 {
-    return true;
+    struct vhost_dev *dev = v->dev;
+    VhostVDPAState *s = container_of(v, VhostVDPAState, vhost_vdpa);
+    VhostVDPAState *cvq_s = vhost_vdpa_net_last_nc_vdpa(s);
+
+    if (!(dev->vq_index_end % 2)) {
+        /* vDPA device does not have CVQ */
+        return true;
+    }
+
+    if (dev->vq_index + 1 == dev->vq_index_end) {
+        /* We're evaluating CVQ, that must always enable first */
+        return true;
+    }
+
+    if (!vhost_vdpa_net_valid_svq_features(v->dev->features, NULL) ||
+        !cvq_s->cvq_isolated) {
+        /* CVQ is not isolated, so let's enable as usual */
+        return true;
+    }
+
+    return false;
 }
 
 static const VhostVDPAVirtIOOps vhost_vdpa_virtio_net_ops = {
-- 
2.39.3




* [PATCH 6/7] vdpa: enable cvq svq if data vq are shadowed
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (4 preceding siblings ...)
  2023-07-28 17:20 ` [PATCH 5/7] vdpa: delay enable of data vqs Eugenio Pérez
@ 2023-07-28 17:20 ` Eugenio Pérez
  2023-07-28 17:20 ` [PATCH 7/7] vdpa: remove net cvq migration blocker Eugenio Pérez
  2023-07-31  6:41 ` [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Jason Wang
  7 siblings, 0 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

Prior to this commit, it was assumed that data vqs could only be shadowed
with x-svq, or when a migration was in place.  So CVQ did not need to
check for migration.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 2c1cfda657..118837c6b9 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -514,11 +514,10 @@ static int vhost_vdpa_net_cvq_start(NetClientState *nc)
 
     s0 = vhost_vdpa_net_first_nc_vdpa(s);
     v->shadow_data = s0->vhost_vdpa.shadow_vqs_enabled;
-    v->shadow_vqs_enabled = s->always_svq;
+    v->shadow_vqs_enabled = v->shadow_data;
     s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
 
-    if (s->vhost_vdpa.shadow_data) {
-        /* SVQ is already configured for all virtqueues */
+    if (s->always_svq) {
         goto out;
     }
 
-- 
2.39.3




* [PATCH 7/7] vdpa: remove net cvq migration blocker
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (5 preceding siblings ...)
  2023-07-28 17:20 ` [PATCH 6/7] vdpa: enable cvq svq if data vq are shadowed Eugenio Pérez
@ 2023-07-28 17:20 ` Eugenio Pérez
  2023-07-31  6:41 ` [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Jason Wang
  7 siblings, 0 replies; 11+ messages in thread
From: Eugenio Pérez @ 2023-07-28 17:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jason Wang, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

Now that we add migration blockers when the device does not support all
the needed features, remove the general blocker applied to all net
devices with CVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 118837c6b9..cc3455d131 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -1443,18 +1443,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
         s->cvq_isolated = cvq_isolated;
-
-        /*
-         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
-         * there is no way to set the device state (MAC, MQ, etc) before
-         * starting the datapath.
-         *
-         * Migration blocker ownership now belongs to s->vhost_vdpa.
-         */
-        if (!svq) {
-            error_setg(&s->vhost_vdpa.migration_blocker,
-                       "net vdpa cannot migrate with CVQ feature");
-        }
     }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
-- 
2.39.3




* Re: [PATCH 0/7] Enable vdpa net migration with features depending on CVQ
  2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
                   ` (6 preceding siblings ...)
  2023-07-28 17:20 ` [PATCH 7/7] vdpa: remove net cvq migration blocker Eugenio Pérez
@ 2023-07-31  6:41 ` Jason Wang
  2023-07-31 10:15   ` Eugenio Perez Martin
  7 siblings, 1 reply; 11+ messages in thread
From: Jason Wang @ 2023-07-31  6:41 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: qemu-devel, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

On Sat, Jul 29, 2023 at 1:20 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> At this moment the migration of net features that depends on CVQ is not
> possible, as there is no reliable way to restore the device state like mac
> address, number of enabled queues, etc to the destination.  This is mainly
> caused because the device must only read CVQ, and process all the commands
> before resuming the dataplane.
>
> This series lift that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE
> ioctl for dataplane vqs only after the device has processed all commands.

I think it's better to explain (that is what I don't understand) why
we cannot simply reorder vhost_net_start_one() in vhost_net_start()?

    for (i = 0; i < nvhosts; i++) {
        if (i < data_queue_pairs) {
            peer = qemu_get_peer(ncs, i);
        } else {
            peer = qemu_get_peer(ncs, n->max_queue_pairs);
        }

        if (peer->vring_enable) {
            /* restore vring enable state */
            r = vhost_set_vring_enable(peer, peer->vring_enable);

            if (r < 0) {
                goto err_start;
            }
        }

=>      r = vhost_net_start_one(get_vhost_net(peer), dev);
        if (r < 0) {
            goto err_start;
        }
    }

Can we simply start cvq first here?

Thanks

> ---
> From FRC:
> * Enable vqs early in case CVQ cannot be shadowed.
>
> Eugenio Pérez (7):
>   vdpa: export vhost_vdpa_set_vring_ready
>   vdpa: add should_enable op
>   vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
>   vdpa: add stub vhost_vdpa_should_enable
>   vdpa: delay enable of data vqs
>   vdpa: enable cvq svq if data vq are shadowed
>   vdpa: remove net cvq migration blocker
>
>  include/hw/virtio/vhost-vdpa.h |  9 +++++
>  hw/virtio/vhost-vdpa.c         | 33 ++++++++++++----
>  net/vhost-vdpa.c               | 69 ++++++++++++++++++++++++++--------
>  hw/virtio/trace-events         |  2 +-
>  4 files changed, 89 insertions(+), 24 deletions(-)
>
> --
> 2.39.3
>
>




* Re: [PATCH 0/7] Enable vdpa net migration with features depending on CVQ
  2023-07-31  6:41 ` [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Jason Wang
@ 2023-07-31 10:15   ` Eugenio Perez Martin
  2023-08-01  3:48     ` Jason Wang
  0 siblings, 1 reply; 11+ messages in thread
From: Eugenio Perez Martin @ 2023-07-31 10:15 UTC (permalink / raw)
  To: Jason Wang
  Cc: qemu-devel, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

On Mon, Jul 31, 2023 at 8:41 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Sat, Jul 29, 2023 at 1:20 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > At this moment the migration of net features that depends on CVQ is not
> > possible, as there is no reliable way to restore the device state like mac
> > address, number of enabled queues, etc to the destination.  This is mainly
> > caused because the device must only read CVQ, and process all the commands
> > before resuming the dataplane.
> >
> > This series lift that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE
> > ioctl for dataplane vqs only after the device has processed all commands.
>
> I think it's better to explain (that is what I don't understand) why
> we can not simply reorder vhost_net_start_one() in vhost_net_start()?
>
>     for (i = 0; i < nvhosts; i++) {
>         if (i < data_queue_pairs) {
>             peer = qemu_get_peer(ncs, i);
>         } else {
>             peer = qemu_get_peer(ncs, n->max_queue_pairs);
>         }
>
>         if (peer->vring_enable) {
>             /* restore vring enable state */
>             r = vhost_set_vring_enable(peer, peer->vring_enable);
>
>             if (r < 0) {
>                 goto err_start;
>             }
>         }
>
> =>      r = vhost_net_start_one(get_vhost_net(peer), dev);
>         if (r < 0) {
>             goto err_start;
>         }
>     }
>
> Can we simply start cvq first here?
>

Well the current order is:
* set dev features (conditioned by
* Configure all vq addresses
* Configure all vq size
...
* Enable cvq
* DRIVER_OK
* Enable all the rest of the queues.

If we just start CVQ first, we need to modify vhost_vdpa_set_features
at a minimum.  A lot of code that depends on vdev->vq_index{,_end} may
be affected.

Also, I'm not sure if all devices will support configuring the vq
addresses, vq sizes, etc. after DRIVER_OK.

> Thanks
>
> > ---
> > From FRC:
> > * Enable vqs early in case CVQ cannot be shadowed.
> >
> > Eugenio Pérez (7):
> >   vdpa: export vhost_vdpa_set_vring_ready
> >   vdpa: add should_enable op
> >   vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
> >   vdpa: add stub vhost_vdpa_should_enable
> >   vdpa: delay enable of data vqs
> >   vdpa: enable cvq svq if data vq are shadowed
> >   vdpa: remove net cvq migration blocker
> >
> >  include/hw/virtio/vhost-vdpa.h |  9 +++++
> >  hw/virtio/vhost-vdpa.c         | 33 ++++++++++++----
> >  net/vhost-vdpa.c               | 69 ++++++++++++++++++++++++++--------
> >  hw/virtio/trace-events         |  2 +-
> >  4 files changed, 89 insertions(+), 24 deletions(-)
> >
> > --
> > 2.39.3
> >
> >
>




* Re: [PATCH 0/7] Enable vdpa net migration with features depending on CVQ
  2023-07-31 10:15   ` Eugenio Perez Martin
@ 2023-08-01  3:48     ` Jason Wang
  0 siblings, 0 replies; 11+ messages in thread
From: Jason Wang @ 2023-08-01  3:48 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, Gautam Dawar, si-wei.liu, Zhu Lingshan,
	Stefano Garzarella, Parav Pandit, Cindy Lu, Michael S. Tsirkin,
	Harpreet Singh Anand, Laurent Vivier, Shannon Nelson, Lei Yang,
	Dragos Tatulea

On Mon, Jul 31, 2023 at 6:15 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Mon, Jul 31, 2023 at 8:41 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Sat, Jul 29, 2023 at 1:20 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > At this moment the migration of net features that depends on CVQ is not
> > > possible, as there is no reliable way to restore the device state like mac
> > > address, number of enabled queues, etc to the destination.  This is mainly
> > > caused because the device must only read CVQ, and process all the commands
> > > before resuming the dataplane.
> > >
> > > This series lift that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE
> > > ioctl for dataplane vqs only after the device has processed all commands.
> >
> > I think it's better to explain (that is what I don't understand) why
> > we can not simply reorder vhost_net_start_one() in vhost_net_start()?
> >
> >     for (i = 0; i < nvhosts; i++) {
> >         if (i < data_queue_pairs) {
> >             peer = qemu_get_peer(ncs, i);
> >         } else {
> >             peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >         }
> >
> >         if (peer->vring_enable) {
> >             /* restore vring enable state */
> >             r = vhost_set_vring_enable(peer, peer->vring_enable);
> >
> >             if (r < 0) {
> >                 goto err_start;
> >             }
> >         }
> >
> > =>      r = vhost_net_start_one(get_vhost_net(peer), dev);
> >         if (r < 0) {
> >             goto err_start;
> >         }
> >     }
> >
> > Can we simply start cvq first here?
> >
>
> Well the current order is:
> * set dev features (conditioned by
> * Configure all vq addresses
> * Configure all vq size
> ...
> * Enable cvq
> * DRIVER_OK
> * Enable all the rest of the queues.
>
> If we just start CVQ first, we need to modify vhost_vdpa_set_features
> as minimum. A lot of code that depends on vdev->vq_index{,_end} may be
> affected.
>
> Also, I'm not sure if all the devices will support configure address,
> vq size, etc after DRIVER_OK.

Ok, so basically what I meant is to seek a way to refactor
vhost_net_start() instead of introducing new ops (e.g. introducing
virtio ops in vhost seems like a layering violation anyhow).

Can we simply factor VRING_ENABLE out so that we can enable vrings in
any order we want in vhost_net_start()?

Thanks

>
> > Thanks
> >
> > > ---
> > > From FRC:
> > > * Enable vqs early in case CVQ cannot be shadowed.
> > >
> > > Eugenio Pérez (7):
> > >   vdpa: export vhost_vdpa_set_vring_ready
> > >   vdpa: add should_enable op
> > >   vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
> > >   vdpa: add stub vhost_vdpa_should_enable
> > >   vdpa: delay enable of data vqs
> > >   vdpa: enable cvq svq if data vq are shadowed
> > >   vdpa: remove net cvq migration blocker
> > >
> > >  include/hw/virtio/vhost-vdpa.h |  9 +++++
> > >  hw/virtio/vhost-vdpa.c         | 33 ++++++++++++----
> > >  net/vhost-vdpa.c               | 69 ++++++++++++++++++++++++++--------
> > >  hw/virtio/trace-events         |  2 +-
> > >  4 files changed, 89 insertions(+), 24 deletions(-)
> > >
> > > --
> > > 2.39.3
> > >
> > >
> >
>




